As mentioned in my last blog post, there were four big changes planned: a shading system refactor, indirect lighting, per-tile subdivision, and tiled/mipmapped image caching. As usual, in practice such plans can change quickly in production, so here's an update on the plans and some results.
For indirect lighting, the intention was to use a point-based method, but I found the performance of these methods quite disappointing. Micro-rendering works well only with low-resolution grids, which gives noisy results; when increasing the resolution, performance suddenly isn't as good any more. That leads to the Pixar method, which is somewhat more complicated. It can scale better, but the render times in their paper aren't all that impressive; in fact, some raytracers can render similar images faster.
So, I decided to try out raytracing against low-resolution geometry, combined with irradiance caching. With the new and faster raytracing code this seems to be feasible performance-wise, and the code for this is now available in a separate render branch. Right now we're not yet entirely sure the performance is good enough, though we may be close to about 1 hour per 2K frame on average. This is with an irradiance cache, 2k rays per sample, low-resolution geometry, one bounce, one or a few lights, and tricks to approximate bump mapping. These are the restrictions that we'll probably work with.
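To illustrate why the irradiance cache makes this feasible at all, here is a minimal, hypothetical sketch of the idea (not Blender's actual code): expensive hemisphere sampling only happens at sparse points, and nearby shading points reuse those records through a weighted lookup. All names and the simple distance/normal weighting are assumptions for illustration.

```python
import math

class IrradianceCache:
    """Toy irradiance cache: reuse nearby diffuse lighting samples
    instead of tracing a full hemisphere of rays at every shading point."""

    def __init__(self):
        self.records = []  # list of (position, normal, irradiance, valid radius)

    def lookup(self, pos, normal):
        # Weighted average of records whose validity radius covers `pos`
        # and whose normal roughly agrees (a much-simplified error metric).
        total_w, total_e = 0.0, 0.0
        for rpos, rnorm, irr, radius in self.records:
            dist = math.dist(pos, rpos)
            ndot = sum(a * b for a, b in zip(normal, rnorm))
            if dist < radius and ndot > 0.9:
                w = 1.0 - dist / radius
                total_w += w
                total_e += w * irr
        return total_e / total_w if total_w > 0.0 else None

    def insert(self, pos, normal, irradiance, radius):
        self.records.append((pos, normal, irradiance, radius))

def shade(cache, pos, normal, sample_hemisphere):
    """Return irradiance at `pos`, doing the expensive many-ray
    hemisphere estimate only on a cache miss."""
    cached = cache.lookup(pos, normal)
    if cached is not None:
        return cached
    irr = sample_hemisphere(pos, normal)  # e.g. the 2k rays mentioned above
    cache.insert(pos, normal, irr, radius=0.5)
    return irr
```

In a real implementation the validity radius would come from the distances to surrounding geometry rather than a constant, and `sample_hemisphere` would trace rays against the low-resolution mesh; the point of the sketch is only that most shading points never pay for those rays.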
A prototype implementation of per-tile subdivision is also available for testing now. It's still very early though: it doesn't work with shadows, texture coordinates, raytracing, etc., but it's already possible to do some tests.
Regarding the shading system, I worked on node-based shaders for a while, but it became clear that the benefits for Durian wouldn't be that big and that it was taking too much time. Being able to define materials as nodes in a more correct and flexible way would be nice, but it wouldn't help us render better pictures much quicker; in fact, converting our existing materials would probably take more time.
The code for this is not committed yet, and will probably have to wait until after Durian. However, the core render engine code has been, and is still being, refactored so that we can easily plug in a different material system, which is a big chunk of the work.
Image caching I haven't really started on yet. I'm looking into using the OpenImageIO library, though it's not clear yet if this would be a good decision, as it would introduce another image system in Blender and duplicate much of the existing code. Besides that, images aren't the only thing that would benefit from this: there are also, for example, deep shadow buffers, which are not covered by OpenImageIO. Reusing code could save us a lot of time though.
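The core idea behind such a tiled/mipmap cache, whether we use OpenImageIO's or write our own, can be sketched in a few lines. This is a hypothetical toy version for illustration, not OpenImageIO's API: only the tiles a render actually touches get loaded, and a least-recently-used policy keeps memory bounded regardless of how many textures the scene references.

```python
from collections import OrderedDict

class TileCache:
    """Toy tiled/mipmapped image cache with LRU eviction."""

    def __init__(self, load_tile, max_tiles=64):
        self.load_tile = load_tile  # callback: (image, mip, tx, ty) -> pixels
        self.max_tiles = max_tiles  # memory budget, counted in tiles
        self.tiles = OrderedDict()  # (image, mip, tx, ty) -> pixels

    def get(self, image, mip, tx, ty):
        key = (image, mip, tx, ty)
        if key in self.tiles:
            self.tiles.move_to_end(key)  # mark as most recently used
            return self.tiles[key]
        data = self.load_tile(image, mip, tx, ty)  # disk read on a miss
        self.tiles[key] = data
        if len(self.tiles) > self.max_tiles:
            self.tiles.popitem(last=False)  # evict least recently used
        return data
```

A texture lookup would pick the mip level from the filter width and then fetch the one or few tiles it overlaps; a deep shadow buffer cache would look much the same, which is exactly the part OpenImageIO doesn't cover for us.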
Further plans are:
- Try to get indirect light rendering faster, and make it work better with strands, bump mapping, and displacement.
- Work further on the per tile subdivision implementation to get it beyond a prototype.
- Find out if we can use OpenImageIO, and either use it or implement our own tiled/mipmap cache system.