I’ve moved on to working on the rendering engine, which will be my main focus for the next 3 months. There’s still some fixing needed in the sculpt tools, but I won’t be making any big changes there anymore. The planned rendering development consists of four parts.
Shading System
The shading code will be refactored to make a clean separation between materials and lamps, and some corrections will be made to the current lighting calculations. But mostly the intention is a more modern system for designing materials: one that is not geared only to direct lighting from lamps, but also works well for indirect light. Nodes will be central to the way materials work, rather than something glued on top. It’s basically a merger between physically based rendering materials, which are designed for advanced lighting algorithms, and production rendering materials, which can do things like output passes or use tricks for speed.
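To make the idea concrete, here is a minimal sketch of what "nodes central to materials" can mean: BSDF nodes return closures (functions over lighting geometry) instead of final colors, so the same node graph can be evaluated for direct lamps or for indirect light. All names here are illustrative, not Blender's actual API.

```python
# Sketch: node-based materials as composable BSDF closures.
# A closure maps lighting geometry (here just N.L) to reflected color,
# so the integrator decides how to use it (direct or indirect light).
import math

def diffuse_bsdf(color):
    # Lambertian closure: reflectance independent of view direction.
    def closure(n_dot_l):
        return tuple(c * max(n_dot_l, 0.0) / math.pi for c in color)
    return closure

def mix_closure(a, b, fac):
    # Mix node: blend two BSDF closures by a factor.
    def closure(n_dot_l):
        ca, cb = a(n_dot_l), b(n_dot_l)
        return tuple((1 - fac) * x + fac * y for x, y in zip(ca, cb))
    return closure

# Build a small graph: 70% red diffuse mixed with 30% white diffuse.
mat = mix_closure(diffuse_bsdf((0.8, 0.1, 0.1)),
                  diffuse_bsdf((0.9, 0.9, 0.9)), 0.3)

# Evaluate for a light hitting the surface head-on (N.L = 1).
rgb = mat(1.0)
```

The point of returning closures rather than colors is exactly the merger described above: a physically based integrator can sample the closure, while a production feature can still inspect the graph to output per-node passes.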
The current design is on the wiki. We most likely won’t implement the full thing for Durian, but the intention is to implement the foundation and the parts that we use ourselves. Improved raytracing can then be implemented by others later.
Further, people have been asking about OpenCL for GPU acceleration. That is not something we’re planning; it wouldn’t be even remotely possible given the time constraints. The recently released Open Shading Language by Sony Imageworks would also be good to have, but a shading language is not something we can spend time on now, though I think what they are doing is in the same spirit: bringing together physically based and production rendering. I’m looking at their design to see how compatible we can be, so that someone can implement support for it later. It looks quite similar, the big difference of course being that we are building a node system while they’re making a shading language.
Indirect Diffuse Light
There’s already an incomplete implementation in trunk based on the approximate AO algorithm. We’ll try to extend this method to do proper shadowing. This could be done using either the recent micro-rendering algorithm (a bit simpler and more flexible) or the Pixar technique (proven to work). The main challenge here is keeping performance high enough: it is expected to be quite a bit slower, but hopefully still faster than raytracing, and it should work on scenes that don’t fit in memory.
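Both techniques share the same point-based idea: the scene is converted into a cloud of oriented disks (surfels), and indirect light at a shading point is gathered by accumulating each disk's radiance weighted by the solid angle it subtends. Here is a hedged brute-force sketch of that gathering step; a real implementation would cluster the surfels in an octree and rasterize, and all names here are made up for illustration.

```python
# Sketch: brute-force point-based gathering of indirect light.
# Each surfel is (position, radiance, area); the hierarchy/octree
# clustering that makes this fast in practice is deliberately elided.
import math

def solid_angle(area, dist):
    # Approximate solid angle subtended by a disk of given area.
    return area / (dist * dist + area / math.pi)

def gather_indirect(point, normal, surfels):
    result = 0.0
    for pos, radiance, area in surfels:
        d = [p - q for p, q in zip(pos, point)]
        dist = math.sqrt(sum(x * x for x in d))
        if dist == 0.0:
            continue
        # Cosine term: only disks above the horizon contribute.
        cos_theta = sum(x * n for x, n in zip(d, normal)) / dist
        if cos_theta <= 0.0:
            continue
        result += radiance * cos_theta * solid_angle(area, dist) / math.pi
    return result

# One bright surfel directly above the shading point contributes light;
# the same surfel below the horizon contributes nothing.
above = gather_indirect((0, 0, 0), (0, 0, 1), [((0, 0, 2), 1.0, 0.1)])
below = gather_indirect((0, 0, 0), (0, 0, 1), [((0, 0, -2), 1.0, 0.1)])
```

Proper shadowing is exactly what this naive sum gets wrong: nearby disks should occlude farther ones, which is where the micro-rendering rasterization or Pixar's hemisphere cube rasterization comes in.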
Disk Cache
Memory usage is a big problem, especially when rendering at 4K. The plan is to cache many of the render engine’s data structures to disk and load them only when needed. The main implementation issues are doing this efficiently with threads, and avoiding latency that kills render performance. There are many things that could be cached to disk; hopefully we can implement it for most of these:
- Image textures
- Shadow maps
- Multires Displacements
- Smoke/Voxel data
- SSS tree
- Point Based Occlusion/GI tree
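The core mechanism is the same for all of these: keep a bounded number of data blocks in memory, spill the least recently used ones to disk, and transparently reload them on access. A minimal single-threaded sketch of that idea, with hypothetical names (the real implementation would be in C and would need the threading and latency-hiding discussed above):

```python
# Sketch: LRU cache that spills evicted blocks to disk and reloads
# them on demand. Illustrative only; names are not Blender's.
import os
import pickle
import tempfile
from collections import OrderedDict

class DiskCache:
    def __init__(self, max_in_memory):
        self.max_in_memory = max_in_memory
        self.memory = OrderedDict()          # key -> data, in LRU order
        self.spill_dir = tempfile.mkdtemp()

    def _path(self, key):
        return os.path.join(self.spill_dir, "%s.blk" % key)

    def put(self, key, data):
        self.memory[key] = data
        self.memory.move_to_end(key)
        while len(self.memory) > self.max_in_memory:
            old_key, old_data = self.memory.popitem(last=False)
            with open(self._path(old_key), "wb") as f:
                pickle.dump(old_data, f)     # evict oldest block to disk

    def get(self, key):
        if key not in self.memory:
            with open(self._path(key), "rb") as f:
                self.put(key, pickle.load(f))  # reload on demand
        self.memory.move_to_end(key)
        return self.memory[key]

cache = DiskCache(max_in_memory=2)
for i in range(4):
    cache.put(i, [i] * 16)       # blocks 0 and 1 get spilled to disk
tile0 = cache.get(0)             # transparently reloaded from disk
```

The hard part the sketch ignores is exactly the stated challenge: with multiple render threads, reloads must happen asynchronously (e.g. prefetching blocks for upcoming tiles) so that disk latency doesn't stall rendering.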
Per Tile Subdivision
This is probably the most complex one: we want to subdivide meshes per tile, to render very finely displaced meshes. One challenge is that this requires a patch-based subdivision surface algorithm that does not need the full mesh in memory. The existing subdivision library could be modified to do this, but it would not be very efficient. Another possibility is to integrate the QDune Catmull-Clark code.
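The per-tile idea is that a patch is only diced into a micropolygon grid when a tile overlapping it is rendered, so the full finely displaced mesh never exists in memory at once. As a hedged sketch, here is dicing for a bilinear patch standing in for a Catmull-Clark patch (which is the actual hard part):

```python
# Sketch: dice one patch into a (rate+1) x (rate+1) micropolygon grid.
# A real renderer would evaluate Catmull-Clark limit patches and pick
# the rate from the patch's projected size in the current tile.

def lerp(a, b, t):
    return tuple(x + (y - x) * t for x, y in zip(a, b))

def dice_patch(corners, rate):
    # corners are the four points p00, p10, p01, p11 of a bilinear patch.
    p00, p10, p01, p11 = corners
    grid = []
    for j in range(rate + 1):
        v = j / rate
        row = []
        for i in range(rate + 1):
            u = i / rate
            row.append(lerp(lerp(p00, p10, u), lerp(p01, p11, u), v))
        grid.append(row)
    return grid

# Dice one unit patch at a 4x4 rate; displacement would then be
# applied to each grid point before shading.
grid = dice_patch(((0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0)), 4)
```

The grid cracks mentioned below come from adjacent patches choosing different dicing rates along a shared edge, which is why they are easier to handle if the rate selection is designed with stitching in mind from the start.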
Another problem is grid cracks, though perhaps these are not too difficult to solve if we take them into account from the start. A further concern is the filtering of multires displacements; this is quite a complicated problem, and if it doesn’t get solved we’ll need a simple workflow for baking multires to displacement maps. The existing displacement code also needs to be improved to do filtering properly.
Other issues are how to deal with threading, sharing diced patches between threads, and distributing objects/patches across tiles efficiently. It will be a fun challenge :).
For Interested Developers
Of course these are just plans, and we’ll see how far we get, though I hope we can do all of them. If you’re a developer interested in helping out, the disk caching would be a good project to pick up, as it doesn’t require much knowledge of the render engine. Approximate indirect diffuse lighting could also be a good thing to help with: most of the data structures are already there and the point cloud is already built, so it is mostly a matter of implementing the rasterization.