Sintel, the Durian Open Movie Project » Blog Archive » Features Wishlist


Features Wishlist

on September 10th, 2009, by Brecht

A first version of the features wishlist is now available. Interested developers are invited to pick up items from this list. For the non-trivial features, it’s best to contact me or the module maintainer when you start working on an item, so design and implementation can be discussed in advance. Note that this list can’t be considered complete yet; it will keep evolving throughout the project.

For the rest of September, developers here at the Blender Institute will still be occupied with Blender 2.5. This is the version that the artists will use, which will definitely help getting it ready for real-world usage. In October, work will start on Durian features, combined with bug fixing and adding missing things in 2.5. But that doesn’t stop other developers from working on interesting features already.

Some of the features on our lists are already in development:


Furthermore, there are more features currently in development that we didn’t put on our wishlist because they are already being worked on:

Smoke Simulation


43 Responses to “Features Wishlist”

  1. kram1032 Says:

    Yay 😀
    Didn’t see the hair cloth simulation yet… All the rest wasn’t new to me, but it’s all amazing! Great work 😀
    Also can’t wait to see them in action in Durian. 🙂

  2. ToastBusters Says:

    Thank you so much for putting the spline IK in there.

    Hair interacting with cloth has been on my wish list for a long time too. Keep up the great work guys!

  3. Oskar Says:

    The smokey-thingy looks awesomely-badass!

  4. brecht Says:

    @ToastBusters: it’s not about hair interacting with cloth, but rather about using the cloth solver to simulate hair. I’ve updated the post to make this clearer. Interaction with cloth should not need any particular setup if the hair is being simulated, as long as the cloth is a collision object. I’m not sure if this works now but if it doesn’t we may end up fixing it, it seems mostly a matter of running the simulators in the right order.

  5. francoistarlier Says:

    It would be nice to be able to add a composite inside the sequencer (as we can do with a scene), so we could take advantage of the luma waveform and chroma viewer.
    Tools to help you see what the color grading looks like are really missing in the compositor.

    just a thought though

  6. kram1032 Says:

    brecht, shouldn’t they more or less react to each other at the same time?
    (If that’s at all possible, programming-wise ^^)

  7. dusty Says:

    Is anyone working on the sculpting tools yet?

    I appreciate that it might not happen with this project, but what I would like to see is some 2.5D coding similar to that of ZBrush.

    I mean, ZBrush uses some pretty nifty code. I can only guess how it works: only processing the visible polys, and approximating sculpting on a higher level. I think it’s done by treating the entire model as a lower-subdivision mesh and then only processing the higher levels on the actual part being sculpted with a brush stroke.

    So when you rotate the model you don’t rotate the high-res one, just the low-res one, and then only redraw the bit that’s visible.

    Unnnnggnn, this is so hard to explain, but I think it’s more complex than I could ever know.

    Anyway, the rest of the wish list and WIPs are looking VERY exciting! Keep at it!

  8. Mats H Says:

    WOW, I love Blender and its community! The idea to use open content projects to develop the software is so extravagantly brilliant. Can’t wait to see this being implemented into Blender.

    Keep rocking!

  9. brecht Says:

    @kram1032: if the influence is only cloth -> hair, there is not much point in that, I think. For bidirectional influence you could interleave the solvers’ sub-frame steps as well, but I doubt this would make much difference in practice for hair and cloth.

    @dusty: we have plenty of strategies to improve sculpt performance; we’ll run out of time before we run out of ideas. Regarding ZBrush, I very much doubt the 2.5D pixol stuff really explains that much of the performance. It used to be important, but from what I’ve seen they’ve been moving away from it since version 2.
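    Brecht’s point about interleaving sub-frame steps can be sketched abstractly. In this toy model (all names invented, nothing Blender-specific), sequential stepping runs each solver’s substeps back to back, while interleaved stepping advances both systems together, which is what bidirectional influence would require:

    ```python
    def advance_frame(order_log, solvers, substeps):
        # One-way influence: each solver finishes all its substeps before
        # the next solver starts, so the second system only ever sees the
        # first system's end-of-frame state.
        for s in solvers:
            for _ in range(substeps):
                order_log.append(s)

    def advance_frame_interleaved(order_log, solvers, substeps):
        # Bidirectional influence: all solvers advance one substep together,
        # so each can read the other's state at sub-frame resolution.
        for _ in range(substeps):
            for s in solvers:
                order_log.append(s)

    log = []
    advance_frame_interleaved(log, ["cloth", "hair"], 2)
    # log is now ["cloth", "hair", "cloth", "hair"]
    ```

    The cost difference is small (same number of steps, different order), which matches Brecht’s remark that for hair and cloth it may not change the result much.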

  10. dusty Says:

    Yeah, I think my understanding of it is slightly off anyhow. But even so, I think the whole screen-depth thing is still key to how ZBrush works with so many polygons. That, and the fact that behind the scenes it still rotates and moves the model based on the low-poly subdivision levels, and makes it look like the high-poly levels on screen. There is a lot of approximation code in there, I think. And of course the brush strokes are the only part, I think, that is being processed on the higher levels; the rest is all trickery that works well. (That’s a really stupid statement, I apologise.)
    You are the one person I have absolute faith in to get Blender close to that level. But I understand it’s a dream of mine, as you guys are soooooo busy.
    Ton will probably read this and think, will that man please stop going on about ZBrush! 😉
    … actually, that’s a good idea. Being quiet now. Sorry. 🙂

  11. mcreamsurfer Says:

    I would definitely agree with francoistarlier:
    I think it would be very useful to be able to send a strip from the sequencer to the nodes and back. It’s already possible to send a scene with nodes to the sequencer, so why not the other way around too?

    I really appreciate all the stuff being developed for Blender without any of my requests 😉 Of course I could think of many, many features I’d love to have, but in all these years I have been more than glad to have a program like Blender and all its features, and that even for free. Blender is an awesome thing, not just because it is a free and cool program, but also because of its great community (coders, artists, ‘teachers’, and many more)… I just love that.

  12. francoistarlier Says:

    About hair simulation, I don’t know if this could be of any help, but two years ago I met some people at CNRS in Paris working on hair simulation. It was looking really, really realistic (L’Oréal signed with them right away).
    Here are the papers:

  13. Lancer Says:

    These are looking great! I would love it if bevel got a rewrite, though. A lot of my Maya friends judge Blender poorly for one reason: Blender’s bevel introduces triangles (e.g. at the corners). If bevel would only use a 1-to-3 edge-loop algorithm (currently one edge becomes two), then everything would stay as quads.

  14. Valdemir Pedro da Silva Says:

    We use Blender 3D in our company.

    Are there plans to implement a panel where I can see a lot of internal pre-made textures, like in 3D Studio?

    Blender helps us a lot in our job, and this would help even more.

    Sorry for my English, I hope you understood me. :-(

  15. Dwayne Says:

    How ’bout the ability to set a user preference for the strip blend mode in the Sequencer? I’m working on lots of little animations with .psd titles, and I have to change every one from “Replace” to “Alpha Over”.
    This is awesome, nonetheless. Yay for Blender. Spread the word about the Pre-Sale campaign, everyone!

  16. Loolarge Says:

    Could the curve twist fix be applied to bone segments as well, to avoid flipping? That would be awesome!

  17. Campbell Barton (ideasman42) Says:

    @Loolarge, yes, there are a number of options; so far I have minimum twist and bezier tangents.
    However, tangents generally give a lot of flipping, so I’ll try creating a tangent per knot and then doing minimum twist between each segment.
    Although I don’t want to spend too long on this; I’d like to get back into 3D painting tools.
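    For readers unfamiliar with the “minimum twist” idea Campbell mentions, its core is parallel transport: propagate a reference normal along the curve, rotating it at each knot only by the rotation that carries one tangent onto the next, so no extra twist is introduced. A minimal stand-alone sketch over a polyline (not Blender’s actual implementation; all names invented):

    ```python
    import math

    def normalize(v):
        n = math.sqrt(sum(c * c for c in v))
        return tuple(c / n for c in v)

    def cross(a, b):
        return (a[1]*b[2] - a[2]*b[1],
                a[2]*b[0] - a[0]*b[2],
                a[0]*b[1] - a[1]*b[0])

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    def transport_normal(points, normal0):
        """Propagate a normal along a polyline with minimal twist: at each
        knot, rotate the previous normal by the rotation that maps the
        previous tangent onto the next one (Rodrigues' formula)."""
        tangents = [normalize(tuple(b - a for a, b in zip(p, q)))
                    for p, q in zip(points, points[1:])]
        normals = [normalize(normal0)]
        for t0, t1 in zip(tangents, tangents[1:]):
            axis = cross(t0, t1)
            s = math.sqrt(dot(axis, axis))   # sin of the angle between tangents
            c = dot(t0, t1)                  # cos of that angle
            if s < 1e-12:                    # tangents parallel: no rotation
                normals.append(normals[-1])
                continue
            k = tuple(a / s for a in axis)   # unit rotation axis
            n = normals[-1]
            kxn = cross(k, n)
            kdn = dot(k, n)
            rotated = tuple(n[i]*c + kxn[i]*s + k[i]*kdn*(1 - c)
                            for i in range(3))
            normals.append(normalize(rotated))
        return normals
    ```

    On a straight polyline the normal never changes; around a bend it rotates exactly with the tangent, which is the “no flipping” behaviour wanted for curve deform and bone segments.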

  18. Grafixsuz Says:

    ideasman42 said “would like to get back into 3D painting tools.” Is there nothing you can’t do man?

  19. DrD Says:

    I think it’s important to note that the wanted-features list is small and there is nothing extraordinary on it. This is a great indicator of the quality of the features Blender currently offers.

  20. Thatonejondude Says:

    Ok let me start off by saying this…

    …Just my opinion as an animator.

    I use IPO curves, but I HATE THEM!!

    …If you guys could develop a way to set the ease in/out without leaving the action editor, I think that would make the animation process so much easier on the animator. I’m not saying get rid of the IPO editor, because there are some things that you probably couldn’t do without it. But as far as easing in and out goes, I think it would be so convenient if you could just set the ease in/out within the action editor.

    Does this sound like something you guys would use in the Durian Project? I would definitely use it, because I love the Action Editor and I hate leaving it.


  21. Aligorith Says:

    I’ve been experimenting with a few ideas for this which you may see sometime in the near future. (Hint: traditional spacing diagrams look pretty attractive as widgets somewhere 😉 )

  22. Gianmichele Says:

    @Thatonejondude: I suggest you download a version of Blender 2.5 and start playing with the new F-Curves editor (yes, you read that right).
    You can now use Euler rotations and set the rotation order. Not to mention the new NLA… OHHHHHH. Aligorith is doing an insane amount of work and, believe me (I’m an animator too), it’s really super!

  23. Agus Says:

    Brecht: The list doesn’t show the micropolygon or per-tile subdivision rendering.
    Have you or somebody else started implementing some approach to this need?

    I am doing some research on it and I would like to participate in the development, or maybe make a start. I am an experienced 3D developer, but I am only now getting familiar with the Blender code.

  24. brecht Says:

    This is not the list of our development targets; those are more ambitious (see the blog post about that). It is a list of things we are asking for help with now. I didn’t add things to it that are already being worked on, or for which I already know who will work on them. Furthermore, it is still incomplete.

    @Agus: we do intend to work on that (not necessarily micropolygons, but definitely per-tile subdivision). I didn’t ask for help on those yet because I still have to split up the tasks and get a picture of the design; the list is intended to contain clearly delimited projects. But feel free to mail me about any feature you are interested in working on, regardless of whether it is on the list.

  25. Thatonejondude Says:

    Wow! F-Curves are amazing to work with!! 😀

    I like! haha

  26. Matt Says:

    Hi guys,

    I started taking a few lens flare example shots with some different lenses to see how they react; it might give whoever wants to work on this some inspiration/ideas. I might add some more too, as I’ve got some other lenses to try as well.


  27. max Says:


    But what about the modeling features? Like multires optimization, etc.?

  28. MTracer Says:

    I’m a little… not upset… but a little negative, we’ll say, about the multires modifier. It isn’t really a modifier; it’s just in the modifier stack. I have an idea that should be easily doable with some simple tweaks.

    Basically, just let the modifier hold the layer information. All the modifier itself does is change the layer of multires manipulated by the modifiers below it. You would have to put the multires data somewhere other than the modifier; the modifier itself would not be able to add new subdivision layers or edit them. It would just change which layer is currently in use by the stack. Actually, maybe just leave the current modifier as is and add a new one, something like a “Multires Stack Modifier”.

    For example, the top multires modifier would do just as it currently does; then another multires modifier selects a lower level of subdivision. An armature modifier below that would edit as though sculpting at low res. Then, a third multires modifier changes the subdivision level higher, propagating the changes that the armature modifier made to the other levels. A lattice modifier then edits that data. A fourth multires modifier changes to whatever subdivision level you want in the viewport, by means of driving the subdivision level with a control bone. Finally, a fifth multires modifier, set to only affect the render, bumps the resolution to maximum.

    So basically, just let the modifiers edit different subdivision levels, and change them within the stack.
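    The stack idea above can be reduced to a toy evaluator (all class and field names invented; this is not Blender’s modifier API). “Select” modifiers hold no geometry and only switch which subdivision level later modifiers act on, while the level data itself lives on the mesh:

    ```python
    class MultiresData:
        """Owns all subdivision levels of a mesh; level 0 is the base.
        For simplicity, each level is just a list of 1D vertex positions."""
        def __init__(self, levels):
            self.levels = levels

    class SelectLevel:
        """The proposed 'multires stack' modifier: it only changes which
        level the modifiers below it operate on."""
        def __init__(self, level):
            self.level = level
        def apply(self, state):
            state["level"] = self.level

    class Deform:
        """Stand-in for an armature/lattice modifier: offsets the currently
        selected level. (Propagating that edit to the other levels is the
        hard part, and is omitted here.)"""
        def __init__(self, offset):
            self.offset = offset
        def apply(self, state):
            lvl = state["level"]
            data = state["data"]
            data.levels[lvl] = [v + self.offset for v in data.levels[lvl]]

    def evaluate(data, stack):
        """Run the stack top to bottom; return the finally selected level."""
        state = {"data": data, "level": 0}
        for mod in stack:
            mod.apply(state)
        return state["data"].levels[state["level"]]
    ```

    For example, `evaluate(data, [SelectLevel(0), Deform(5.0), SelectLevel(1)])` deforms the base level and then hands level 1 to whatever comes next, which is the per-level editing the comment describes.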

  29. bob2 Says:

    This caught my attention: “Make all nodes work in percentages relative to the image size”

    Shouldn’t it be specified according to world measurements?

    If you do it with a percentage of the final image, weird things could happen when you zoom.

  30. brecht Says:

    @max: check my earlier comment.

    @MTracer: you mix up features and internal implementation, which makes it hard to understand what functionality you want. But yes, support could be added for multiple multires modifiers in the stack, though making this work well is not easy at all, because it means the multires/sculpting code would have to start taking into account all the modifiers in between.

    @bob2: This is about compositing nodes, when you work with images. There are use cases for world space too, but it depends on what you want to do; I think doing things relative to image size is the right default for compositing.

  31. leodp Says:

    A stereo camera would be nice. I guess it is going to be quite easy to implement: you only need camera separation and convergence (and a way to store the rendered images, and a way of deciding which camera view to enable during development…).
    Maybe full-resolution rendering can’t be included, but even a lower-resolution 3D video can have some impact.
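    The two parameters leodp mentions are enough for a toy rig. A hedged sketch (invented function, not Blender API) of a toe-in stereo pair: offset two cameras by half the interaxial distance and angle each so the view axes cross at the convergence distance. A production rig would more likely use a parallel, off-axis projection instead of toeing in, to avoid vertical parallax:

    ```python
    import math

    def stereo_pair(cam_pos, interaxial, convergence_dist):
        """Return (left_pos, right_pos, toe_in_angle) for a toe-in rig.

        cam_pos: (x, y, z) of the rig center; cameras are offset along x.
        interaxial: distance between the two cameras (~0.065 m for eyes).
        convergence_dist: distance at which the two view axes cross.
        """
        half = interaxial / 2.0
        toe_in = math.atan2(half, convergence_dist)  # radians, per camera
        x, y, z = cam_pos
        return (x - half, y, z), (x + half, y, z), toe_in
    ```

    For example, a 6.5 cm interaxial converging 2 m away gives camera offsets of ±3.25 cm and a toe-in of well under a degree per camera.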


  32. dusty Says:

    Sorry!! Forgot to add in my earlier comment!


    What I’ve seen of 2.5 so far looks awesome!!!!!

  33. Matt Says:

    Scratch my earlier post about lens flares, this page kicks its arse!

  34. joeri67 Says:

    With 3D (stereo) TV being the big new thing, one should wonder if it isn’t about time to stop mimicking faulty film camera lenses…

    Or at least think about what this will mean for 3D TV.
    Where in 3D space is the lens flare? Where in 3D space is the border distortion?
    All the currently created “hip” distorted video or film will be impossible to convert to 3D display media, rendering it useless over time 😛

    Stop making 2D features; the future is stereo 3D.

  35. joeri67 Says:

    @bob2, who wrote “Shouldn’t it be specified according to world measurements?”

    If you apply a blur to an image, you’d want the blur effect to feel the same whatever the input image is scaled to.
    So if I set a blur of 4 pixels on a 256×256 image, the blur should be 8 pixels on a 512×512 image. Or maybe even something smarter so the blur ‘looks’ the same; I’m not sure if that value is linear for all filters.
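    joeri67’s example is exactly what the wishlist item “make all nodes work in percentages relative to the image size” would buy: store the blur radius as a fraction of the image width and resolve it to pixels per render. A minimal sketch (invented function name):

    ```python
    def resolve_blur_radius(relative_radius, image_width):
        """Map a resolution-independent radius (fraction of the image
        width) to a pixel radius for the actual render size."""
        return relative_radius * image_width

    # Author the blur once on a 256-wide preview as 4 px -> store 4/256.
    rel = 4 / 256
    # The same setting then scales with the output resolution:
    # resolve_blur_radius(rel, 256) == 4.0
    # resolve_blur_radius(rel, 512) == 8.0
    ```

    Whether the result also *looks* the same depends on the filter, as joeri67 notes; for a simple box or Gaussian blur, scaling the radius linearly with resolution is the usual approximation.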

  36. jpbouza Says:

    This is great!

    About multires: will some kind of compression method be added to the multires data? Because when you work with 4-million-polygon models the blend file gets really big, like 500 MB, and if we are going to be able to handle more polygons with 2.5, the file size could get huge!

    With such a big file size, I think it is more practical to have a 100 MB displacement map than all the multires data in your model. My guess is that you will be implementing some kind of workaround for this issue, won’t you?

  37. MTracer Says:


    Vector displacement! Instead of a grayscale value driving displacement along a normal, make the RGB values represent three different axes of displacement. You could have:

    R: normal of the mesh
    G: U of UV space converted into 3D space for the same area on the model
    B: V of UV space converted into 3D space for the same area on the model

    And thus, as long as the topology remains the same, any shape can be achieved through displacement maps. Even complex shapes, like a hair stretching out and curving over the unsubdivided surface, could be created.

    Just an idea.
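    MTracer’s mapping can be written down directly: treat each texel’s RGB as coordinates in the surface’s local frame (normal, tangent-U, tangent-V). A minimal sketch with invented names, skipping the real tangent-space computation:

    ```python
    def displace(point, frame, rgb):
        """Vector displacement of one surface point.

        frame: (normal, tangent_u, tangent_v), three unit 3-vectors forming
               the local basis at this point (tangents from UV directions).
        rgb:   (r, g, b) texel value, interpreted as displacement amounts
               along normal, tangent_u and tangent_v respectively.
        """
        normal, tan_u, tan_v = frame
        r, g, b = rgb
        return tuple(p + r * n + g * u + b * v
                     for p, n, u, v in zip(point, normal, tan_u, tan_v))

    # Grayscale displacement is the special case g == b == 0:
    # displace(p, frame, (height, 0.0, 0.0)) moves p along the normal only.
    ```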

  38. brecht Says:

    @jpbouza: yes, this is an issue we’ll have to solve.

    @MTracer: it’s already possible with the displacement modifier. But you are limited by mesh resolution, and even in RenderMan with adaptive subdivision the scenario you describe would be impractical to use beyond a tech demo.

  39. joeri67 Says:

    more inspiration…

  40. sozap Says:

    Hello !

    Blender is getting so great , I wish you all the best for Durian !!

    Have you thought about a kind of proxy system (like in the sequencer) for dupligrouped objects?
    For example, you have a scene with various high-poly/complex objects/characters.
    Maybe while you’re animating one character, you want to see the others, and the rest of the scene, in a low-poly/simplified version.

    At render time, you can choose to bring back the final version of all the objects, or to keep everything as you’ve set it in the 3D view. That allows really fast preview renders of complex scenes.

    I’ve heard that this technique is used in some big studios.
    I think when you have a big project it’s a must. Very complex scenes can become as light as a video game to work with.

    The bad side of it is that you have to do extra work to make the low-poly versions of each object, but that’s not so much.

    I’ve managed to make a Python script that does this (if you want, I can send it to you), but it would be far better if it were included directly in the group system.

    When you say “Proxies will need to be improved, for example to support multiple instances of a character with variations.”

    I think, for example, of a warrior in different versions, like one with a helmet, one without armor, etc…

    In fact I think both features are interesting.

    Thank you very much for your work, it’s already so amazing all the things that blender can do !!
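    sozap’s workflow boils down to a tiny lookup: each asset carries a high-poly and a low-poly version, and the render/viewport path decides which one to instance. Everything here (class, function, names) is invented for illustration, not sozap’s actual script:

    ```python
    class ProxyAsset:
        """A dupligroup-able asset with a final and a simplified version."""
        def __init__(self, name, high, low):
            self.name = name
            self.high = high   # e.g. the full-res group/mesh name
            self.low = low     # e.g. a decimated stand-in

    def resolve(assets, for_render):
        """Map each asset name to the version that should be instanced:
        low-poly stand-ins in the viewport, final versions at render time."""
        return {a.name: (a.high if for_render else a.low) for a in assets}

    # Viewport preview uses the stand-ins; the final render swaps them back:
    scene = [ProxyAsset("warrior", "warrior_hi", "warrior_lo")]
    # resolve(scene, for_render=False) -> {"warrior": "warrior_lo"}
    # resolve(scene, for_render=True)  -> {"warrior": "warrior_hi"}
    ```

    The extra work sozap mentions is producing the `low` versions; the swap itself is cheap, which is why very heavy scenes can stay interactive.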


  41. Jayden Beveridge Says:

    Hello! I had an idea for real-life camera recording. Why don’t you hook up an accelerometer to Blender and use that data to drive the camera? Then you could just tape it to a real camera, move around, and see on screen what is happening. If you could, you could also attach a small screen to whatever you are using as the camera rig with the accelerometer attached, hook that screen up to your computer, and then, while moving around, see what is happening, like on a real video camera. Good idea. Shouldn’t be too hard.
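    For what it’s worth, the naive version of this idea is dead reckoning: double-integrate the accelerometer samples to get a camera path. This is a deliberately simplistic sketch; in practice raw accelerometer integration drifts within seconds, which is why real camera-tracking rigs add a gyroscope and/or optical tracking:

    ```python
    def integrate_path(samples, dt):
        """Double-integrate acceleration samples (ax, ay, az) into
        positions, assuming the camera starts at rest at the origin
        (simple Euler steps)."""
        vel = [0.0, 0.0, 0.0]
        pos = [0.0, 0.0, 0.0]
        path = []
        for a in samples:
            for i in range(3):
                vel[i] += a[i] * dt    # acceleration -> velocity
                pos[i] += vel[i] * dt  # velocity -> position
            path.append(tuple(pos))
        return path
    ```

    A single 1 m/s² push along x for one step leaves the virtual camera coasting forever afterwards, and a constant sensor bias would accumulate the same way, which is exactly the drift problem.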

  42. mwd Says:

    Oh, I see people going wish-crazy here, so I’ll add some wishes of my own. 🙂
    As mcreamsurfer said, sending a strip from the sequencer to nodes and back would be superb. At the moment I’m trying to put together about 100 short clips into one short movie, and I’d like to add some effects to some of them. I’d love to see an option to get a strip (or a sequence of strips I’d like to add the effect/nodes to) or a single clip from the sequencer as an Input node in the compositor (with an Output node “Replace Clip/Strip”), or maybe an effect which would link the clip/strip back to the sequencer.
    Anyway, I keep my fingers crossed for Durian; the features to come are great!

  43. Mohamad Fadhil Says:

    I think the ‘Adaptive Subdivision’ and micropolygon features (which were developed by KaiStock here: ) should be used for this movie.

    Video sample here:

    I think you could create very highly detailed mountains using this feature 🙂