Sintel, the Durian Open Movie Project » Blog Archive » Facial Test


Facial Test

on December 13th, 2009, by nathan

Just a quick post. I’ve been messing around a bit with facial rigging.

The tricky bit is that it needs to fit within the framework of the auto-rigging system: it has to be fast to set up for background characters without much manual work, yet flexible enough to let us get really top-notch results for the main characters.

After discussions with Angela and some tests, I’ve settled on a hybrid approach between bones and shape keys. The idea is that even with only bones, the face rig should work well enough for background characters. But it will be supplemented with shape keys for main characters. Things like lip roll, nostril flare, squinting around the eyes, etc. will be done with shape keys. Corrective shape keys will also be used to preserve volume and define creases better.
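In Blender, corrective keys like these are typically wired to the deform bones with drivers, so a key fires automatically as the face moves. As a rough sketch of the mapping such a linear driver performs — the 0–30° range is a made-up example, not a value from the Durian rig:

```python
import math

def corrective_weight(bone_angle_rad, start_deg=0.0, full_deg=30.0):
    """Map a bone rotation to a 0..1 corrective shape key value.

    The key fades in linearly as the bone rotates from start_deg
    to full_deg, and clamps outside that range -- the same shape
    a simple linear driver F-curve produces.
    """
    angle_deg = math.degrees(bone_angle_rad)
    t = (angle_deg - start_deg) / (full_deg - start_deg)
    return max(0.0, min(1.0, t))
```

In an actual rig the input would come from a driver variable reading the bone's rotation, and the result would feed the shape key's value; the point is just that the corrective fires proportionally to the pose, with no hand animation on the key itself.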

This is a test of the bone-only setup for the mouth area:


It’s not perfect, and the facial expressions don’t necessarily match her character, but it’s a good proof-of-concept, I think.


25 Responses to “Facial Test”

  1. tinonetic Says:

Nice to see the progress. Can't wait to dig into the files. You guys work well as a team.

    I’m really studying your approach with the hope of applying it to my own lil project.

  2. JoOngle Says:

    Yeah, not bad at all. Nice deformations too.

  3. Gu4r@ Says:

    Very nice!

  4. D Says:

    The smile was most impressive, and the frown was the least natural-looking. 🙂 Great job so far; I’d like to see more of the autorigger when it comes time.

  5. Linkeltje Says:

    Nice to see her coming alive!
    Can’t wait to see the soul of the eyes 😉

  6. PaulK Says:


    Thanks for the ogg version!

  7. Andrew Fenn Says:

    I think this could be more organic if the top part of the head had more movement because right now it looks a little like a robot.

    As an exercise try doing the biggest smile you have ever done. You should notice that your whole head moves and not just the lower part. I think that small movements would make the whole thing look more natural.

  8. tinonetic Says:

    Andrew, it's a “test of the bone-only setup for the mouth area”.

  9. JiriH Says:

    Nice job Nathan. If this is a bones-only rig without any shape keys, then big congratulations. And I am even more impressed that this should be included in the Blender 2.5 auto-rigging system! Blender with FaceRobot and BipedRig features 🙂

    If the main character has this combined with shape keys, then it will be a very useful workflow for real studio production.

    If you would like to combine this with single-bone influence for detail-level control, check my PM at

    Good luck

  10. MeshWeaver Says:

    cool! 😀 so, shape keys and bones… it seems I’ll need stuff like that 😀

    umm, aren’t *.ogv sound files? (in Audacity there’s a format called Ogg Vorbis, so…) do you use Blender to export videos into that format?

    can’t wait to see the final rig/setup :-DDD

    GO DURIAN/SINTEL TEAM!!! (is that getting old? I’ve said it a lot…lol)

  11. MmAaXx Says:

    good! 🙂

  12. nathan Says:

    @Andrew Fenn:
    As tinonetic pointed out, this is just a test of the mouth area in isolation.

    Just to nit-pick: there isn’t a Blender 2.5 auto-rigging system. Rather, there is a separate set of Python scripts being written to do auto-rigging for Durian. It is not intended to be a built-in feature. But, of course, it is being made with the intent of being useful to people after Durian as well.

    OGV is for “ogg video”. It’s the default file extension that ffmpeg2theora outputs with. I just stick with that.
    Technically, I could just use *.ogg instead. But as you pointed out, that’s most associated with ogg audio.

  13. Aman b Says:

    Nice work Nathan.

    I can only do this in my dreams.

    Can you make at least one tutorial on your work? Plzzzzz! Plz Plz…

    I’m waiting. 🙂

    -Aman b

  14. JiriH Says:

    Python plugin or built-in feature really does not matter when talking about 2.5. I guess a Python plugin is much better than a built-in feature for this kind of thing.

  15. nathan Says:

    Well, the difference is whether it’s an officially maintained part of Blender.

    There are many ways one could build an auto-rigging system, and they all suit different use-cases. I don’t think it’s really appropriate to have ours be an officially maintained part of Blender. Especially since we have yet to see how it holds up over the course of our production (I’m sure we’ll find a lot of mistakes we made, and discover ways that we could have done things better).

  16. dmos Says:

    “… we still have yet to see how it holds up over the course of our production (I’m sure we’ll find a lot of mistakes we made, and discover ways that we could have done things better).”

    What do they say, “the second piece is always better” (because everybody learns from their mistakes).

    It would be very, very helpful if the team members could publicly document all the things that they would do differently, and why. Although it’s already an outstanding opportunity to learn from the blend files etc., it’s difficult to see from the outside both the tradeoffs and the experience of actually living with those tradeoffs through to the final product.

    I’m pretty sure continued development/refinement of the released python scripts & utilities would get a boost if there’s a review (20/20 hindsight report) based on production experience.

    Just my thoughts. And many, many thanks to all the people involved with Durian, but also with Blender development in general. Progress is absolutely amazing on the creative as well as on the code front.

  17. Rudiger Says:

    It’s great to hear you’ve decided on a bones / shape-key hybrid approach. That will mean that your animators will be able to achieve exactly the result they want without endless tweaking to the rig.

  18. TheANIMAL (marcus) Says:

    Fantastic work so far, even for a test everything is looking very nice.

    What I appreciate most now is how you aren’t pussyfooting around with workarounds for limitations of Blender (remember the issue of moving grass on BBB ;))

    Now you’re getting tools to do the job that are as good as any other program’s, and you’re getting promising results very fast.

  19. yoff Says:

    This looks promising and I am very curious about rigify 🙂

  20. Bao2 Says:

    In Max we used to create a very simple mesh from the character and then create the shape keys for that mesh. This simple mesh then drives the characters using shrinkwrap, so you have one mesh, and with shrinkwrap you can drive every character you want (after some correction of proportions if needed). But I haven’t tried yet whether this works in Blender. Sorry if not.

  21. Blender for president Says:

    Maybe it’s me, but what I find so cool is that all of this is being done with FREE software! We are all being moved in someway by this open source experience. There’s like this kind of organic movement that’s just asking the individual to dream! Dude this rocks!

  22. Bao2 Says:

    I was wrong: what I described above is not shrinkwrap but mesh deform. In Max it is called shrinkwrap, but in Blender it is the Mesh Deform modifier.
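The cage workflow described in the two comments above can be sketched in plain Python. This is a toy illustration, not Blender code: the cage vertices, shape key data, and binding weights are made-up values, and in Blender the binding weights would come from the Mesh Deform modifier's Bind step rather than being written by hand:

```python
def evaluate_cage(basis, shape_keys, values):
    # Blend shape keys on the low-res cage mesh the Blender way:
    # per vertex, result = basis + sum(value * (key - basis)).
    out = []
    for vi, base in enumerate(basis):
        x = list(base)
        for name, val in values.items():
            key = shape_keys[name]
            for c in range(3):
                x[c] += val * (key[vi][c] - base[c])
        out.append(tuple(x))
    return out

def deform_point(binding, cage_points):
    # Move one high-res vertex as a fixed weighted sum of cage
    # vertices -- a crude stand-in for the harmonic-coordinate
    # weights the Mesh Deform modifier computes when you Bind.
    x = [0.0, 0.0, 0.0]
    for vi, w in binding:
        for c in range(3):
            x[c] += w * cage_points[vi][c]
    return tuple(x)
```

Because the binding is fixed per character, the same cage (and its shape keys) can drive several different high-res meshes, which is the reuse the comment is describing.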

  23. Andrei Thorp Says:

    Wow, very cool work. Even for a basic test, that looks great!

  24. Extensor Says:

    Hey Nathan, can u warn a brutha when you post a super long clip? thanks. 😉

  25. bruno Says:

    Amazing work!!!

    I found this paper about a model of the human iris for realistic rendering. Can you guys have a look and maybe use it for Durian?

    Sample video: