Facial Test
on December 13th, 2009, by nathan

Just a quick post. I’ve been messing around a bit with facial rigging.
The tricky bit is that it needs to fit within the framework of the autorigging system, be fast to set up for background characters without much manual work, and still be flexible enough to let us get really top-notch results for main characters.
After discussions with Angela and some tests, I’ve settled on a hybrid approach between bones and shape keys. The idea is that even with only bones, the face rig should work well enough for background characters. But it will be supplemented with shape keys for main characters. Things like lip roll, nostril flare, squinting around the eyes, etc. will be done with shape keys. Corrective shape keys will also be used to preserve volume and define creases better.
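For readers who want to poke at the idea themselves, here’s a minimal sketch of driving a corrective shape key from a bone’s rotation with a driver. This is not the actual Durian setup: the names (“Face”, “rig”, “jaw”) are made up, the rotation threshold is arbitrary, and it uses the current bpy API rather than the exact 2.5-era calls:

```python
import bpy

# Hypothetical names -- substitute your own mesh, armature, and bone.
face = bpy.data.objects["Face"]  # mesh object with the facial geometry
rig = bpy.data.objects["rig"]    # armature object containing the face bones

# Make sure a basis key exists, then add the corrective shape
# (sculpted to preserve volume and define creases).
if face.data.shape_keys is None:
    face.shape_key_add(name="Basis", from_mix=False)
corrective = face.shape_key_add(name="jaw_open_corrective", from_mix=False)

# Drive the shape key's value from the jaw bone's local X rotation.
fcurve = corrective.driver_add("value")
driver = fcurve.driver
driver.type = 'SCRIPTED'

var = driver.variables.new()
var.name = "jaw_rot"
var.type = 'TRANSFORMS'
target = var.targets[0]
target.id = rig
target.bone_target = "jaw"
target.transform_type = 'ROT_X'
target.transform_space = 'LOCAL_SPACE'

# Map roughly 40 degrees (~0.7 rad) of jaw rotation onto the full
# 0..1 range of the shape key, clamped at both ends.
driver.expression = "min(max(jaw_rot / 0.7, 0.0), 1.0)"
```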
This is a test of the bone-only setup for the mouth area:
It’s not perfect, and the facial expressions don’t necessarily match her character, but it’s a good proof-of-concept, I think.
–Nathan
December 13th, 2009 at 4:03 pm
Nice to see the progress. Can’t wait to dig into the files. You guys work well as a team.
I’m really studying your approach with the hope of applying it to my own little project.
December 13th, 2009 at 4:04 pm
Yeah, not bad at all. Nice deformations too.
December 13th, 2009 at 4:04 pm
Very nice!
December 13th, 2009 at 4:14 pm
The smile was most impressive, and the frown was the least natural-looking. 🙂 Great job so far; I’d like to see more of the autorigger when it comes time.
December 13th, 2009 at 4:43 pm
Nice to see her coming alive!
Can’t wait to see the soul of the eyes 😉
December 13th, 2009 at 4:35 pm
Great!
Thanks for the ogg version!
December 13th, 2009 at 5:46 pm
I think this could be more organic if the top part of the head had more movement, because right now it looks a little like a robot.
As an exercise, try doing the biggest smile you’ve ever done. You should notice that your whole head moves, not just the lower part. I think those small movements would make the whole thing look more natural.
December 13th, 2009 at 6:06 pm
Andrew, it’s a “test of the bone-only setup for the mouth area”.
December 13th, 2009 at 6:08 pm
Nice job, Nathan. If this is a bones-only rig without any shape keys, then big congratulations. And I am even more impressed that this should be included in the Blender 2.5 autorigging system! Blender with FaceRobot and BipedRig features 🙂
If a main character has this combined with shape keys, then this will be a very useful workflow for real studio production.
If you would like to combine this with single-bone influence for detail-level control, check my PM at BlenderArtists.org.
Good luck
JiriH
December 13th, 2009 at 6:46 pm
Cool! 😀 So, shape keys and bones… it seems I’ll need stuff like that 😀
Umm, aren’t *.ogv files sound files? (In Audacity there’s a format called Ogg Vorbis, so…) Do you use Blender to export videos into that format?
Can’t wait to see the final rig/setup :-DDD
GO DURIAN/SINTEL TEAM!!! (is that getting old? I’ve said it a lot…lol)
December 13th, 2009 at 6:58 pm
good! 🙂
December 13th, 2009 at 7:26 pm
@Andrew Fenn:
As tinonetic pointed out, this is just a test of the mouth area in isolation.
@JiriH:
Just to nit-pick: there isn’t a Blender 2.5 auto-rigging system. Rather, there is a separate set of Python scripts being written to do auto-rigging for Durian. It is not intended to be a built-in feature. But, of course, it is being made with the intent of being useful to people after Durian as well.
@MeshWeaver:
OGV stands for “Ogg video”. It’s the default file extension that ffmpeg2theora outputs, so I just stick with that.
Technically, I could use *.ogg instead. But as you pointed out, that extension is mostly associated with Ogg audio.
December 13th, 2009 at 8:09 pm
Nice work, Nathan.
I can only do this in my dreams.
Can you make at least one tutorial on your work? Plzzzzz! Plz plz…
I’m waiting. 🙂
-Aman b
December 13th, 2009 at 8:57 pm
Python plugin or built-in feature really does not matter when talking about 2.5. I guess a Python plugin is much better than a built-in feature for this kind of thing.
December 13th, 2009 at 9:24 pm
Well, the difference is whether it’s an officially maintained part of Blender.
There are many ways one could build an autorigging system, and they all suit different use-cases. I don’t think it’s really appropriate to have ours be an officially maintained part of Blender. Especially since we still have yet to see how it holds up over the course of our production (I’m sure we’ll find a lot of mistakes we made, and discover ways that we could have done things better).
December 13th, 2009 at 11:08 pm
“… we still have yet to see how it holds up over the course of our production (I’m sure we’ll find a lot of mistakes we made, and discover ways that we could have done things better).”
What do they say, “the second piece is always better” (because everybody learns from their mistakes).
It would be very, very helpful if the team members could publicly document all the things they would do differently, and why. Although the blend files etc. are already an outstanding opportunity to learn from, it’s difficult from the outside to see the tradeoffs, let alone the experience of actually living with those tradeoffs through to the final product.
I’m pretty sure continued development/refinement of the released Python scripts & utilities would get a boost from a review (a 20/20-hindsight report) based on production experience.
Just my thoughts. And many, many thanks to all the people involved with Durian, but also with Blender development in general. Progress is absolutely amazing on the creative as well as on the code front.
December 14th, 2009 at 12:55 am
It’s great to hear you’ve decided on a bones/shape-key hybrid approach. That means your animators will be able to achieve exactly the result they want without endless tweaking of the rig.
December 14th, 2009 at 1:48 am
Fantastic work so far; even for a test, everything is looking very nice.
What I appreciate most now is how you aren’t pussyfooting around issues that are basically ways of getting around limitations of Blender. (Remember the issue of moving grass on BBB ;))
Now you’re getting tools to do the job that are as good as any other program’s, and you’re getting promising results very fast.
December 14th, 2009 at 3:11 am
This looks promising and I am very curious about rigify 🙂
December 14th, 2009 at 2:57 pm
In Max we used to create a very simple mesh from the character and then create the shape keys for that mesh. Then this mesh drives the characters using shrinkwrap. So you have one mesh, and with shrinkwrap you can drive every character you want (after some correction in proportions, if needed). But I haven’t tried yet whether this will work in Blender. Sorry if not.
December 14th, 2009 at 4:08 pm
Maybe it’s me, but what I find so cool is that all of this is being done with FREE software! We are all being moved in some way by this open source experience. There’s this kind of organic movement that’s just asking the individual to dream! Dude, this rocks!
December 14th, 2009 at 6:29 pm
I was wrong: what I explained above is not shrinkwrap but mesh deform. In Max it is called shrinkwrap, but in Blender it is the Mesh Deform modifier.
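If anyone wants to try it, here is a rough, untested sketch of that setup using Blender’s current Python API. The object names (“Character”, “FaceProxy”) and the shape key name (“smile”) are invented, and the proxy cage has to be a closed mesh that encloses the geometry it deforms:

```python
import bpy

# Invented names -- the low-res proxy cage carries the shape keys.
character = bpy.data.objects["Character"]
proxy = bpy.data.objects["FaceProxy"]

# Add a Mesh Deform modifier binding the character to the proxy cage.
mod = character.modifiers.new(name="ProxyDeform", type='MESH_DEFORM')
mod.object = proxy
mod.precision = 5  # higher = more accurate (and slower) bind

# Binding runs as an operator and needs the character to be active.
bpy.context.view_layer.objects.active = character
bpy.ops.object.meshdeform_bind(modifier=mod.name)

# After binding, animating the proxy's shape keys deforms the character.
proxy.data.shape_keys.key_blocks["smile"].value = 1.0
```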
December 15th, 2009 at 12:20 am
Wow, very cool work. Even for a basic test, that looks great!
December 15th, 2009 at 10:44 pm
Hey Nathan, can you warn a brutha when you post a super long clip? Thanks. 😉
December 17th, 2009 at 6:29 pm
Hi,
Amazing work!!!
I found this paper about a model of the human iris for realistic rendering. Can you guys have a look and maybe use it for Durian?
Sample video: http://vimeo.com/3545229
http://vitorpamplona.com/deps/papers/2009_TOG_IRIS.pdf
Thanks.