Last month I gave an animation talk at Game Connect Asia Pacific (GCAP) down in Melbourne, titled First Pass, Final Pass: making animations rock on the indie clock.
The talk was recorded and, thanks to the GCAP organisers, you can now watch it on YouTube. I was also asked if I could make available the material I used to discuss facial animation, and as the examples were from Assault Android Cactus, I figured expanding on it here would be appropriate.
In the talk I made two recommendations for indie facial animation – use as few bones/controls as possible, and weight generously so that each control affects as much of the face as possible.
To explain, here are Cactus’s facial bones:
This is effectively a 17-bone rig (17 facial bones influence the skin weighting, although sometimes it’s the node bone and sometimes it’s the parent bone) and breaks down as two bones per eyebrow, two eyelid bones per eye, one bone to control eye direction, three upper mouth bones, three lower mouth bones and a jaw bone.
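Written out, that breakdown looks roughly like this (the names are hypothetical, not the actual bone names in the Cactus rig):

```python
# Hypothetical listing of the control set described above; the names are
# illustrative, and whether a given control ends up being the node bone or
# its parent is the bookkeeping detail mentioned in the parenthetical.
FACIAL_CONTROLS = [
    "brow_inner_L", "brow_outer_L",        # two bones per eyebrow
    "brow_inner_R", "brow_outer_R",
    "eyelid_upper_L", "eyelid_lower_L",    # two eyelid bones per eye
    "eyelid_upper_R", "eyelid_lower_R",
    "eye_direction",                       # single look-at style control
    "mouth_upper_L", "mouth_upper_mid", "mouth_upper_R",
    "mouth_lower_L", "mouth_lower_mid", "mouth_lower_R",
    "jaw",
]
```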
A more full-featured facial rig would need many more controllers, and it would be possible to get away with fewer, but for me this was the minimum set of controls needed to create compelling expressions without feeling limited. Having bones at the upper and lower corners of the mouth, for instance, allows for a range of gritted-teeth expressions that would not be possible with a single bone per corner.
Each bone was placed so that its base gave a good rotational plane: dragging the node rotates the bone in a way that makes its position slide semi-convincingly across the surface of the face. Each bone was also made responsible for as much of the face as possible: moving the upper corners of the mouth also stretches the nostrils, and moving the eyebrows pulls at the skin beside the eye, coarsely approximating the way skin actually behaves. This way I get some motion without having to animate it specifically, and it generally avoids areas of the face going still.
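To make the rotational-plane idea a little more concrete, here is a minimal NumPy sketch of the geometry involved. It is an illustration only, not code from the Cactus toolchain, and the function names and smoothstep falloff are my own assumptions:

```python
import numpy as np

def aim_bone(base, tip, dragged):
    """Rotate a fixed-length bone about its base so it points at a dragged position.

    Because the bone length stays constant, the tip moves over a sphere centred
    on the base -- placing the base well is what makes the control read as
    sliding across the surface of the face.
    """
    base, tip, dragged = (np.asarray(v, dtype=float) for v in (base, tip, dragged))
    length = np.linalg.norm(tip - base)
    direction = dragged - base
    direction /= np.linalg.norm(direction)
    return base + direction * length  # new tip position, same bone length


def generous_weight(vertex, control, radius):
    """Wide, soft skin-weight falloff around a control position.

    A radius that is large relative to the feature is the 'weight generously'
    part: mouth-corner controls tug at the nostrils, eyebrow controls pull at
    the skin beside the eye, and nothing on the face sits completely still.
    """
    d = np.linalg.norm(np.asarray(vertex, dtype=float) - np.asarray(control, dtype=float))
    t = max(0.0, 1.0 - d / radius)
    return t * t * (3.0 - 2.0 * t)  # smoothstep falloff


# Dragging a mouth-corner control outward keeps it on the "face sphere":
new_tip = aim_bone(base=[0.0, 0.0, 0.0], tip=[0.0, 0.0, 2.0], dragged=[1.5, 0.2, 2.0])
print(new_tip, generous_weight([0.5, 0.1, 1.9], new_tip, radius=2.0))
```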
This setup is similar to facial rigs I’ve used in the past and was a sweet spot for me between being able to articulate the faces the way I wanted and ease of use, especially when producing many facial animations across the game, often under considerable time pressure.
For cutscenes or animations with significant head movement, I generally create a camera per speaking character, constrain it directly to the head bone, and set the near and far clip planes to exclude everything but that character’s face. This gives me simple, consistent access to the facial bones regardless of what is happening in the scene, while still letting me see expressions in context.
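The clip-plane arithmetic is simple enough to sketch. This is a hypothetical illustration in plain Python rather than 3DSMAX script, and the names and numbers are mine:

```python
from dataclasses import dataclass

@dataclass
class FaceCamera:
    """Per-character facial camera, conceptually parented to the head bone.

    Because the camera inherits the head transform, it never loses the face
    during big head moves, and the clip planes carve out a thin shell so only
    that character's face is visible in the viewport.
    """
    distance: float       # camera offset from the head pivot along its view axis
    face_radius: float    # rough radius of the head/face geometry
    margin: float = 0.05  # small padding so the face itself never clips

    @property
    def near_clip(self) -> float:
        return max(0.001, self.distance - self.face_radius - self.margin)

    @property
    def far_clip(self) -> float:
        return self.distance + self.face_radius + self.margin


cam = FaceCamera(distance=0.6, face_radius=0.15)
print(cam.near_clip, cam.far_clip)  # ~0.40 .. 0.80 -- everything outside that shell is clipped
```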
I finished the facial animation discussion by showing a short extract from an Assault Android Cactus cutscene, taken directly out of 3DSMAX with the facial controls visible. While it’s unlikely to steal any limelight from modern AAA productions, given that all the visual elements were created by a single person in a short timeframe, I feel it serves as a useful demonstration of the techniques discussed.