Audio-Driven Facial Animation by Joint End-to-End Learning of Pose and Emotion

Research output: Contribution to journal › Article › Scientific › peer-review

Research units

  • Nvidia
  • Remedy Entertainment

Abstract

We present a machine learning technique for driving 3D facial animation by audio input in real time and with low latency. Our deep neural network learns a mapping from input waveforms to the 3D vertex coordinates of a face model, and simultaneously discovers a compact, latent code that disambiguates the variations in facial expression that cannot be explained by the audio alone. During inference, the latent code can be used as an intuitive control for the emotional state of the face puppet.
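
To make the mapping concrete, below is a minimal sketch in PyTorch of a network with this overall shape: a short window of audio features plus a small emotion code goes in, per-vertex 3D positions come out. The layer sizes, the 1D convolutions over time, and all names are illustrative assumptions, not the authors' exact architecture.

```python
# Hedged sketch: audio window + emotion code -> 3D vertex positions.
# All dimensions and layer choices are assumptions for illustration.
import torch
import torch.nn as nn

class AudioToFace(nn.Module):
    def __init__(self, n_audio_features=32, emotion_dim=16, n_vertices=5000):
        super().__init__()
        self.n_vertices = n_vertices
        # 1D convolutions over the time axis of the audio window.
        self.audio_net = nn.Sequential(
            nn.Conv1d(n_audio_features, 72, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv1d(72, 108, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # collapse time into one feature vector
        )
        # Dense layers map audio features + emotion code to vertex positions.
        self.output_net = nn.Sequential(
            nn.Linear(108 + emotion_dim, 256),
            nn.ReLU(),
            nn.Linear(256, n_vertices * 3),
        )

    def forward(self, audio_window, emotion_code):
        # audio_window: (batch, n_audio_features, n_frames)
        # emotion_code: (batch, emotion_dim); learned during training,
        # set by the user at inference time to control emotional state.
        x = self.audio_net(audio_window).squeeze(-1)
        x = torch.cat([x, emotion_code], dim=1)
        return self.output_net(x).view(-1, self.n_vertices, 3)
```

Note that the emotion code enters late, after the audio has been summarized into a feature vector; this is one natural way to let a compact vector steer expression without overriding the audio-driven articulation.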

We train our network with 3–5 minutes of high-quality animation data obtained using traditional, vision-based performance capture methods. Even though our primary goal is to model the speaking style of a single actor, our model yields reasonable results even when driven with audio from other speakers with different gender, accent, or language, as we demonstrate with a user study. The results are applicable to in-game dialogue, low-cost localization, virtual reality avatars, and telepresence.
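
The joint end-to-end learning of pose and emotion in the title can be sketched as follows: each captured training frame gets its own free emotion code, optimized together with the network weights, so the codes absorb whatever variation the audio alone cannot explain. The embedding table, plain MSE vertex loss, and optimizer settings below are assumptions for illustration; AudioToFace refers to the sketch above.

```python
# Hedged sketch of joint training: per-frame emotion codes are free
# parameters (an nn.Embedding over training frames) optimized together
# with the network. Loss and hyperparameters are illustrative.
import torch
import torch.nn as nn

model = AudioToFace()                              # sketch above
n_train_frames = 9000                              # e.g. ~5 min at 30 fps (assumed)
emotion_codes = nn.Embedding(n_train_frames, 16)   # one learnable code per frame

opt = torch.optim.Adam(
    list(model.parameters()) + list(emotion_codes.parameters()), lr=1e-4)

def train_step(frame_ids, audio_windows, target_vertices):
    # frame_ids: (batch,) indices into the captured training sequence;
    # each frame's emotion code is looked up and trained jointly.
    pred = model(audio_windows, emotion_codes(frame_ids))
    loss = ((pred - target_vertices) ** 2).mean()  # plain MSE on vertices
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

At inference the per-frame codes are discarded; the "intuitive control" described in the abstract would then correspond to feeding a fixed, user-chosen code (or a blend of learned codes) alongside new audio.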

Details

Original language: English
Article number: 94
Pages (from-to): 1-12
Journal: ACM Transactions on Graphics
Volume: 36
Issue number: 4
Publication status: Published - Jul 2017
MoE publication type: A1 Journal article-refereed

ID: 16800028