
Abstract

Modern data-driven image generation models often surpass traditional graphics techniques in quality. However, while traditional modeling and animation tools allow precise control over the image generation process in terms of interpretable quantities, e.g., shapes and reflectances, endowing learned models with such controls is generally difficult. In the context of human faces, we seek a data-driven generator architecture that simultaneously retains the photorealistic quality of modern generative adversarial networks (GANs) and allows explicit, disentangled control over head shape, expression, identity, background, and illumination. While our high-level goal is shared by a large body of previous work, we approach the problem with a different philosophy: we treat it as an unconditional synthesis task and engineer interpretable inductive biases into the model that make it easy for the desired behavior to emerge. Concretely, our generator combines learned neural networks with fixed-function blocks, such as a 3D morphable head model and a texture-mapping rasterizer, and we leave it to the training process to figure out how they should be used together. This greatly simplifies the training problem by removing the need for labeled training data; we learn the distributions of the independent variables that drive the model instead of requiring that their values be known for each training image. Furthermore, we need no contrastive or imitation learning for correct behavior. We show that our design successfully encourages the generative model to use its internal, interpretable representations in a semantically meaningful manner. This allows different aspects of the image to be sampled independently, and the results to be controlled precisely by manipulating the internal state of the interpretable blocks within the generator. This enables, for instance, facial animation using traditional animation tools.
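
To make the hybrid design concrete, the following minimal PyTorch sketch shows the structural idea: independent latent codes drive learned networks whose outputs are consumed by fixed-function blocks. Everything here is an assumption for illustration; the class name, latent dimensions, and network shapes are hypothetical, and the toy mean-shape-plus-offsets block and pass-through "renderer" merely stand in for the paper's full 3D morphable head model and differentiable texture-mapping rasterizer.

    import torch
    import torch.nn as nn

    class HybridFaceGenerator(nn.Module):
        # Hypothetical sketch, not the paper's implementation. Learned
        # networks emit interpretable quantities (vertex offsets, a texture
        # map); fixed-function blocks then consume them.
        def __init__(self, z_dim=64, n_verts=512, tex_res=32):
            super().__init__()
            # Learned blocks: one independent latent per controllable factor.
            self.shape_net = nn.Sequential(
                nn.Linear(z_dim, 256), nn.ReLU(),
                nn.Linear(256, n_verts * 3))            # per-vertex offsets
            self.texture_net = nn.Sequential(
                nn.Linear(z_dim, 256), nn.ReLU(),
                nn.Linear(256, 3 * tex_res * tex_res))  # RGB texture map
            # Fixed-function state: a template head shape (zeros here; a real
            # system would load morphable-model basis data).
            self.register_buffer("mean_shape", torch.zeros(n_verts, 3))
            self.n_verts, self.tex_res = n_verts, tex_res

        def forward(self, z_shape, z_texture):
            # Fixed-function block 1: morphable-model-style deformation,
            # mean shape plus learned offsets.
            verts = self.mean_shape + self.shape_net(z_shape).view(
                -1, self.n_verts, 3)
            texture = self.texture_net(z_texture).view(
                -1, 3, self.tex_res, self.tex_res)
            # Fixed-function block 2 would be a differentiable texture-mapping
            # rasterizer projecting `verts` and sampling `texture`; we return
            # the texture as a stand-in image so the sketch runs end to end.
            image = texture
            return image, verts

    # Sampling aspects independently: resample only the shape code while
    # keeping the texture code fixed.
    g = HybridFaceGenerator()
    z_shape, z_texture = torch.randn(1, 64), torch.randn(1, 64)
    img_a, _ = g(z_shape, z_texture)
    img_b, _ = g(torch.randn(1, 64), z_texture)  # new head shape, same texture

Because the factor-specific latents enter the generator only through these interpretable intermediate quantities, editing the internal state (e.g., the vertex positions) directly corresponds to the explicit controls the abstract describes.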

Original language: English
Title of host publication: Proceedings - SIGGRAPH 2023 Conference Papers
Editors: Stephen N. Spencer
Publisher: ACM
Pages: 1-10
Number of pages: 10
ISBN (Electronic): 979-8-4007-0159-7
DOIs
Publication status: Published - 23 Jul 2023
MoE publication type: A4 Conference publication
Event: ACM International Conference and Exhibition on Computer Graphics Interactive Techniques - Los Angeles, United States
Duration: 6 Aug 2023 – 10 Aug 2023

Conference

Conference: ACM International Conference and Exhibition on Computer Graphics Interactive Techniques
Abbreviated title: ACM SIGGRAPH
Country/Territory: United States
City: Los Angeles
Period: 06/08/2023 – 10/08/2023

Keywords

  • differentiable rendering
  • face modeling
  • generative adversarial networks
