Abstract
Modern data-driven image generation models often surpass traditional graphics techniques in quality. However, while traditional modeling and animation tools allow precise control over the image generation process in terms of interpretable quantities - e.g., shapes and reflectances - endowing learned models with such controls is generally difficult. In the context of human faces, we seek a data-driven generator architecture that simultaneously retains the photorealistic quality of modern generative adversarial networks (GANs) and allows explicit, disentangled control over head shape, expression, identity, background, and illumination. While our high-level goal is shared by a large body of previous work, we approach the problem with a different philosophy: we treat it as an unconditional synthesis task and engineer interpretable inductive biases into the model that make it easy for the desired behavior to emerge. Concretely, our generator is a combination of learned neural networks and fixed-function blocks, such as a 3D morphable head model and a texture-mapping rasterizer, and we leave it up to the training process to figure out how they should be used together. This greatly simplifies the training problem by removing the need for labeled training data: we learn the distributions of the independent variables that drive the model instead of requiring that their values be known for each training image. Furthermore, no contrastive or imitation learning is needed for correct behavior. We show that our design successfully encourages the generative model to use its internal, interpretable representations in a semantically meaningful manner. This allows sampling different aspects of the image independently, as well as precise control of the results by manipulating the internal state of the interpretable blocks within the generator, enabling, for instance, facial animation with traditional animation tools.
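As a rough illustration of the hybrid design the abstract describes, the sketch below combines learned networks that map random noise to interpretable coefficients with a fixed-function block (here, a linear 3D morphable model) that has no trainable parameters of its own. All class names, dimensions, and the random placeholder bases are illustrative assumptions, not the authors' implementation; the paper's actual generator additionally includes a texture-mapping rasterizer and further neural stages that are omitted here.

```python
# Hypothetical sketch of a hybrid generator: learned samplers feed a
# fixed-function morphable model. Names and shapes are illustrative only.
import torch
import torch.nn as nn


class LatentSampler(nn.Module):
    """Learned mapping from Gaussian noise to one interpretable variable
    (e.g., shape, expression, or illumination coefficients)."""

    def __init__(self, z_dim: int, out_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim, 128), nn.ReLU(),
            nn.Linear(128, out_dim),
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z)


def morphable_head(shape_coeffs, expr_coeffs, mean, shape_basis, expr_basis):
    """Fixed-function 3D morphable model: vertices are a linear combination
    of fixed bases; there is nothing to learn in this block."""
    return mean + shape_coeffs @ shape_basis + expr_coeffs @ expr_basis


class HybridGenerator(nn.Module):
    def __init__(self, z_dim=64, n_shape=80, n_expr=64, n_verts=5000):
        super().__init__()
        # Learned blocks: one sampler per independent variable, so each
        # aspect can be sampled (or held fixed) independently.
        self.shape_sampler = LatentSampler(z_dim, n_shape)
        self.expr_sampler = LatentSampler(z_dim, n_expr)
        # Fixed-function block data (a real 3DMM would supply these bases;
        # random placeholders keep the sketch self-contained).
        self.register_buffer("mean", torch.randn(n_verts * 3))
        self.register_buffer("shape_basis", torch.randn(n_shape, n_verts * 3))
        self.register_buffer("expr_basis", torch.randn(n_expr, n_verts * 3))

    def forward(self, z_shape, z_expr):
        shape = self.shape_sampler(z_shape)
        expr = self.expr_sampler(z_expr)
        verts = morphable_head(shape, expr, self.mean,
                               self.shape_basis, self.expr_basis)
        # A full model would rasterize `verts` with a texture and decode to
        # pixels; we return the geometry to keep the example short.
        return verts.view(z_shape.shape[0], -1, 3)


gen = HybridGenerator()
geometry = gen(torch.randn(2, 64), torch.randn(2, 64))
print(geometry.shape)  # torch.Size([2, 5000, 3])
```

Because the fixed-function block is differentiable, adversarial training can push gradients through it into the learned samplers, which is what encourages the coefficients to take on their intended, interpretable roles.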
| Original language | English |
| --- | --- |
| Title of host publication | Proceedings - SIGGRAPH 2023 Conference Papers |
| Editors | Stephen N. Spencer |
| Publisher | ACM |
| Pages | 1-10 |
| Number of pages | 10 |
| ISBN (Electronic) | 979-8-4007-0159-7 |
| DOIs | |
| Publication status | Published - 23 Jul 2023 |
| MoE publication type | A4 Conference publication |
| Event | ACM International Conference and Exhibition on Computer Graphics and Interactive Techniques - Los Angeles, United States. Duration: 6 Aug 2023 → 10 Aug 2023 |
Conference

| Conference | ACM International Conference and Exhibition on Computer Graphics and Interactive Techniques |
| --- | --- |
| Abbreviated title | ACM SIGGRAPH |
| Country/Territory | United States |
| City | Los Angeles |
| Period | 06/08/2023 → 10/08/2023 |
Keywords
- differentiable rendering
- face modeling
- generative adversarial networks
Projects
- PIPE: Learning PixelPerfect 3D Vision and Generative Modeling
Lehtinen, J. (Principal investigator), Melekhov, I. (Project Member), Härkönen, E. (Project Member), Kemppinen, P. (Project Member), Timonen, H. (Project Member), Kozlukov, S. (Project Member) & Kynkäänniemi, T. (Project Member)
01/05/2020 → 31/08/2025
Project: EU: ERC grants