GANSpace: Discovering Interpretable GAN controls

Erik Härkönen, Aaron Hertzmann, Jaakko Lehtinen, Sylvain Paris

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review

Abstract

This paper describes a simple technique to analyze Generative Adversarial Networks (GANs) and create interpretable controls for image synthesis, such as change of viewpoint, aging, lighting, and time of day. We identify important latent directions based on Principal Component Analysis (PCA) applied either in latent space or feature space. Then, we show that a large number of interpretable controls can be defined by layer-wise perturbation along the principal directions. Moreover, we show that BigGAN can be controlled with layer-wise inputs in a StyleGAN-like manner. We show results on different GANs trained on various datasets, and demonstrate good qualitative matches to edit directions found through earlier supervised approaches.
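The core idea in the abstract can be sketched in a few lines of NumPy: sample a set of latent codes, run PCA on them, and use the resulting principal directions as edit controls. This is a minimal illustration, not the authors' implementation — in GANSpace the PCA is applied to intermediate latents (e.g. StyleGAN's w = M(z)) or to feature activations, so the random data below is a stand-in for those samples, and the `edit` helper is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
latent_dim = 512   # stand-in for the GAN's latent dimensionality
n_samples = 2000

# Stand-in for sampled intermediate latents (in GANSpace these would be
# w = M(z) from StyleGAN's mapping network, or BigGAN feature activations).
W = rng.standard_normal((n_samples, latent_dim))

# PCA via SVD of the mean-centered samples; rows of Vt are the
# principal directions v_1, v_2, ... ordered by explained variance.
mean = W.mean(axis=0)
_, _, Vt = np.linalg.svd(W - mean, full_matrices=False)
components = Vt

def edit(w, k, sigma):
    """Hypothetical helper: move latent w along principal direction k
    by sigma units (layer-wise application is omitted here)."""
    return w + sigma * components[k]

w = rng.standard_normal(latent_dim)
w_edited = edit(w, k=0, sigma=3.0)
```

In the paper, such a perturbation is applied only to a chosen subset of the generator's layers, which is what makes many of the discovered controls disentangled (e.g. changing viewpoint without changing identity).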
Original language: English
Title of host publication: Thirty-fourth Conference on Neural Information Processing Systems
Number of pages: 10
Publication status: Published - 2020
MoE publication type: A4 Article in a conference publication
Event: Conference on Neural Information Processing Systems - Virtual, Vancouver, Canada
Duration: 6 Dec 2020 - 12 Dec 2020
Conference number: 34

Publication series

Name: Advances in Neural Information Processing Systems
Publisher: Morgan Kaufmann Publishers
Volume: 33
ISSN (Print): 1049-5258

Conference

Conference: Conference on Neural Information Processing Systems
Abbreviated title: NeurIPS
Country: Canada
City: Vancouver
Period: 06/12/2020 → 12/12/2020

