Training Generative Adversarial Networks with Limited Data

Tero Karras, Miika Aittala, Janne Hellsten, Samuli Laine, Jaakko Lehtinen, Timo Aila

Research output: Chapter in Book/Report/Conference proceeding › Conference article in proceedings › Scientific › peer-review

Abstract

Training generative adversarial networks (GAN) using too little data typically leads to discriminator overfitting, causing training to diverge. We propose an adaptive discriminator augmentation mechanism that significantly stabilizes training in limited data regimes. The approach does not require changes to loss functions or network architectures, and is applicable both when training from scratch and when fine-tuning an existing GAN on another dataset. We demonstrate, on several datasets, that good results are now possible using only a few thousand training images, often matching StyleGAN2 results with an order of magnitude fewer images. We expect this to open up new application domains for GANs. We also find that the widely used CIFAR-10 is, in fact, a limited data benchmark, and improve the record FID from 5.59 to 2.42.
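The adaptive mechanism can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the overfitting heuristic r = E[sign(D(x_real))] and the target value 0.6 follow the paper, while the function name, the fixed adjustment step, and the toy loop are invented here for clarity.

```python
# Minimal sketch of adaptive discriminator augmentation (ADA).
# Assumes the paper's overfitting heuristic r = E[sign(D(x_real))]
# and target 0.6; names and the fixed step size are illustrative.

def update_augment_probability(p, d_real_signs, target=0.6,
                               adjust=0.01, p_max=1.0):
    """Nudge the augmentation probability p so that the measured
    overfitting indicator r stays near the target value."""
    r = sum(d_real_signs) / len(d_real_signs)  # estimate of E[sign(D(x_real))]
    if r > target:
        # Discriminator is too confident on reals: overfitting,
        # so augment more aggressively.
        p = min(p + adjust, p_max)
    else:
        # Little sign of overfitting: back off the augmentations.
        p = max(p - adjust, 0.0)
    return p

# Toy usage: if the discriminator outputs are consistently positive
# on real images, p drifts upward over successive minibatches.
p = 0.0
for _ in range(10):
    p = update_augment_probability(p, d_real_signs=[1, 1, 1, 1])
```

In the paper the same augmentation pipeline (with probability p) is applied to every image the discriminator sees, real or generated, which is what keeps the augmentations from leaking into the generated distribution.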
Original language: English
Title of host publication: Thirty-fourth Conference on Neural Information Processing Systems
Publisher: Morgan Kaufmann Publishers
Number of pages: 11
Publication status: Published - 2020
MoE publication type: A4 Conference publication
Event: Conference on Neural Information Processing Systems - Virtual, Vancouver, Canada
Duration: 6 Dec 2020 – 12 Dec 2020
Conference number: 34

Publication series

Name: Advances in Neural Information Processing Systems
Publisher: Morgan Kaufmann Publishers
Volume: 33
ISSN (Print): 1049-5258

Conference

Conference: Conference on Neural Information Processing Systems
Abbreviated title: NeurIPS
Country/Territory: Canada
City: Vancouver
Period: 06/12/2020 – 12/12/2020
