Training Generative Adversarial Networks with Limited Data

Tero Karras, Miika Aittala, Janne Hellsten, Samuli Laine, Jaakko Lehtinen, Timo Aila

Research output: Article in a book/conference proceedings › Conference contribution › Scientific › Peer-reviewed

Abstract

Training generative adversarial networks (GANs) using too little data typically leads to discriminator overfitting, causing training to diverge. We propose an adaptive discriminator augmentation mechanism that significantly stabilizes training in limited data regimes. The approach does not require changes to loss functions or network architectures, and is applicable both when training from scratch and when fine-tuning an existing GAN on another dataset. We demonstrate, on several datasets, that good results are now possible using only a few thousand training images, often matching StyleGAN2 results with an order of magnitude fewer images. We expect this to open up new application domains for GANs. We also find that the widely used CIFAR-10 is, in fact, a limited data benchmark, and improve the record FID from 5.59 to 2.42.
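The adaptive mechanism summarized in the abstract can be sketched as a simple feedback loop: estimate how much the discriminator is overfitting, then nudge the augmentation probability p up or down accordingly. The overfitting heuristic r_t = E[sign(D(x_real))] and the target value 0.6 follow the paper; the function name, step size, and plain-Python style below are illustrative assumptions, not the authors' implementation.

```python
def sign(x):
    # Sign function returning -1, 0, or +1.
    return (x > 0) - (x < 0)

def update_p(p, real_logits, target=0.6, step=0.01):
    """One step of the adaptive augmentation control loop (illustrative sketch).

    p           -- current probability of applying augmentations
    real_logits -- discriminator outputs D(x) on a batch of real images
    target      -- desired value of the overfitting heuristic (0.6 in the paper)
    step        -- fixed adjustment size per update (an assumption here)

    The heuristic r_t = mean(sign(D(real))) drifts toward 1 as the
    discriminator memorizes the training set, so p is raised when r_t
    exceeds the target and lowered otherwise, clamped to [0, 1].
    """
    r_t = sum(sign(v) for v in real_logits) / len(real_logits)
    if r_t > target:
        p += step   # discriminator too confident on reals -> augment more
    else:
        p -= step   # below target -> ease off the augmentations
    return min(max(p, 0.0), 1.0)
```

In the paper this update runs every few minibatches during training, so p automatically rises on small datasets (where overfitting is severe) and stays near zero when data is plentiful.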
Original language: English
Title of host publication: Thirty-fourth Conference on Neural Information Processing Systems
Number of pages: 11
Publication status: Published - 2020
MoEC publication type: A4 Article in conference proceedings
Event: IEEE Conference on Neural Information Processing Systems - Virtual, Vancouver, Canada
Duration: 6 December 2020 - 12 December 2020
Conference number: 34

Publication series

Name: Advances in neural information processing systems
Publisher: Morgan Kaufmann Publishers
Volume: 33
ISSN (Print): 1049-5258

Conference

Conference: IEEE Conference on Neural Information Processing Systems
Abbreviated title: NeurIPS
Country: Canada
City: Vancouver
Period: 06/12/2020 - 12/12/2020

