Are GANs overkill for NLP?

David Alvarez-Melis, Vikas Garg, Adam Tauman Kalai

Research output: Chapter in Book/Report/Conference proceeding › Conference article in proceedings › Scientific › peer-review

Abstract

This work offers a novel theoretical perspective on why, despite numerous attempts, adversarial approaches to generative modeling (e.g., GANs) have not been as successful for certain generation tasks, particularly sequential tasks such as Natural Language Generation, as they have been in others, such as Computer Vision. In particular, on sequential data such as text, maximum-likelihood approaches are used far more widely than GANs. We show that, while it may seem that maximizing likelihood is inherently different from minimizing distinguishability, this distinction is largely an artifact of the limited representational capacity of the model family, for a wide class of adversarial objectives. We give a theoretical model in which minimizing KL-divergence (i.e., maximizing likelihood) is a more efficient approach to effectively minimizing the same distinguishability criteria that adversarial models seek to optimize. Reductions show that minimizing distinguishability can be seen as simply boosting likelihood for certain families of models, including n-gram models and neural networks with a softmax output layer. To achieve a full polynomial-time reduction, a novel next-token distinguishability model is considered. Some preliminary empirical evidence is also provided to substantiate our theoretical analyses.
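The abstract's central identity, that minimizing KL-divergence to the data distribution is the same optimization problem as maximizing expected log-likelihood, can be illustrated numerically. The sketch below is not from the paper; the distributions `p`, `q1`, and `q2` are made-up toy examples, and it only demonstrates the standard decomposition KL(p‖q) = −H(p) − E_p[log q], so ranking models by KL-divergence and ranking them by likelihood always agree.

```python
import math

# Toy empirical "data" distribution p over a three-token vocabulary
# (hypothetical numbers, chosen only for illustration).
p = {"a": 0.5, "b": 0.3, "c": 0.2}

def kl(p, q):
    """KL divergence KL(p || q) in nats."""
    return sum(p[t] * math.log(p[t] / q[t]) for t in p)

def expected_log_likelihood(p, q):
    """Expected log-likelihood E_p[log q] of model q under data distribution p."""
    return sum(p[t] * math.log(q[t]) for t in p)

# Two candidate models from the same (fully expressive) family.
q1 = {"a": 0.4, "b": 0.4, "c": 0.2}
q2 = {"a": 0.5, "b": 0.25, "c": 0.25}

# KL(p||q) = -H(p) - E_p[log q]; the entropy H(p) does not depend on q,
# so the model with smaller KL is exactly the model with larger likelihood.
better_by_kl = min((q1, q2), key=lambda q: kl(p, q))
better_by_ll = max((q1, q2), key=lambda q: expected_log_likelihood(p, q))
assert better_by_kl is better_by_ll
```

The paper's contribution goes further than this textbook identity: it argues that, for limited-capacity model families, this likelihood objective already effectively minimizes the distinguishability criteria that adversarial training targets.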
Original language: English
Title of host publication: Advances in Neural Information Processing Systems 35 (NeurIPS 2022)
Editors: S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, A. Oh
Publisher: Morgan Kaufmann Publishers
Number of pages: 13
ISBN (Print): 978-1-7138-7108-8
Publication status: Published - 2022
MoE publication type: A4 Conference publication
Event: Conference on Neural Information Processing Systems - New Orleans, United States
Duration: 28 Nov 2022 – 9 Dec 2022
Conference number: 36
Internet address: https://nips.cc/

Publication series

Name: Advances in Neural Information Processing Systems
Publisher: Morgan Kaufmann Publishers
Volume: 35
ISSN (Print): 1049-5258

Conference

Conference: Conference on Neural Information Processing Systems
Abbreviated title: NeurIPS
Country/Territory: United States
City: New Orleans
Period: 28/11/2022 – 09/12/2022
Internet address: https://nips.cc/

