Adversarial mixup resynthesizers

Christopher Beckham, Sina Honari, Vikas Verma, Alex Lamb, Farnoosh Ghadiri, R. Devon Hjelm, Christopher Pal

Research output: Contribution to conference › Paper › Scientific › peer-review

Abstract

In this paper, we explore new approaches to combining information encoded within the learned representations of autoencoders. We study models that combine the attributes of multiple inputs such that the resynthesized output is trained to fool an adversarial discriminator distinguishing real from synthesized data. Furthermore, we apply this architecture to semi-supervised learning, where we learn a mixing function whose objective is to produce interpolations of hidden states, or masked combinations of latent representations, that are consistent with a conditioned class label. We show quantitative and qualitative evidence that such a formulation is an interesting avenue of research.
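The two mixing strategies the abstract names, interpolation of hidden states and masked combinations of latent representations, can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation; the function name `mix_latents` and its parameters are assumptions, and in the paper the mixed code would be decoded and scored by the adversarial discriminator.

```python
import numpy as np

def mix_latents(h1, h2, mode="interp", rng=None):
    """Combine two latent codes h1, h2 (hypothetical helper).

    mode="interp": mixup-style convex interpolation of hidden states.
    mode="mask":   Bernoulli mask selects each latent dimension from
                   either h1 or h2 (a masked combination).
    """
    rng = np.random.default_rng() if rng is None else rng
    if mode == "interp":
        # Draw a mixing coefficient and take a convex combination.
        alpha = rng.uniform(0.0, 1.0)
        return alpha * h1 + (1.0 - alpha) * h2
    elif mode == "mask":
        # Binary mask: each coordinate comes from h1 (m=1) or h2 (m=0).
        m = rng.integers(0, 2, size=h1.shape).astype(h1.dtype)
        return m * h1 + (1 - m) * h2
    raise ValueError(f"unknown mode: {mode}")
```

In the full model, a decoder maps the mixed code back to data space and a discriminator is trained to tell these resyntheses apart from real samples, pushing the mixing function toward plausible outputs.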

Original language: English
Publication status: Published - 1 Jan 2019
MoE publication type: Not Eligible
Event: Deep Generative Models for Highly Structured Data - New Orleans, United States
Duration: 6 May 2019 – 6 May 2019

Workshop

Workshop: Deep Generative Models for Highly Structured Data
Abbreviated title: DGS@ICLR Workshop
Country/Territory: United States
City: New Orleans
Period: 06/05/2019 – 06/05/2019
Other: DGS@ICLR Workshop
