Abstract
Image-to-image translation is a general name for a task in which an image from one domain is converted into a corresponding image in another domain, given sufficient training data. Traditionally, different approaches have been proposed depending on whether aligned image pairs or two sets of (unaligned) examples from both domains are available for training. While paired training samples might be difficult to obtain, the unpaired setup leads to a highly under-constrained problem and inferior results. In this paper, we propose a new general-purpose image-to-image translation model that is able to utilize both paired and unpaired training data simultaneously. We compare our method with two strong baselines and obtain both qualitatively and quantitatively improved results. Our model also outperforms the baselines in the purely paired and purely unpaired settings. To our knowledge, this is the first work to consider such a hybrid setup for image-to-image translation.
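The abstract does not spell out the training objective. As a rough illustration only, the sketch below shows one way a generator loss could combine a supervised term on paired data with adversarial and cycle-consistency terms on unpaired data (in the spirit of pix2pix and CycleGAN). This is our own sketch, not the paper's formulation; all function names, loss choices, and weights are illustrative assumptions.

```python
# A minimal sketch, NOT the authors' implementation: one way a generator
# objective could combine a supervised term on paired data with adversarial
# and cycle-consistency terms on unpaired data (pix2pix- / CycleGAN-style).
# All names, loss choices, and weights below are illustrative assumptions.
import torch
import torch.nn.functional as F


def hybrid_generator_loss(G_AB, G_BA, D_A, D_B,
                          paired_a, paired_b,      # aligned pairs (or None)
                          unpaired_a, unpaired_b,  # unaligned samples
                          lambda_sup=10.0, lambda_cyc=10.0):
    loss = 0.0

    # Supervised term: with aligned pairs, penalize the translation directly.
    if paired_a is not None:
        loss = loss + lambda_sup * (
            F.l1_loss(G_AB(paired_a), paired_b) +
            F.l1_loss(G_BA(paired_b), paired_a))

    # Unsupervised terms: least-squares adversarial loss + cycle consistency.
    fake_b, fake_a = G_AB(unpaired_a), G_BA(unpaired_b)
    score_b, score_a = D_B(fake_b), D_A(fake_a)
    adv = (F.mse_loss(score_b, torch.ones_like(score_b)) +
           F.mse_loss(score_a, torch.ones_like(score_a)))
    cyc = (F.l1_loss(G_BA(fake_b), unpaired_a) +
           F.l1_loss(G_AB(fake_a), unpaired_b))

    return loss + adv + lambda_cyc * cyc


if __name__ == "__main__":
    # Toy 1x1-conv stand-ins just to exercise the function end to end.
    G_AB, G_BA = torch.nn.Conv2d(3, 3, 1), torch.nn.Conv2d(3, 3, 1)
    D_A, D_B = torch.nn.Conv2d(3, 1, 1), torch.nn.Conv2d(3, 1, 1)
    a, b = torch.randn(2, 3, 32, 32), torch.randn(2, 3, 32, 32)
    print(hybrid_generator_loss(G_AB, G_BA, D_A, D_B, a, b, a, b).item())
```

In this kind of hybrid setup, batches with ground-truth pairs contribute the supervised term while unaligned batches contribute only the adversarial and cycle terms; how the paper actually balances or formulates these terms is described in the full text, not here.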
Original language | English |
---|---|
Title of host publication | Computer Vision - ACCV 2018 - 14th Asian Conference on Computer Vision, Revised Selected Papers |
Editors | Konrad Schindler, Greg Mori, C.V. Jawahar, Hongdong Li |
Publisher | Springer |
Pages | 51-66 |
Number of pages | 16 |
ISBN (Print) | 9783030208899 |
DOIs | |
Publication status | Published - 2019 |
MoE publication type | A4 Article in a conference publication |
Event | 14th Asian Conference on Computer Vision, Perth, Australia, 2 Dec 2018 → 6 Dec 2018 |
Publication series
Name | Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) |
---|---|
Publisher | Springer Nature |
Volume | 11362 LNCS |
ISSN (Print) | 0302-9743 |
ISSN (Electronic) | 1611-3349 |
Conference
Conference | Asian Conference on Computer Vision |
---|---|
Abbreviated title | ACCV |
Country/Territory | Australia |
City | Perth |
Period | 02/12/2018 → 06/12/2018 |