SketchTransfer: A New Dataset for Exploring Detail-Invariance and the Abstractions Learned by Deep Networks

Alex Lamb, Sherzil Ozair, Vikas Verma, David Ha

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-reviewed

Abstract

Deep networks have achieved excellent results in perceptual tasks, yet their ability to generalize to variations not seen during training has come under increasing scrutiny. In this work we focus on their ability to be invariant to the presence or absence of details. For example, humans are able to watch cartoons, which are missing many visual details, without being explicitly trained to do so. As another example, 3D rendering software is a relatively recent development, yet people are able to understand such rendered scenes even though they are missing details (consider a film like Toy Story). The failure of machine learning algorithms to do this indicates a significant gap in generalization between human abilities and the abilities of deep networks. We propose a dataset and a concrete task, SketchTransfer, to make it easier to study the detail-invariance problem, and we show that state-of-the-art domain transfer algorithms still struggle with this task. The state-of-the-art technique which achieves over 95% accuracy on MNIST → SVHN transfer achieves only 59% accuracy on SketchTransfer, which is much better than random (11% accuracy) but falls well short of the 87% accuracy of a classifier trained directly on labeled sketches. This indicates that the task is approachable with today's best methods but has substantial room for improvement.
Original language: English
Title of host publication: 2020 Winter Conference on Applications of Computer Vision (WACV ’20)
Publication status: Accepted/In press - 2020
MoE publication type: A4 Article in a conference publication


Cite this

Lamb, A., Ozair, S., Verma, V., & Ha, D. (Accepted/In press). SketchTransfer: A New Dataset for Exploring Detail-Invariance and the Abstractions Learned by Deep Networks. In 2020 Winter Conference on Applications of Computer Vision (WACV ’20).