Abstract
Large-capacity deep learning models are often prone to a high generalization gap when trained with a limited amount of labeled training data. A recent class of methods addresses this problem by constructing a new training sample from a mixture of a pair (or more) of training samples. We propose PatchUp, a hidden-state block-level regularization technique for Convolutional Neural Networks (CNNs), applied to selected contiguous blocks of feature maps from a random pair of samples. Our approach improves the robustness of CNN models against the manifold intrusion problem that may occur in other state-of-the-art mixing approaches. Moreover, since we mix contiguous blocks of features in the hidden space, which has more dimensions than the input space, we obtain training samples that are diverse along more dimensions. Our experiments on CIFAR10/100, SVHN, Tiny-ImageNet, and ImageNet using ResNet-family architectures (PreActResNet18/34, WRN-28-10, and ResNet101/152) show that PatchUp improves upon, or equals, the performance of current state-of-the-art regularizers for CNNs. We also show that PatchUp provides better generalization to deformed samples and is more robust against adversarial attacks.
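The abstract describes the method only at a high level. The sketch below illustrates the general idea of block-level mixing in the hidden space as a minimal PyTorch example under our own assumptions: the function name `hard_patchup`, the parameters `gamma` and `block_size`, the DropBlock-style mask construction, and the simple proportional target mixing are all illustrative simplifications, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def hard_patchup(h, y, gamma=0.5, block_size=7):
    """Minimal sketch of hidden-state block-level mixing (not the authors' code).

    h: hidden activations of shape (B, C, H, W) from some intermediate CNN layer
    y: one-hot (or soft) targets of shape (B, num_classes)
    gamma: probability that a feature position seeds a block
    block_size: side length of each contiguous square block
    """
    B, C, H, W = h.shape

    # Sample block centres, then expand each centre into a block_size x block_size
    # region with max pooling (a DropBlock-style way to build contiguous masks).
    centres = (torch.rand(B, C, H, W, device=h.device) < gamma).float()
    mask = F.max_pool2d(centres, kernel_size=block_size,
                        stride=1, padding=block_size // 2)
    mask = mask[:, :, :H, :W]       # crop any off-by-one from even block sizes
    mask = (mask > 0).float()       # 1 inside a selected block, 0 elsewhere

    # Pair each sample with another random sample from the same mini-batch.
    perm = torch.randperm(B, device=h.device)

    # Swap the masked contiguous blocks with those of the paired samples.
    h_mixed = (1 - mask) * h + mask * h[perm]

    # Mix targets in proportion to the fraction of features taken from the partner
    # (the paper's actual loss is more elaborate; this is a simplification).
    portion = mask.mean(dim=(1, 2, 3)).unsqueeze(1)
    y_mixed = (1 - portion) * y + portion * y[perm]
    return h_mixed, y_mixed
```

In use, such a function would be called on the activations of a randomly chosen layer during the forward pass of a mini-batch, with the mixed targets fed to the loss; the block structure is what distinguishes this from feature-level interpolation methods such as Manifold Mixup.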
Original language | English |
---|---|
Title of host publication | AAAI-22 Technical Tracks 1 |
Publisher | AAAI Press |
Pages | 589-597 |
Number of pages | 9 |
ISBN (Electronic) | 978-1-57735-876-3 |
Publication status | Published - 30 Jun 2022 |
MoE publication type | A4 Conference publication |
Event | AAAI Conference on Artificial Intelligence (virtual conference), Virtual, Online. Duration: 22 Feb 2022 → 1 Mar 2022. Conference number: 36. https://aaai.org/Conferences/AAAI-22/ |
Publication series
Name | Proceedings of the AAAI Conference on Artificial Intelligence |
---|---|
Volume | 36 |
Conference
Conference | AAAI Conference on Artificial Intelligence |
---|---|
Abbreviated title | AAAI |
City | Virtual, Online |
Period | 22/02/2022 → 01/03/2022 |
Internet address | https://aaai.org/Conferences/AAAI-22/ |