Abstract
Large-capacity deep learning models are often prone to a high generalization gap when trained with a limited amount of labeled training data. A recent class of methods addresses this problem by constructing new training samples that mix a pair (or more) of training samples. We propose PatchUp, a hidden-state block-level regularization technique for Convolutional Neural Networks (CNNs) that is applied to selected contiguous blocks of feature maps from a random pair of samples. Our approach improves the robustness of CNN models against the manifold intrusion problem that can occur in other state-of-the-art mixing approaches. Moreover, since we mix contiguous blocks of features in the hidden space, which has more dimensions than the input space, we obtain training samples that are more diverse along those dimensions. Our experiments on CIFAR10/100, SVHN, Tiny-ImageNet, and ImageNet with ResNet architectures, including PreActResNet18/34, WRN-28-10, and ResNet101/152, show that PatchUp matches or improves upon the performance of current state-of-the-art regularizers for CNNs. We also show that PatchUp generalizes better to deformed samples and is more robust against adversarial attacks.
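For intuition, the following is a minimal sketch of the block-level mixing idea in the spirit of hard PatchUp, assuming PyTorch and a mini-batch of hidden feature maps. The mask construction (Bernoulli seeds expanded into contiguous blocks by max pooling, as in DropBlock) and the label-mixing rule are simplified illustrations; `patchup_hard`, `block_size`, and `gamma` are hypothetical names, not the authors' reference implementation.

```python
import torch
import torch.nn.functional as F

def patchup_hard(h, y, block_size=7, gamma=0.25):
    """Simplified hard-PatchUp-style mixing at a hidden layer (sketch).

    h: hidden feature maps, shape (B, C, H, W)
    y: one-hot labels, shape (B, num_classes)
    Swaps contiguous spatial blocks of feature maps between each sample
    and a randomly paired partner, and mixes the labels accordingly.
    """
    B, C, H, W = h.shape
    # Bernoulli seeds mark block centers; max pooling with stride 1
    # expands each seed into a contiguous block_size x block_size block
    # (an odd block_size keeps the spatial size unchanged).
    seeds = (torch.rand(B, C, H, W, device=h.device) < gamma).float()
    mask = F.max_pool2d(seeds, kernel_size=block_size, stride=1,
                        padding=block_size // 2)
    # Pair each sample with a random partner from the same mini-batch.
    idx = torch.randperm(B, device=h.device)
    # Hard mixing: masked positions take the partner's features.
    h_mixed = (1.0 - mask) * h + mask * h[idx]
    # Interpolate targets by the fraction of features kept per sample.
    lam = 1.0 - mask.mean(dim=(1, 2, 3))          # shape (B,)
    y_mixed = lam.unsqueeze(1) * y + (1.0 - lam).unsqueeze(1) * y[idx]
    return h_mixed, y_mixed
```

In training, a step like this would typically be applied at a randomly chosen hidden block of the CNN, with the loss computed against the interpolated targets; a soft variant would interpolate the masked features between the two samples instead of swapping them outright.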
Original language | English |
---|---|
Title of host publication | AAAI-22 Technical Tracks 1 |
Publisher | AAAI Press |
Pages | 589-597 |
Number of pages | 9 |
ISBN (electronic) | 978-1-57735-876-3 |
Status | Published - 30 Jun 2022 |
OKM publication type | A4 Article in conference proceedings |
Event | AAAI Conference on Artificial Intelligence - virtual conference, Virtual, Online. Duration: 22 Feb 2022 → 1 Mar 2022. Conference number: 36. https://aaai.org/Conferences/AAAI-22/ |
Publication series
Name | Proceedings of the AAAI Conference on Artificial Intelligence |
---|---|
Volume | 36 |
Conference
Conference | AAAI Conference on Artificial Intelligence |
---|---|
Abbreviation | AAAI |
City | Virtual, Online |
Period | 22/02/2022 → 01/03/2022 |
Web address | https://aaai.org/Conferences/AAAI-22/ |