Fixing Overconfidence in Dynamic Neural Networks

Lassi Meronen*, Martin Trapp, Andrea Pilzer, Le Yang, Arno Solin

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference article in proceedings › Scientific › peer-review

5 Citations (Scopus)

Abstract

Dynamic neural networks are a recent technique that promises a remedy for the increasing size of modern deep learning models by dynamically adapting their computational cost to the difficulty of the inputs. In this way, the model can adjust to a limited computational budget. However, the poor quality of uncertainty estimates in deep learning models makes it difficult to distinguish between hard and easy samples. To address this challenge, we present a computationally efficient approach for post-hoc uncertainty quantification in dynamic neural networks. We show that adequately quantifying and accounting for both aleatoric and epistemic uncertainty through a probabilistic treatment of the last layers improves the predictive performance and aids decision-making when determining the computational budget. In the experiments, we show improvements on CIFAR100, ImageNet, and Caltech-256 in terms of accuracy, capturing uncertainty, and calibration error.
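The core idea described in the abstract is that well-calibrated uncertainty estimates let a dynamic (early-exit) network decide which inputs are easy enough to classify with a shallow exit. As a simplified sketch of that decision rule (not the authors' implementation — they use a probabilistic treatment of the last layers, while the function names and the plain predictive-entropy criterion here are illustrative assumptions):

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def predictive_entropy(p):
    """Entropy of a categorical predictive distribution (nats)."""
    return -(p * np.log(p + 1e-12)).sum(axis=-1)

def early_exit_predict(logits_per_exit, threshold):
    """Return (predicted class, exit index) for the first exit whose
    predictive entropy falls below `threshold`; if no intermediate
    exit is confident enough, fall back to the final (deepest) exit.

    `logits_per_exit` is a list of logit vectors, one per exit head;
    `threshold` is a hypothetical tuning knob, not a value from the paper.
    """
    for i, z in enumerate(logits_per_exit):
        p = softmax(z)
        if predictive_entropy(p) < threshold:
            return int(p.argmax()), i
    return int(p.argmax()), len(logits_per_exit) - 1
```

With a confident (peaked) first-exit prediction the sample exits early and saves the remaining computation; a diffuse prediction is routed to deeper exits. The paper's contribution is to make this confidence signal trustworthy by quantifying aleatoric and epistemic uncertainty post hoc, rather than relying on raw overconfident softmax outputs as above.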

Original language: English
Title of host publication: Proceedings - 2024 IEEE Winter Conference on Applications of Computer Vision, WACV 2024
Publisher: IEEE
Pages: 2668-2678
Number of pages: 11
ISBN (Electronic): 979-8-3503-1892-0
DOIs
Publication status: Published - 3 Jan 2024
MoE publication type: A4 Conference publication
Event: IEEE Winter Conference on Applications of Computer Vision - Waikoloa, United States
Duration: 4 Jan 2024 - 8 Jan 2024

Publication series

Name: IEEE Winter Conference on Applications of Computer Vision
ISSN (Electronic): 2642-9381

Conference

Conference: IEEE Winter Conference on Applications of Computer Vision
Abbreviated title: WACV
Country/Territory: United States
City: Waikoloa
Period: 04/01/2024 - 08/01/2024

Keywords

  • Algorithms
  • Machine learning architectures, formulations, and algorithms
