Amortized Bayesian Model Comparison With Evidential Deep Learning

Stefan T. Radev, Marco D'Alessandro, Ulf K. Mertens, Andreas Voss, Ullrich Köthe, Paul-Christian Bürkner

Research output: Contribution to journal › Article › Scientific › peer-review

11 Citations (Scopus)

Abstract

Comparing competing mathematical models of complex processes is a shared goal among many branches of science. The Bayesian probabilistic framework offers a principled way to perform model comparison and extract useful metrics for guiding decisions. However, many interesting models are intractable with standard Bayesian methods, as they lack a closed-form likelihood function or the likelihood is computationally too expensive to evaluate. In this work, we propose a novel method for performing Bayesian model comparison using specialized deep learning architectures. Our method is purely simulation-based and circumvents the step of explicitly fitting all alternative models under consideration to each observed dataset. Moreover, it requires no hand-crafted summary statistics of the data and is designed to amortize the cost of simulation over multiple models, datasets, and dataset sizes. This makes the method especially effective in scenarios where model fit needs to be assessed for a large number of datasets, so that case-based inference is practically infeasible. Finally, we propose a novel way to measure epistemic uncertainty in model comparison problems. We demonstrate the utility of our method on toy examples and simulated data from nontrivial models from cognitive science and single-cell neuroscience. We show that our method achieves excellent results in terms of accuracy, calibration, and efficiency across the examples considered in this work. We argue that our framework can enhance and enrich model-based analysis and inference in many fields dealing with computational models of natural processes. We further argue that the proposed measure of epistemic uncertainty provides a unique proxy to quantify absolute evidence even in a framework which assumes that the true data-generating model is within a finite set of candidate models.
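The abstract describes the core recipe: simulate datasets from each candidate model, train a permutation-invariant neural network to map a dataset directly to a Dirichlet distribution over model probabilities, and then reuse that trained network for any number of observed datasets without refitting the candidate models. The sketch below illustrates this idea in PyTorch with two toy candidate models; the simulators, network sizes, training loss, and uncertainty proxy are illustrative assumptions, not the authors' exact architecture or objective.

```python
# Minimal sketch of amortized, simulation-based model comparison with a
# Dirichlet ("evidential") output. All choices below (toy simulators, network
# sizes, loss, uncertainty proxy) are illustrative assumptions.

import torch
import torch.nn as nn

K = 2          # number of candidate models
N_OBS = 50     # observations per simulated dataset
BATCH = 64

def simulate_batch(batch_size=BATCH, n_obs=N_OBS):
    """Draw a model index per dataset, then simulate data from that model.
    Toy candidates: M0 = standard normal, M1 = Student-t with 3 degrees of freedom."""
    m = torch.randint(0, K, (batch_size,))
    x = torch.empty(batch_size, n_obs, 1)
    for i, mi in enumerate(m):
        if mi == 0:
            x[i] = torch.randn(n_obs, 1)
        else:
            x[i] = torch.distributions.StudentT(df=3.0).sample((n_obs, 1))
    return x, m

class EvidentialClassifier(nn.Module):
    """Permutation-invariant network mapping a raw dataset (no hand-crafted
    summary statistics) to Dirichlet concentration parameters over models."""
    def __init__(self, hidden=64):
        super().__init__()
        self.embed = nn.Sequential(nn.Linear(1, hidden), nn.ReLU(),
                                   nn.Linear(hidden, hidden), nn.ReLU())
        self.head = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                  nn.Linear(hidden, K))

    def forward(self, x):
        h = self.embed(x).mean(dim=1)                        # pool over observations
        alpha = nn.functional.softplus(self.head(h)) + 1.0   # concentrations > 1
        return alpha

net = EvidentialClassifier()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    x, m = simulate_batch()
    alpha = net(x)
    probs = alpha / alpha.sum(dim=1, keepdim=True)   # expected model probabilities
    loss = nn.functional.nll_loss(torch.log(probs), m)
    opt.zero_grad(); loss.backward(); opt.step()

# Amortized inference on a new dataset: a single forward pass, no model refitting.
with torch.no_grad():
    x_new = torch.randn(1, N_OBS, 1)
    alpha = net(x_new)
    post_probs = alpha / alpha.sum(dim=1, keepdim=True)
    epistemic = K / alpha.sum(dim=1)   # common Dirichlet-based uncertainty proxy
print(post_probs, epistemic)
```

Because the encoder pools over observations before classifying, the same trained network can in principle handle datasets of different sizes, which is the sense in which the cost of simulation is amortized over datasets and dataset sizes. The quantity K / sum(alpha) shrinks as the network accumulates evidence and serves here as a rough stand-in for the kind of epistemic-uncertainty measure the abstract discusses.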

Original language: English
Pages (from-to): 4903-4917
Journal: IEEE Transactions on Neural Networks and Learning Systems
Volume: 34
Issue number: 8
Early online date: 11 Nov 2021
DOIs
Publication status: Published - Aug 2023
MoE publication type: A1 Journal article-refereed

Keywords

  • Bayes methods
  • Bayesian inference
  • computational and artificial intelligence
  • Computational modeling
  • Data models
  • machine learning
  • Mathematical models
  • neural networks
  • Numerical models
  • Predictive models
  • statistical learning
  • Uncertainty
