Uncertainty-aware Sensitivity Analysis Using Rényi Divergences

Topi Paananen, Michael Andersen, Aki Vehtari

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review


Abstract

For nonlinear supervised learning models, assessing the importance of predictor variables or their interactions is not straightforward, because importance can vary in the domain of the variables. Importance can be assessed locally with sensitivity analysis using general methods that rely on the model's predictions or their derivatives. In this work, we extend derivative-based sensitivity analysis to a Bayesian setting by differentiating the Rényi divergence of a model's predictive distribution. By utilising the predictive distribution instead of a point prediction, the model uncertainty is taken into account in a principled way. Our empirical results on simulated and real data sets demonstrate accurate and reliable identification of important variables and interaction effects compared to alternative methods.
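To illustrate the idea described in the abstract, the sketch below computes a local sensitivity by perturbing one input coordinate and measuring how fast the Rényi divergence between the original and perturbed Gaussian predictive distributions grows. The closed-form Gaussian Rényi divergence is a standard result; the toy predictive model, the function names, and the specific sqrt(D)/h scaling are illustrative assumptions, not the authors' exact estimator.

```python
import math


def renyi_gauss(mu1, s1, mu2, s2, alpha):
    """Closed-form Renyi divergence D_alpha(N(mu1, s1^2) || N(mu2, s2^2)).

    Valid for alpha != 1 whenever s_alpha^2 = alpha*s2^2 + (1-alpha)*s1^2 > 0;
    the limit alpha -> 1 recovers the Kullback-Leibler divergence.
    """
    sa2 = alpha * s2 ** 2 + (1.0 - alpha) * s1 ** 2
    quad = alpha * (mu1 - mu2) ** 2 / (2.0 * sa2)
    log_term = math.log(sa2 / (s1 ** (2.0 * (1.0 - alpha)) * s2 ** (2.0 * alpha)))
    return quad - log_term / (2.0 * (alpha - 1.0))


def predictive(x):
    """Hypothetical toy predictive distribution for a 2-d input:
    the mean depends on x[0] only, and the predictive sd varies with x[0]."""
    mu = math.sin(x[0])
    sd = 0.3 + 0.05 * abs(x[0])
    return mu, sd


def sensitivity(x, j, alpha=0.5, h=1e-4):
    """Finite-difference local sensitivity of input coordinate j.

    Near h = 0 the divergence between the predictions at x and at the
    perturbed point grows quadratically in h, so sqrt(D)/h behaves like a
    derivative-scale importance measure that accounts for predictive
    uncertainty (an illustrative choice, not the paper's exact formula).
    """
    x_pert = list(x)
    x_pert[j] += h
    mu1, s1 = predictive(x)
    mu2, s2 = predictive(x_pert)
    return math.sqrt(max(renyi_gauss(mu1, s1, mu2, s2, alpha), 0.0)) / h
```

On this toy model, the sensitivity of the relevant coordinate `x[0]` is large wherever the predictive mean changes quickly, while the irrelevant coordinate `x[1]` gets a sensitivity of essentially zero, matching the variable-identification behaviour the abstract describes.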

Original language: English
Title of host publication: Proceedings of the Thirty-Seventh Conference on Uncertainty in Artificial Intelligence
Pages: 1185-1194
Publication status: Published - 12 Dec 2021
MoE publication type: A4 Article in a conference publication
Event: Conference on Uncertainty in Artificial Intelligence - Virtual, Online
Duration: 27 Jul 2021 – 29 Jul 2021
https://auai.org/uai2021/

Publication series

Name: Proceedings of Machine Learning Research
Publisher: PMLR
Volume: 161
ISSN (Electronic): 2640-3498

Conference

Conference: Conference on Uncertainty in Artificial Intelligence
Abbreviated title: UAI
City: Virtual, Online
Period: 27/07/2021 – 29/07/2021
Internet address: https://auai.org/uai2021/

