Abstract
A salient approach to interpretable machine learning is to restrict modeling to simple models. In the Bayesian framework, this can be pursued by restricting the model structure and prior to favor interpretable models. Fundamentally, however, interpretability is about users’ preferences, not the data generation mechanism; it is more natural to formulate interpretability as a utility function. In this work, we propose an interpretability utility, which explicates the tradeoff between explanation fidelity and interpretability in the Bayesian framework. The method consists of two steps. First, a reference model, possibly a black-box Bayesian predictive model which does not compromise accuracy, is fitted to the training data. Second, a proxy model from an interpretable model family that best mimics the predictive behaviour of the reference model is found by optimizing the interpretability utility function. The approach is model agnostic—neither the interpretable model nor the reference model is restricted to a certain class of models—and the optimization problem can be solved using standard tools. Through experiments on real-world data sets, using decision trees as interpretable models and Bayesian additive regression models as reference models, we show that for the same level of interpretability, our approach generates more accurate models than the alternative of restricting the prior. We also propose a systematic way to measure the stability of interpretable models constructed by different interpretability approaches and show that our proposed approach generates more stable models.
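The two-step scheme in the abstract can be sketched in a minimal, hypothetical form: step 1 (the fitted reference model) is represented here by a stand-in predictive function, and step 2 selects, from a small family of candidate interpretable proxies, the one maximizing a utility of the form fidelity minus a penalty on model complexity. All names (`reference_predict`, the candidates, `lam`) are illustrative, not the paper's actual implementation.

```python
# Hypothetical sketch of the abstract's two-step approach.
# Step 1 is assumed done: `reference_predict` stands in for the predictive
# mean of a fitted black-box Bayesian reference model.
def reference_predict(x):
    return 2.0 * x + 1.0

# Candidate interpretable proxies: (name, predictor, complexity score).
# A depth-1 decision stump versus a richer proxy that mimics the
# reference exactly; complexities are illustrative.
candidates = [
    ("stump",  lambda x: 3.0 if x > 0 else -1.0, 1),
    ("linear", lambda x: 2.0 * x + 1.0,          2),
]

def utility(proxy, complexity, xs, lam=0.1):
    # Fidelity: negative mean squared deviation from the reference
    # model's predictions; lam trades fidelity against complexity.
    fidelity = -sum((proxy(x) - reference_predict(x)) ** 2 for x in xs) / len(xs)
    return fidelity - lam * complexity

# Step 2: pick the proxy with the highest interpretability utility.
xs = [-1.0, -0.5, 0.0, 0.5, 1.0]
best = max(candidates, key=lambda c: utility(c[1], c[2], xs))
print(best[0])
```

In this toy setting the richer proxy wins because its perfect fidelity outweighs its higher complexity penalty; with a larger `lam`, the stump would be preferred instead, which is exactly the fidelity–interpretability tradeoff the utility explicates.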
Original language  English

Pages  1855–1876
Number of pages  22
Journal  Machine Learning
Volume  109
Issue  9–10
DOI / permanent links
Status  Published  1 September 2020
Ministry of Education publication type  A1 Refereed journal article