Comparison of Contextual Importance and Utility with LIME and Shapley Values

Kary Främling*, Marcus Westberg, Martin Jullum, Manik Madhikermi, Avleen Malhi

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review

Abstract

Different explainable AI (XAI) methods are based on different notions of ‘ground truth’. In order to trust explanations of AI systems, the ground truth has to provide fidelity towards the actual behaviour of the AI system. An explanation with poor fidelity towards the AI system’s actual behaviour cannot be trusted, no matter how convincing it appears to users. The Contextual Importance and Utility (CIU) method differs in several ways from currently popular outcome explanation methods such as Local Interpretable Model-agnostic Explanations (LIME) and Shapley values. Notably, CIU does not build an intermediate interpretable model as LIME does, and it makes no assumption of linearity or additivity of feature importance. CIU also introduces the notion of value utility and a definition of feature importance that differs from those of LIME and Shapley values. We argue that LIME and Shapley values actually estimate ‘influence’ (rather than ‘importance’), which combines importance and utility. The paper compares the three methods in terms of the validity of their ground-truth assumptions and their fidelity towards the underlying model through a series of benchmark tasks. The results confirm that LIME results tend to be neither coherent nor stable. CIU and Shapley values give rather similar results when explanations are limited to ‘influence’. However, by separating the ‘importance’ and ‘utility’ elements, CIU can provide more expressive and flexible explanations than LIME and Shapley values.
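To make the importance/utility separation concrete, the following is a minimal sketch of how Contextual Importance (CI) and Contextual Utility (CU) can be estimated for one feature of one instance by probing a black-box model over that feature's value range while holding the other features fixed (the "context"). The function name `ciu`, the grid-sampling strategy, and the fallback CU of 0.5 for an inert feature are illustrative assumptions, not the paper's reference implementation.

```python
import numpy as np

def ciu(model, x, feature, feature_range, out_min=0.0, out_max=1.0, n_samples=100):
    """Estimate Contextual Importance (CI) and Contextual Utility (CU)
    for one feature of instance x, assuming a scalar-output model.

    CI  = (Cmax - Cmin) / (out_max - out_min): how much the output can
          vary when only this feature changes in the current context.
    CU  = (y - Cmin) / (Cmax - Cmin): how favourable the current feature
          value is within that attainable output range.
    """
    y = model(x)
    # Probe the model over the feature's allowed range, other features fixed.
    grid = np.linspace(feature_range[0], feature_range[1], n_samples)
    ys = []
    for v in grid:
        x_mod = np.array(x, dtype=float)
        x_mod[feature] = v
        ys.append(model(x_mod))
    cmin, cmax = min(ys), max(ys)
    ci = (cmax - cmin) / (out_max - out_min)
    # If the feature cannot change the output, utility is undefined; 0.5 is
    # an arbitrary neutral choice made here for illustration.
    cu = (y - cmin) / (cmax - cmin) if cmax > cmin else 0.5
    return ci, cu
```

For a toy model that simply returns its first feature, the first feature gets CI = 1 (it spans the whole output range) while the second gets CI = 0, which illustrates why a feature's importance is assessed separately from how good its current value happens to be.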

Original language: English
Title of host publication: Explainable and Transparent AI and Multi-Agent Systems - 3rd International Workshop, EXTRAAMAS 2021, Revised Selected Papers
Editors: Davide Calvaresi, Amro Najjar, Michael Winikoff, Kary Främling
Publisher: Springer Science and Business Media Deutschland GmbH
Pages: 39-54
Number of pages: 16
ISBN (Print): 9783030820169
DOIs
Publication status: Published - 2021
MoE publication type: A4 Article in a conference publication
Event: International Workshop on Explainable, Transparent AI and Multi-Agent Systems - Virtual, Online
Duration: 3 May 2021 – 7 May 2021
Conference number: 3

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Publisher: Springer
Volume: 12688 LNAI
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Workshop

Workshop: International Workshop on Explainable, Transparent AI and Multi-Agent Systems
Abbreviated title: EXTRAAMAS
City: Virtual, Online
Period: 03/05/2021 – 07/05/2021

Keywords

  • Contextual Importance and Utility
  • Explainable AI
  • Outcome explanation
  • Post hoc explanation
