Exploring Contextual Importance and Utility in Explaining Affect Detection

Nazanin Fouladgar*, Marjan Alirezaie, Kary Främling

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-reviewed

3 Citations (Scopus)


With the ubiquitous use of machine learning models and their inherent black-box nature, explaining the decisions made by these models has become crucial. Although outcome explanation has recently been adopted as a solution to the transparency issue in many areas, affective computing is one of the domains with the least dedicated effort on the practice of explainable AI, particularly across different machine learning models. The aim of this work is to evaluate the outcome explanations of two black-box models, namely a neural network (NN) and linear discriminant analysis (LDA), in understanding individuals' affective states measured by wearable sensors. Emphasizing context-aware decision explanations of these models, the two concepts of Contextual Importance (CI) and Contextual Utility (CU) are employed as a model-agnostic outcome explanation approach. We conduct our experiments on two multimodal affect computing datasets, namely WESAD and MAHNOB-HCI. The results of applying a neural-based model on the first dataset reveal that the electrodermal activity, respiration, and accelerometer sensors contribute significantly to the detection of the “meditation” state for a particular participant. However, the respiration sensor does not intervene in the LDA decision for the same state. For the second dataset and the neural network model, the importance and utility of the electrocardiogram and respiration sensors are shown to be the dominant features in the detection of an individual's “surprised” state, while the LDA model does not rely on the respiration sensor to detect this mental state.
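The abstract's Contextual Importance and Contextual Utility can be illustrated with a minimal sketch. Assuming the standard CI/CU formulation (CI: the range of the model output obtainable by varying one feature within its value range, relative to the global output range; CU: where the current output sits within that contextual range), and a hypothetical `model` callable returning a class probability, a sampling-based estimate might look like:

```python
import numpy as np

def ci_cu(model, x, feature, feature_range, n_samples=100,
          out_min=0.0, out_max=1.0):
    """Estimate Contextual Importance (CI) and Contextual Utility (CU)
    of one feature for the context x (all names here are illustrative).

    model: callable mapping a 1-D feature vector to a scalar output,
           e.g. the probability of the affective state being explained.
    """
    # Vary only the chosen feature over its range, holding the rest of x fixed.
    samples = np.tile(np.asarray(x, dtype=float), (n_samples, 1))
    samples[:, feature] = np.linspace(feature_range[0], feature_range[1], n_samples)
    outputs = np.array([model(s) for s in samples])

    cmin, cmax = outputs.min(), outputs.max()
    # CI: how much of the global output range this feature can span in context.
    ci = (cmax - cmin) / (out_max - out_min)
    # CU: how favorable the current feature value is within that contextual range.
    cu = (model(np.asarray(x, dtype=float)) - cmin) / max(cmax - cmin, 1e-12)
    return ci, cu
```

A feature with CI near 0 barely affects the output in this context (like the respiration sensor in the LDA decisions described above), while a high CI with high CU indicates a feature whose current value strongly supports the detected state.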

Original language: English
Title of host publication: AIxIA 2020 – Advances in Artificial Intelligence - XIXth International Conference of the Italian Association for Artificial Intelligence, Revised Selected Papers
Editors: Matteo Baldoni, Stefania Bandini
Number of pages: 16
ISBN (Print): 9783030770907
Publication status: Published - 2021
MoE publication type: A4 Conference publication
Event: International Conference of the Italian Association for Artificial Intelligence - Virtual, Online, Italy
Duration: 24 Nov 2020 – 27 Nov 2020
Conference number: 19

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 12414 LNAI
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349


Conference: International Conference of the Italian Association for Artificial Intelligence
Abbreviated title: AIxIA
City: Virtual, Online


  • Affect detection
  • Black-box decision
  • Contextual importance and utility
  • Explainable AI


