Context-specific sampling method for contextual explanations

Research output: Contribution to conference › Paper › Scientific › peer-review


Explaining the results of machine learning models is an active research topic in the Artificial Intelligence (AI) domain, with the objective of providing mechanisms to understand and interpret the results of an underlying black-box model in a human-understandable form. With this objective, several eXplainable Artificial Intelligence (XAI) methods have been designed and developed based on varied fundamental principles. Some methods, such as Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP), are based on a surrogate model, while others, such as Contextual Importance and Utility (CIU), do not create or rely on a surrogate model to generate their explanations. Despite the difference in underlying principles, these methods use sampling techniques such as uniform sampling and weighted sampling to generate explanations. CIU, which emphasizes context-aware decision explanations, employs a uniform sampling method to generate representative samples. In this research, we target uniform sampling methods, whose samples are not guaranteed to be representative in the presence of strong non-linearities or exceptional input feature value combinations. The objective of this research is to develop a sampling method that addresses these concerns. To this end, a new adaptive weighted sampling method is proposed. To verify its efficacy in generating explanations, the proposed method has been integrated with CIU and tested by deploying a special test case.
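The abstract does not spell out the algorithm, but the general idea of adaptive weighted sampling can be sketched as a two-phase scheme: draw uniform pilot samples first, then concentrate extra samples in regions where the model output changes most (i.e., where a strong non-linearity makes uniform samples unrepresentative). The function names, the two-phase split, and the toy model below are illustrative assumptions, not the paper's method:

```python
import random

def toy_model(x):
    # Hypothetical black-box with a strong non-linearity (jump) near x = 0.7
    return 1.0 if x > 0.7 else 0.1 * x

def adaptive_weighted_sample(model, lo, hi, n_initial=20, n_extra=80, seed=0):
    """Sketch of adaptive weighted sampling for one input feature.

    Phase 1: uniform pilot samples over [lo, hi].
    Phase 2: extra samples drawn from sub-intervals, weighted by the
    absolute change in model output across each interval, so regions
    with strong non-linearities receive proportionally more samples.
    """
    rng = random.Random(seed)
    # Phase 1: uniform pilot grid and model outputs
    pilot = [lo + (hi - lo) * i / (n_initial - 1) for i in range(n_initial)]
    outputs = [model(x) for x in pilot]
    # Weight each interval by how much the output changes across it
    weights = [abs(outputs[i + 1] - outputs[i]) for i in range(n_initial - 1)]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # Phase 2: pick intervals by weight, sample uniformly within each
    samples = list(pilot)
    for _ in range(n_extra):
        i = rng.choices(range(n_initial - 1), weights=weights)[0]
        samples.append(rng.uniform(pilot[i], pilot[i + 1]))
    return sorted(samples)

samples = adaptive_weighted_sample(toy_model, 0.0, 1.0)
# Extra samples concentrate around the jump at x = 0.7, where uniform
# sampling would allot only its proportional share of points
near_jump = sum(1 for x in samples if 0.6 < x < 0.8)
```

Under this sketch, the bulk of the extra budget lands in the sub-interval containing the discontinuity, which is the behavior the abstract motivates for handling strong non-linearities.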
Original language: English
Number of pages: 6
Publication status: Accepted/In press - 6 Sep 2021
MoE publication type: Not Eligible


  • CIU
  • XAI
  • weighted adaptive sampling
  • black-box explanations


