Abstract
Explaining the results of machine learning models is an active
research topic in the Artificial Intelligence (AI) domain, with the objective of
providing mechanisms to understand and interpret the results of the underlying black-box model in a human-understandable form. With this objective, several eXplainable Artificial Intelligence (XAI) methods have been
designed and developed based on varied fundamental principles. Some
methods, such as Local Interpretable Model-agnostic Explanations (LIME)
and SHapley Additive exPlanations (SHAP), are based on a surrogate model,
while others, such as Contextual Importance and Utility (CIU), do not create or rely on a surrogate model to generate their explanations. Despite the
difference in underlying principles, these methods use different sampling
techniques, such as uniform sampling and weighted sampling, for generating explanations. CIU, which emphasizes context-aware decision explanation,
employs a uniform sampling method for the generation of representative
samples. In this research, we target uniform sampling methods, whose
samples are not guaranteed to be representative in the presence of strong non-linearities or exceptional input feature
value combinations. The objective of this research is to develop a sampling method that addresses these concerns. To that end, a new
adaptive weighted sampling method is proposed. To verify its efficacy in generating explanations, the proposed method has been
integrated with CIU and tested by deploying the special test case
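The contrast the abstract draws between uniform sampling and adaptive weighted sampling can be sketched in a few lines. The snippet below is an illustrative toy, not the paper's actual algorithm: `adaptive_weighted_samples` is a hypothetical scheme that draws a uniform pilot set, weights intervals by local output variation, and resamples proportionally, so regions with strong non-linearities receive more samples than plain uniform sampling would give them.

```python
import numpy as np

def uniform_samples(lo, hi, n, rng):
    """Plain uniform sampling over a feature's range (the baseline
    perturbation strategy the abstract attributes to CIU, simplified)."""
    return rng.uniform(lo, hi, size=n)

def adaptive_weighted_samples(f, lo, hi, n, n_pilot, rng):
    """Hypothetical adaptive weighted sampling: estimate where the
    model output f varies most, then concentrate samples there."""
    pilot = np.sort(rng.uniform(lo, hi, size=n_pilot))
    out = f(pilot)
    # Weight each interval between neighbouring pilot points by the
    # local output variation across it (plus a tiny floor so constant
    # regions still have nonzero probability).
    weights = np.abs(np.diff(out)) + 1e-12
    weights /= weights.sum()
    # Pick intervals proportionally to their weight, then sample
    # uniformly inside each chosen interval.
    idx = rng.choice(len(weights), size=n, p=weights)
    return rng.uniform(pilot[idx], pilot[idx + 1])
```

For a model with a sharp decision boundary (e.g. a step at 0.5), the adaptive variant places nearly all samples around the boundary, while uniform sampling spreads them evenly and may under-represent the non-linear region.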
Original language | English |
---|---|
Title of host publication | ESANN 2021 Proceedings - 29th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning |
Publisher | European Symposium on Artificial Neural Networks (ESANN) |
Pages | 593-598 |
Number of pages | 6 |
ISBN (Electronic) | 9782875870827 |
DOIs | |
Publication status | Published - 2021 |
MoE publication type | A4 Article in a conference publication |
Event | European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (Virtual/Online), Bruges, Belgium; Duration: 6 Oct 2021 → 8 Oct 2021; Conference number: 29 |
Conference
Conference | European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning |
---|---|
Abbreviated title | ESANN |
Country/Territory | Belgium |
City | Bruges |
Period | 06/10/2021 → 08/10/2021 |
Keywords
- CIU
- XAI
- weighted adaptive sampling
- black-box explanations