Abstract
Explaining node predictions in graph neural networks (GNNs) often boils down to finding graph substructures that preserve those predictions. Finding these structures usually requires back-propagating through the GNN, bonding the complexity (e.g., number of layers) of the GNN to the cost of explaining it. This naturally raises the question: can we break this bond by explaining a simpler surrogate GNN? To answer the question, we propose Distill n' Explain (DnX). First, DnX learns a surrogate GNN via knowledge distillation. Then, DnX extracts node- or edge-level explanations by solving a simple convex program. We also propose FastDnX, a faster version of DnX that leverages the linear decomposition of our surrogate model. Experiments show that DnX and FastDnX often outperform state-of-the-art GNN explainers while being orders of magnitude faster. Additionally, we support our empirical findings with theoretical results linking the quality of the surrogate model (i.e., the distillation error) to the faithfulness of explanations.
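To make the two-step recipe concrete, below is a minimal sketch of the distillation step in PyTorch. It assumes, as the abstract's mention of a "linear decomposition" suggests, that the surrogate is a linear graph model whose logits are `A_hat^L @ X @ Theta`; the function name `distill_surrogate` and all hyperparameters are illustrative, not the authors' reference implementation.

```python
import torch
import torch.nn.functional as F

def distill_surrogate(teacher_logits, A_hat, X, num_classes,
                      num_hops=2, epochs=200, lr=1e-2):
    """Fit a linear surrogate so its predictions match a teacher GNN.

    teacher_logits: (n, c) logits from the pretrained teacher GNN.
    A_hat:          (n, n) normalized adjacency (dense, for brevity).
    X:              (n, d) node features.
    """
    # With no nonlinearities, the L-hop feature propagation can be
    # precomputed once; only the linear head Theta is learned.
    H = X
    for _ in range(num_hops):
        H = A_hat @ H
    Theta = torch.zeros(X.size(1), num_classes, requires_grad=True)
    opt = torch.optim.Adam([Theta], lr=lr)
    teacher_probs = teacher_logits.softmax(dim=-1)
    for _ in range(epochs):
        opt.zero_grad()
        student_log_probs = (H @ Theta).log_softmax(dim=-1)
        # Distillation loss: KL(teacher || student) on soft predictions.
        loss = F.kl_div(student_log_probs, teacher_probs,
                        reduction="batchmean")
        loss.backward()
        opt.step()
    return Theta.detach(), H
```

Under this assumption, the surrogate's logit for node v decomposes as a sum of per-node contributions `A_hat^L[v, u] * (X[u] @ Theta)`, so node importances can be read off the decomposition directly rather than by solving an optimization problem; this is the kind of shortcut FastDnX exploits, though the exact scoring rule is given in the paper.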
Original language | English |
---|---|
Title of host publication | Proceedings of The 26th International Conference on Artificial Intelligence and Statistics (AISTATS) 2023 |
Editors | Francisco Ruiz, Jennifer Dy, Jan-Willem van de Meent |
Publisher | JMLR |
Pages | 6199-6214 |
Number of pages | 16 |
Publication status | Published - 2023 |
MoE publication type | A4 Conference publication |
Event | International Conference on Artificial Intelligence and Statistics, Valencia, Spain. Duration: 25 Apr 2023 → 27 Apr 2023. Conference number: 26. http://aistats.org/aistats2023/ |
Publication series
Name | Proceedings of Machine Learning Research |
---|---|
Publisher | JMLR |
Volume | 206 |
ISSN (Print) | 2640-3498 |
Conference
Conference | International Conference on Artificial Intelligence and Statistics |
---|---|
Abbreviated title | AISTATS |
Country/Territory | Spain |
City | Valencia |
Period | 25/04/2023 → 27/04/2023 |
Internet address | http://aistats.org/aistats2023/ |