Generating Explanations for Molecular Property Predictions in Graph Neural Networks

Avleen Malhi*, John Patrice Matekenya, Käry Främling

*Corresponding author of this work

Research output: Chapter in book/conference proceedings › Conference article in proceedings › Scientific › peer-reviewed

Abstract

Graph neural networks have helped researchers overcome the challenges of deep learning on graphs in non-Euclidean space. As with most deep learning algorithms, although the models produce good predictions, explaining those predictions is often challenging. This paper focuses on applying graph neural networks to predict the properties of molecules in molecular datasets, with the aim of exploring the generation of explanations for molecular property predictions. Four graph neural networks and seven explainers are chosen to generate and compare the quality of the explanations that the explainers give for each model's predictions. Explanation quality is measured by sparsity, fidelity, and fidelity inverse. It is observed that the models find it difficult to learn node embeddings when there is a class imbalance, despite achieving 75% accuracy and an F1-score of 66%. It is also observed that, for all datasets, sparsity had a statistically significant effect on fidelity: as more important features are masked, the quality of the explanation decreases. The effect of sparsity on fidelity inverse varied from dataset to dataset; as more unimportant features were masked, the quality of the explanations improved in some datasets, while the change was not significant in others. Finally, it was observed that explanation quality differs across models. However, larger neural networks produced better predictions in our experiments, and the quality of the explanations of those predictions was not lower than that of smaller neural networks.
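
For readers unfamiliar with these evaluation metrics, the following is a minimal sketch of the definitions of fidelity, fidelity inverse, and sparsity as they are commonly stated in the GNN explainability literature; the exact formulations used in the paper may differ in detail. Here \(f\) is the trained GNN, \(G_i\) is the \(i\)-th input graph with predicted class \(y_i\), \(m_i\) is the set of features the explainer marks as important, \(M_i\) is the full feature set of \(G_i\), \(G_i^{m_i}\) keeps only the important features, and \(G_i^{1-m_i}\) removes them.

\[
\mathrm{Fidelity} = \frac{1}{N}\sum_{i=1}^{N}\Big( f(G_i)_{y_i} - f\big(G_i^{\,1-m_i}\big)_{y_i} \Big),
\qquad
\mathrm{Fidelity}^{\mathrm{inv}} = \frac{1}{N}\sum_{i=1}^{N}\Big( f(G_i)_{y_i} - f\big(G_i^{\,m_i}\big)_{y_i} \Big),
\]
\[
\mathrm{Sparsity} = \frac{1}{N}\sum_{i=1}^{N}\Big( 1 - \frac{|m_i|}{|M_i|} \Big).
\]

Under these definitions, high fidelity means the prediction degrades when important features are removed (a faithful explanation), low fidelity inverse means keeping only the important features preserves the prediction, and high sparsity means the explanation relies on few features.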

Original language: English
Title: Advances in Explainability, Agents, and Large Language Models
Subtitle: 1st International Workshop on Causality, Agents and Large Models, CALM 2024, Proceedings
Editors: Yazan Mualla, Liuwen Yu, Davide Liga, Igor Tchappi, Réka Markovich
Publisher: Springer
Pages: 20-32
Number of pages: 13
ISBN (print): 9783031891021
DOI - permanent links
Status: Published - 2025
OKM publication type: A4 Article in conference proceedings
Event: International Workshop on Causality, Agents and Large Models - Kyoto, Japan
Duration: 18 Nov 2024 - 19 Nov 2024
Conference number: 1

Publication series

Name: Communications in Computer and Information Science
Volume: 2471 CCIS
ISSN (print): 1865-0929
ISSN (electronic): 1865-0937

Conference

Conference: International Workshop on Causality, Agents and Large Models
Abbreviated title: CALM
Country/Territory: Japan
City: Kyoto
Period: 18/11/2024 - 19/11/2024
