Explainability in Graph Neural Networks: Molecular Insights
Explaining how Graph Neural Networks reason is essential for validating their structural understanding. This project uses GNNExplainer to reveal which graph components drive model behavior, analyzing influential nodes, edges, and functional motifs, and evaluating explanation faithfulness through principled sparsification and substructure tests. The method is applied to molecular graphs from the MUTAG dataset to uncover which atomic interactions and functional groups most strongly influence mutagenicity predictions, linking model reasoning to meaningful chemical insight.
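The core workflow looks roughly like the sketch below: train a small graph classifier on MUTAG, then fit GNNExplainer through PyTorch Geometric's `Explainer` interface to score each edge's contribution to a prediction. This is a minimal illustration, not the project's exact notebook; the GCN architecture, hidden size, epoch counts, and explainer settings are assumptions chosen for brevity.

```python
import torch
import torch.nn.functional as F
from torch_geometric.datasets import TUDataset
from torch_geometric.explain import Explainer, GNNExplainer
from torch_geometric.loader import DataLoader
from torch_geometric.nn import GCNConv, global_mean_pool

dataset = TUDataset(root="data/TUDataset", name="MUTAG")
loader = DataLoader(dataset, batch_size=32, shuffle=True)

class GCN(torch.nn.Module):
    """Small two-layer GCN with mean-pool readout for graph classification."""
    def __init__(self, hidden=64):
        super().__init__()
        self.conv1 = GCNConv(dataset.num_node_features, hidden)
        self.conv2 = GCNConv(hidden, hidden)
        self.lin = torch.nn.Linear(hidden, dataset.num_classes)

    def forward(self, x, edge_index, batch=None):
        x = F.relu(self.conv1(x, edge_index))
        x = F.relu(self.conv2(x, edge_index))
        x = global_mean_pool(x, batch)  # node embeddings -> one graph embedding
        return self.lin(x)              # raw class logits

model = GCN()
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
for epoch in range(50):  # short illustrative training run
    for batch in loader:
        optimizer.zero_grad()
        loss = F.cross_entropy(model(batch.x, batch.edge_index, batch.batch),
                               batch.y)
        loss.backward()
        optimizer.step()

# GNNExplainer learns soft masks over edges and node features that
# preserve the trained model's prediction for a single molecule.
explainer = Explainer(
    model=model,
    algorithm=GNNExplainer(epochs=200),
    explanation_type="model",
    node_mask_type="attributes",
    edge_mask_type="object",
    model_config=dict(mode="multiclass_classification",
                      task_level="graph",
                      return_type="raw"),
)
data = dataset[0]  # one molecular graph
batch_vec = torch.zeros(data.num_nodes, dtype=torch.long)
explanation = explainer(data.x, data.edge_index, batch=batch_vec)
print(explanation.edge_mask)  # per-edge importance scores
```

High-scoring edges typically trace out chemically meaningful substructures; in the GNNExplainer literature, MUTAG explanations are often reported to highlight carbon rings and nitro (NO2) groups associated with mutagenicity.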

Language
- English
Topic
- Artificial Intelligence
Skills You Will Learn
- Artificial Intelligence, Graph Neural Networks (GNNs), Molecular Modeling, Explainable Artificial Intelligence (XAI), Cheminformatics, PyTorch Geometric (PyG)
Offered By
- IBMSkillsNetwork
Estimated Effort
- 60 minutes
Platform
- SkillsNetwork
Last Update
- January 28, 2026
What You’ll Learn
- Understand the core principles behind explaining Graph Neural Networks.
- Use GNNExplainer to identify influential substructures and graph components.
- Evaluate explanation quality using faithfulness-based tests (sketched below).
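A faithfulness test can be sketched as a sparsification check: keep only the highest-scoring edges from the explanation, rerun the model on that subgraph, and see whether the predicted-class probability survives. The snippet below reuses the `model`, `data`, and `explanation` objects from the sketch above; the 20% keep-ratio is an arbitrary assumption.

```python
import torch

def sparsified_prediction(model, data, edge_mask, keep_ratio=0.2):
    """Rerun the model on the subgraph induced by the top-scoring edges."""
    k = max(1, int(keep_ratio * edge_mask.numel()))
    top = edge_mask.topk(k).indices                 # most important edges
    batch = torch.zeros(data.num_nodes, dtype=torch.long)
    return model(data.x, data.edge_index[:, top], batch).softmax(dim=-1)

# Compare the prediction on the full graph vs. the sparsified one.
batch = torch.zeros(data.num_nodes, dtype=torch.long)
full = model(data.x, data.edge_index, batch).softmax(dim=-1)
sparse = sparsified_prediction(model, data, explanation.edge_mask)
pred = int(full.argmax())
print(f"p(pred class) full graph: {full[0, pred].item():.3f} | "
      f"top 20% of edges: {sparse[0, pred].item():.3f}")
```

A faithful explanation retains most of the predicted-class probability under aggressive sparsification; the complementary test, removing the top edges instead of keeping them, should noticeably degrade the prediction.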
Instructors
Zikai Dou
Data Scientist at IBM
Ph.D. Candidate in Computer Science at McMaster University, specializing in Federated Learning (FL), Graph Neural Networks (GNNs), and Computer Vision (CV). I develop privacy-preserving, distributed AI systems that tackle real-world challenges in healthcare, finance, and enterprise applications. Passionate about bridging academic research with industry impact to advance scalable and trustworthy AI.
Contributors
Joseph Santarcangelo
Senior Data Scientist at IBM
Joseph has a Ph.D. in Electrical Engineering; his research focused on using machine learning, signal processing, and computer vision to determine how videos impact human cognition. Joseph has been working for IBM since he completed his Ph.D.
Wojciech "Victor" Fulmyk
Data Scientist at IBM
Wojciech "Victor" Fulmyk is a Data Scientist and AI Engineer on IBM’s Skills Network team, where he focuses on helping learners build expertise in data science, artificial intelligence, and machine learning. He is also a Kaggle competition expert, currently ranked in the top 3% globally among competition participants. An economist by training, he applies his knowledge of statistics and econometrics to bring a distinctive perspective to AI and ML—one that considers both technical depth and broader socioeconomic implications.