GNNExplainer: Generating Explanations for Graph Neural Networks

Advances in Neural Information Processing Systems 32 (NeurIPS 2019): 9240-9251

Cited by: 95 | Views: 562 | Indexed in: EI, WOS

Abstract

Graph Neural Networks (GNNs) are a powerful tool for machine learning on graphs. GNNs combine node feature information with the graph structure by recursively passing neural messages along edges of the input graph. However, incorporating both graph structure and feature information leads to complex models, and explaining predictions made by GNNs remains unsolved. …

Introduction
  • In many real-world applications, including social, information, chemical, and biological domains, data can be naturally modeled as graphs [9, 41, 49].
  • Graphs are powerful data representations but are challenging to work with because they require modeling of rich relational information as well as node feature information [45, 46]
  • To address this challenge, Graph Neural Networks (GNNs) have emerged as state-of-the-art for machine learning on graphs, due to their ability to recursively incorporate information from neighboring nodes in the graph, naturally capturing both graph structure and node features [16, 21, 40, 44].
  • The GNN model Φ learns a conditional distribution P_Φ(Y | G_c, X_c), where Y is a random variable over the labels {1, …, C}, indicating the probability that a node belongs to each of the C classes (formalized below)
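A minimal formalization of the notation in the bullet above, written as a short LaTeX sketch; the second display is the mutual-information objective stated in the original GNNEXPLAINER paper and is included here only for context, since it does not appear elsewhere on this page:

    % Prediction of the GNN \Phi for a node v: a distribution over the C
    % classes, conditioned on v's computation graph G_c and node features X_c.
    P_\Phi(Y \mid G_c, X_c), \qquad Y \in \{1, \dots, C\}

    % Objective optimized by GNNEXPLAINER (per the original paper): find a
    % compact subgraph G_S of G_c, with associated node features X_S, that
    % retains maximal mutual information with the model's prediction.
    \max_{G_S} \; \mathrm{MI}\big(Y, (G_S, X_S)\big)
        \;=\; H(Y) \;-\; H\big(Y \mid G = G_S,\ X = X_S\big)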
Highlights
  • In many real-world applications, including social, information, chemical, and biological domains, data can be naturally modeled as graphs [9, 41, 49]
  • The ability to understand a Graph Neural Network's predictions is important and useful for several reasons: (i) it can increase trust in the GNN model, (ii) it improves the model's transparency in a growing number of decision-critical applications pertaining to fairness, privacy, and other safety challenges [11], and (iii) it allows practitioners to understand the network's characteristics and to identify and correct systematic patterns of mistakes made by models before deploying them in the real world
  • We investigate the following questions: Does GNNEXPLAINER provide sensible explanations? How do the explanations compare to ground-truth knowledge? How does GNNEXPLAINER perform on various graph-based prediction tasks? Can it explain predictions made by different Graph Neural Networks?
  • We present GNNEXPLAINER, a novel method for explaining predictions of any Graph Neural Network on any graph-based machine learning task without requiring modification of the underlying GNN architecture or re-training
  • We show how GNNEXPLAINER can leverage the recursive neighborhood-aggregation scheme of graph neural networks to identify important graph pathways and to highlight the relevant node feature information that is passed along the edges of those pathways
  • On synthetic graphs with planted network motifs, which play a role in determining node labels, we show that GNNEXPLAINER accurately identifies the subgraphs/motifs as well as the node features that determine node labels, outperforming alternative baseline approaches by up to 43.0% in explanation accuracy (a sketch of such a planted-motif dataset follows this list)
  • While the problem of explainability of machine-learning predictions has received substantial attention in recent literature, our work is unique in that it presents an approach that operates on relational structures (graphs with rich node features) and provides a straightforward interface for making sense of Graph Neural Network predictions, debugging GNN models, and identifying systematic patterns of mistakes
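Below is a minimal Python sketch of the kind of planted-motif synthetic dataset referenced above: small motifs are attached to a base graph and nodes are labeled by motif membership. The generator, its parameters, and the specific house-shaped motif are illustrative assumptions and not the paper's released benchmark code (the paper's datasets, e.g. TREE-GRID, differ in detail):

    import random
    import networkx as nx

    def planted_motif_graph(n_base=300, n_motifs=60, seed=0):
        """Attach 5-node 'house' motifs to a Barabasi-Albert base graph and
        label every node by motif membership (0 = base graph, 1 = in a motif)."""
        rng = random.Random(seed)
        G = nx.barabasi_albert_graph(n_base, m=2, seed=seed)
        labels = {v: 0 for v in G.nodes}
        next_id = n_base
        for _ in range(n_motifs):
            house = list(range(next_id, next_id + 5))
            # A "house": a 4-cycle (the walls) plus a roof node on top.
            G.add_edges_from([(house[0], house[1]), (house[1], house[2]),
                              (house[2], house[3]), (house[3], house[0]),
                              (house[0], house[4]), (house[1], house[4])])
            # Attach the motif to a randomly chosen node of the base graph.
            G.add_edge(house[0], rng.randrange(n_base))
            labels.update({v: 1 for v in house})
            next_id += 5
        return G, labels

    G, labels = planted_motif_graph()
    print(G.number_of_nodes(), "nodes,", sum(labels.values()), "motif nodes")

A GNN node classifier trained on labels of this kind can then be explained, and the recovered edges compared against the planted motif, which is exactly the evaluation described in the Results below.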
Methods
  • GNNEXPLAINER treats the trained GNN Φ as fixed and explains the prediction for a node v by operating on v's computation graph G_c, i.e., the recursive neighborhood-aggregation structure that determines Φ's prediction; the number of parameters in its optimization depends on the size of G_c
  • Within G_c, GNNEXPLAINER identifies the important graph pathways and the relevant node feature information passed along their edges; a hedged sketch of this style of optimization follows
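The sketch below illustrates, in PyTorch, the style of optimization GNNEXPLAINER uses in the original paper: the trained GNN is kept fixed while a soft mask over the edges of the node's computation graph is learned so that the masked graph still reproduces the model's prediction, with regularizers keeping the explanation small and near-binary. The model(x, adj) interface, the hyperparameter values, and the exact form of the regularizers are assumptions for illustration; the released implementation differs in detail:

    import torch
    import torch.nn.functional as F

    def explain_node(model, x, adj, node_idx, epochs=200, lr=0.01,
                     size_coeff=0.005, ent_coeff=0.1):
        """Learn a soft mask over the edges of node_idx's computation graph.
        Assumes model(x, adj) returns per-node class logits (an assumption)."""
        model.eval()
        with torch.no_grad():
            target = model(x, adj)[node_idx].argmax()        # prediction to explain

        edge_mask = torch.nn.Parameter(torch.randn_like(adj) * 0.1)
        opt = torch.optim.Adam([edge_mask], lr=lr)

        for _ in range(epochs):
            opt.zero_grad()
            m = torch.sigmoid(edge_mask)                     # soft mask in (0, 1)
            logits = model(x, adj * m)[node_idx]             # GNN on masked graph
            pred_loss = F.cross_entropy(logits.unsqueeze(0), target.unsqueeze(0))
            size_loss = size_coeff * m.sum()                 # prefer small explanations
            ent = -(m * (m + 1e-8).log() + (1 - m) * (1 - m + 1e-8).log())
            ent_loss = ent_coeff * ent.mean()                # push mask toward 0/1
            (pred_loss + size_loss + ent_loss).backward()
            opt.step()

        # Importance weight per existing edge of the computation graph.
        return torch.sigmoid(edge_mask).detach() * (adj > 0)

A node-feature mask can be learned in the same fashion; the per-edge weights returned here are the importance weights that the Results section scores against ground-truth explanation edges.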
Results
  • The authors investigate the following questions: Does GNNEXPLAINER provide sensible explanations? How do the explanations compare to ground-truth knowledge? How does GNNEXPLAINER perform on various graph-based prediction tasks? Can it explain predictions made by different GNNs?

    1) Quantitative analyses.
  • The authors have ground-truth explanations for the synthetic datasets and use them to calculate explanation accuracy for all explanation methods.
  • The authors formalize the explanation problem as a binary classification task, where edges in the ground-truth explanation are treated as labels and the importance weights given by an explainability method are viewed as prediction scores (a minimal scoring sketch follows this list).
  • A better explainability method predicts high scores for edges that are in the ground-truth explanation, and achieves higher explanation accuracy.
  • GNNEXPLAINER achieves up to 43.0% higher accuracy on the hardest TREE-GRID dataset
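A minimal sketch of the binary-classification evaluation described above: ground-truth explanation edges are the positive labels and the explainer's importance weights are the scores. The use of ROC AUC as the accuracy measure and the helper name explanation_accuracy are assumptions for illustration, since the page does not spell out the exact protocol:

    from sklearn.metrics import roc_auc_score

    def explanation_accuracy(edge_importance, ground_truth_edges, all_edges):
        """Treat ground-truth explanation edges as positive labels and the
        explainer's importance weights as prediction scores."""
        y_true = [1 if e in ground_truth_edges else 0 for e in all_edges]
        y_score = [edge_importance.get(e, 0.0) for e in all_edges]
        return roc_auc_score(y_true, y_score)

    # Example: three edges, two of which belong to the ground-truth motif.
    acc = explanation_accuracy(
        edge_importance={(0, 1): 0.9, (1, 2): 0.8, (2, 3): 0.1},
        ground_truth_edges={(0, 1), (1, 2)},
        all_edges=[(0, 1), (1, 2), (2, 3)],
    )
    print(acc)  # 1.0: all ground-truth edges are ranked above the rest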
Conclusion
  • The authors present GNNEXPLAINER, a novel method for explaining predictions of any GNN on any graph-based machine learning task without requiring modification of the underlying GNN architecture or re-training.
  • While the problem of explainability of machine-learning predictions has received substantial attention in recent literature, this work is unique in that it presents an approach that operates on relational structures (graphs with rich node features) and provides a straightforward interface for making sense of GNN predictions, debugging GNN models, and identifying systematic patterns of mistakes
Summary
  • Introduction:

    In many real-world applications, including social, information, chemical, and biological domains, data can be naturally modeled as graphs [9, 41, 49].
  • Graphs are powerful data representations but are challenging to work with because they require modeling of rich relational information as well as node feature information [45, 46]
  • To address this challenge, Graph Neural Networks (GNNs) have emerged as state-of-the-art for machine learning on graphs, due to their ability to recursively incorporate information from neighboring nodes in the graph, naturally capturing both graph structure and node features [16, 21, 40, 44].
  • The GNN model Φ learns a conditional distribution P_Φ(Y | G_c, X_c), where Y is a random variable over the labels {1, …, C}, indicating the probability that a node belongs to each of the C classes
  • Objectives:

    The number of parameters in GNNEXPLAINER's optimization depends on the size of the computation graph G_c of the node v whose prediction the authors aim to explain
  • Methods:

    GNNEXPLAINER treats the trained GNN as fixed and explains the prediction for a node v by operating on v's computation graph G_c, identifying the important graph pathways and the relevant node feature information passed along their edges (see the Methods section above)
  • Results:

    The authors investigate the following questions: Does GNNEXPLAINER provide sensible explanations? How do the explanations compare to ground-truth knowledge? How does GNNEXPLAINER perform on various graph-based prediction tasks? Can it explain predictions made by different GNNs?

    1) Quantitative analyses.
  • The authors have ground-truth explanations for the synthetic datasets and use them to calculate explanation accuracy for all explanation methods.
  • The authors formalize the explanation problem as a binary classification task, where edges in the ground-truth explanation are treated as labels and the importance weights given by an explainability method are viewed as prediction scores.
  • A better explainability method predicts high scores for edges that are in the ground-truth explanation, and achieves higher explanation accuracy.
  • GNNEXPLAINER achieves up to 43.0% higher accuracy on the hardest TREE-GRID dataset
  • Conclusion:

    The authors present GNNEXPLAINER, a novel method for explaining predictions of any GNN on any graph-based machine learning task without requiring modification of the underlying GNN architecture or re-training.
  • While the problem of explainability of machine-learning predictions has received substantial attention in recent literature, this work is unique in that it presents an approach that operates on relational structures (graphs with rich node features) and provides a straightforward interface for making sense of GNN predictions, debugging GNN models, and identifying systematic patterns of mistakes
Tables
  • Table 1: Illustration of synthetic datasets (refer to "Synthetic datasets" for details) together with performance evaluation of GNNEXPLAINER and alternative baseline explainability approaches
Related work
  • Although the problem of explaining GNNs is not well studied, the related problems of interpretability and neural debugging have received substantial attention in machine learning. At a high level, these interpretability methods for non-graph neural networks can be grouped into two main families.
Funding
  • We gratefully acknowledge the support of DARPA under FA865018C7880 (ASED) and MSC; NIH under No U54EB020405 (Mobilize); ARO under No 38796-Z8424103 (MURI); IARPA under No 2017-17071900005 (HFC), NSF under No OAC-1835598 (CINES) and HDR; Stanford Data Science Initiative, Chan Zuckerberg Biohub, JD.com, Amazon, Boeing, Docomo, Huawei, Hitachi, Observe, Siemens, UST Global
Reference
  • A. Adadi and M. Berrada. Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI). IEEE Access, 6:52138–52160, 2018.
  • J. Adebayo, J. Gilmer, M. Muelly, I. Goodfellow, M. Hardt, and B. Kim. Sanity checks for saliency maps. In NeurIPS, 2018.
  • M. Gethsiyal Augasta and T. Kathirvalavakumar. Reverse Engineering the Neural Networks for Rule Extraction in Classification Problems. Neural Processing Letters, 35(2):131–150, April 2012.
  • Peter W Battaglia, Jessica B Hamrick, Victor Bapst, Alvaro Sanchez-Gonzalez, Vinicius Zambaldi, Mateusz Malinowski, Andrea Tacchetti, David Raposo, Adam Santoro, Ryan Faulkner, et al. Relational inductive biases, deep learning, and graph networks. arXiv:1806.01261, 2018.
  • J. Chen, J. Zhu, and L. Song. Stochastic training of graph convolutional networks with variance reduction. In ICML, 2018.
  • Jianbo Chen, Le Song, Martin J Wainwright, and Michael I Jordan. Learning to explain: An information-theoretic perspective on model interpretation. arXiv preprint arXiv:1802.07814, 2018.
  • Jie Chen, Tengfei Ma, and Cao Xiao. Fastgcn: fast learning with graph convolutional networks via importance sampling. In ICLR, 2018.
  • Z. Chen, L. Li, and J. Bruna. Supervised community detection with line graph neural networks. In ICLR, 2019.
  • E. Cho, S. Myers, and J. Leskovec. Friendship and mobility: user movement in location-based social networks. In KDD, 2011.
  • A. Debnath et al. Structure-activity relationship of mutagenic aromatic and heteroaromatic nitro compounds. correlation with molecular orbital energies and hydrophobicity. Journal of Medicinal Chemistry, 34(2):786–797, 1991.
  • F. Doshi-Velez and B. Kim. Towards A Rigorous Science of Interpretable Machine Learning. 2017. arXiv: 1702.08608.
  • D. Duvenaud et al. Convolutional networks on graphs for learning molecular fingerprints. In NIPS, 2015.
  • D. Erhan, Y. Bengio, A. Courville, and P. Vincent. Visualizing higher-layer features of a deep network. University of Montreal, 1341(3):1, 2009.
  • A. Fisher, C. Rudin, and F. Dominici. All Models are Wrong but many are Useful: Variable Importance for Black-Box, Proprietary, or Misspecified Prediction Models, using Model Class Reliance. January 2018. arXiv: 1801.01489.
  • R. Guidotti et al. A Survey of Methods for Explaining Black Box Models. ACM Comput. Surv., 51(5):93:1–93:42, 2018.
  • W. Hamilton, Z. Ying, and J. Leskovec. Inductive representation learning on large graphs. In NIPS, 2017.
  • G. Hooker. Discovering additive structure in black box functions. In KDD, 2004.
  • W.B. Huang, T. Zhang, Y. Rong, and J. Huang. Adaptive sampling towards fast graph representation learning. In NeurIPS, 2018.
  • Bo Kang, Jefrey Lijffijt, and Tijl De Bie. Explaine: An approach for explaining network embedding-based link predictions. arXiv:1904.12694, 2019.
  • Diederik P. Kingma and Max Welling. Auto-encoding variational Bayes. In ICLR, 2014.
  • T. N. Kipf and M. Welling. Semi-supervised classification with graph convolutional networks. In ICLR, 2016.
  • Thomas Kipf, Ethan Fetaya, Kuan-Chieh Wang, Max Welling, and Richard Zemel. Neural relational inference for interacting systems. In ICML, 2018.
  • P. W. Koh and P. Liang. Understanding black-box predictions via influence functions. In ICML, 2017.
  • Srijan Kumar, William L Hamilton, Jure Leskovec, and Dan Jurafsky. Community interaction and conflict on the web. In WWW, pages 933–943, 2018.
  • H. Lakkaraju, E. Kamar, R. Caruana, and J. Leskovec. Interpretable & Explorable Approximations of Black Box Models, 2017.
  • Y. Li, D. Tarlow, M. Brockschmidt, and R. Zemel. Gated graph sequence neural networks. arXiv:1511.05493, 2015.
  • S. Lundberg and Su-In Lee. A Unified Approach to Interpreting Model Predictions. In NIPS, 2017.
  • D. Neil et al. Interpretable Graph Convolutional Neural Networks for Inference on Noisy Knowledge Graphs. In ML4H Workshop at NeurIPS, 2018.
  • M. Ribeiro, S. Singh, and C. Guestrin. Why should i trust you?: Explaining the predictions of any classifier. In KDD, 2016.
  • G. J. Schmitz, C. Aldrich, and F. S. Gouws. ANN-DT: an algorithm for extraction of decision trees from artificial neural networks. IEEE Transactions on Neural Networks, 1999.
  • A. Shrikumar, P. Greenside, and A. Kundaje. Learning Important Features Through Propagating Activation Differences. In ICML, 2017.
  • M. Sundararajan, A. Taly, and Q. Yan. Axiomatic Attribution for Deep Networks. In ICML, 2017.
  • P. Velickovic, G. Cucurull, A. Casanova, A. Romero, P. Lio, and Y. Bengio. Graph attention networks. In ICLR, 2018.
  • T. Xie and J. Grossman. Crystal graph convolutional neural networks for an accurate and interpretable prediction of material properties. In Phys. Rev. Lett., 2018.
  • K. Xu, W. Hu, J. Leskovec, and S. Jegelka. How powerful are graph neural networks? In ICLR, 2019.
  • K. Xu, C. Li, Y. Tian, T. Sonobe, K. Kawarabayashi, and S. Jegelka. Representation learning on graphs with jumping knowledge networks. In ICML, 2018.
  • Pinar Yanardag and SVN Vishwanathan. Deep graph kernels. In KDD, pages 1365–1374. ACM, 2015.
  • C. Yeh, J. Kim, I. Yen, and P. Ravikumar. Representer point selection for explaining deep neural networks. In NeurIPS, 2018.
  • R. Ying, R. He, K. Chen, P. Eksombatchai, W. Hamilton, and J. Leskovec. Graph convolutional neural networks for web-scale recommender systems. In KDD, 2018.
  • Z. Ying, J. You, C. Morris, X. Ren, W. Hamilton, and J. Leskovec. Hierarchical graph representation learning with differentiable pooling. In NeurIPS, 2018.
  • J. You, B. Liu, R. Ying, V. Pande, and J. Leskovec. Graph convolutional policy network for goal-directed molecular graph generation. In NeurIPS, 2018.
  • J. You, Rex Ying, and J. Leskovec. Position-aware graph neural networks. In ICML, 2019.
  • M. Zeiler and R. Fergus. Visualizing and Understanding Convolutional Networks. In ECCV, 2014.
  • M. Zhang and Y. Chen. Link prediction based on graph neural networks. In NIPS, 2018.
  • Z. Zhang, P. Cui, and W. Zhu. Deep Learning on Graphs: A Survey. arXiv:1812.04202, 2018.
  • J. Zhou, G. Cui, Z. Zhang, C. Yang, Z. Liu, and M. Sun. Graph Neural Networks: A Review of Methods and Applications. arXiv:1812.08434, 2018.
  • J. Zilke, E. Loza Mencia, and F. Janssen. DeepRED - Rule Extraction from Deep Neural Networks. In Discovery Science. Springer International Publishing, 2016.
  • L. Zintgraf, T. Cohen, T. Adel, and M. Welling. Visualizing deep neural network decisions: Prediction difference analysis. In ICLR, 2017.
  • M. Zitnik, M. Agrawal, and J. Leskovec. Modeling polypharmacy side effects with graph convolutional networks. Bioinformatics, 34, 2018.