View-based Explanations for Graph Neural Networks (Extended Abstract)
IEEE International Conference on Data Engineering (2024)
Abstract
Generating explanations for graph neural networks (GNNs) is crucial to understanding their decision-making processes, especially for complex analytical tasks such as graph classification [1]–[3]. Existing approaches [4]–[13] are limited to explaining individual instances or specific class labels, and they mainly define explanations as crucial input features, typically in the form of numerical encodings [14]. As a result, they fall short of providing targeted and configurable explanations for multiple class labels of interest. Moreover, they may return explanation structures that are large or excessive in number, and hence hard to comprehend. Finally, these structures are often not directly accessible and cannot be queried easily, which poses a challenge for expert users who seek to inspect the specific reasoning behind a GNN's decision based on domain knowledge.
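To make "explanations as crucial input features in numerical encodings" concrete, below is a minimal sketch of the soft-mask style of explanation used by many existing explainers (in the spirit of GNNExplainer), not the view-based method this paper proposes. An edge mask is optimized so that the masked graph preserves the model's prediction while staying sparse; the resulting numeric mask is the explanation. The model, graph, and hyperparameters here are illustrative assumptions.

```
import torch
import torch.nn.functional as F

# Hypothetical one-layer GNN over a dense adjacency matrix:
# H = ReLU(A @ X @ W), graph embedding = mean-pooled H.
class TinyGNN(torch.nn.Module):
    def __init__(self, in_dim, hid_dim, n_classes):
        super().__init__()
        self.lin1 = torch.nn.Linear(in_dim, hid_dim)
        self.lin2 = torch.nn.Linear(hid_dim, n_classes)

    def forward(self, adj, x):
        h = F.relu(adj @ self.lin1(x))   # one round of message passing
        return self.lin2(h.mean(dim=0))  # graph-level class logits

torch.manual_seed(0)
n, d = 6, 4
adj = (torch.rand(n, n) > 0.5).float()
adj = ((adj + adj.T) > 0).float()        # symmetric, unweighted toy graph
x = torch.randn(n, d)
model = TinyGNN(d, 8, 2)
target = model(adj, x).argmax()          # prediction to be explained

# Learn a soft edge mask M so that A * sigmoid(M) keeps the
# prediction intact while being as sparse as possible.
mask = torch.nn.Parameter(torch.zeros(n, n))
opt = torch.optim.Adam([mask], lr=0.05)
for _ in range(300):
    opt.zero_grad()
    masked_adj = adj * torch.sigmoid(mask)
    logits = model(masked_adj, x)
    loss = F.cross_entropy(logits.unsqueeze(0), target.unsqueeze(0)) \
         + 0.05 * torch.sigmoid(mask).mean()  # sparsity penalty
    loss.backward()
    opt.step()

# Edges with high mask values form the numeric explanation.
importance = (adj * torch.sigmoid(mask)).detach()
print(importance.round(decimals=2))
```

The output is exactly the kind of per-instance numeric encoding the abstract critiques: a matrix of edge weights tied to one prediction, which cannot be configured for multiple class labels of interest or queried with domain knowledge.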
Key words
deep learning, graph neural networks, explainable AI, graph views, data mining, approximation algorithm