Learning Counterfactual Explanation of Graph Neural Networks via Generative Flow Network

IEEE Transactions on Artificial Intelligence (2024)

Abstract
Counterfactual subgraphs explain Graph Neural Networks (GNNs) by answering the question: “How would the prediction change if a certain subgraph were absent from the input instance?” The differentiable proxy adjacency matrix is prevalent in current counterfactual subgraph discovery studies because it avoids exhaustive edge searching. However, a prediction gap arises between feeding GNNs the continuous-valued proxy matrix and the thresholded discrete adjacency matrix, which compromises the optimization of the subgraph generator. Furthermore, the end-to-end learning scheme adopted by the subgraph generator limits the diversity of counterfactual subgraphs. To this end, we propose CF-GFNExplainer, a flow-based approach for learning counterfactual subgraphs. CF-GFNExplainer employs a policy network with a discrete edge-removal scheme to construct counterfactual subgraph generation trajectories. Additionally, we introduce a loss function designed to guide CF-GFNExplainer’s optimization. The discrete adjacency matrix generated in each trajectory eliminates the prediction gap, enhancing the validity of the learned subgraphs. Furthermore, the multi-trajectory sampling strategy adopted in CF-GFNExplainer results in diverse counterfactual subgraphs. Extensive experiments conducted on synthetic and real-world datasets demonstrate the effectiveness of the proposed method in terms of validity and diversity. The data and code of CF-GFNExplainer are available at https://github.com/AmGracee/CF-GFNExplainer.
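
The abstract describes the generation process only at a high level; the sketch below illustrates, in PyTorch, what one discrete edge-removal trajectory of this kind could look like. It is not the authors' implementation: PolicyNet, sample_trajectory, num_steps, and the assumed gnn interface (graph-level class logits computed from node features and an edge list) are all illustrative assumptions.

```python
# Minimal sketch (not the paper's code) of the discrete edge-removal idea:
# start from the full graph, let a policy network score the remaining edges,
# remove one sampled edge per step, and feed the resulting hard (0/1) graph to
# the fixed GNN at every step, so no continuous proxy adjacency is ever used.
import torch
import torch.nn as nn

class PolicyNet(nn.Module):
    """Scores candidate edges from endpoint features; hypothetical architecture."""
    def __init__(self, feat_dim, hidden_dim=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * feat_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, x, edge_index):
        # edge_index: (2, E) tensor of edges still present in the graph
        src, dst = edge_index
        pair_feat = torch.cat([x[src], x[dst]], dim=-1)
        return self.mlp(pair_feat).squeeze(-1)  # one logit per remaining edge

def sample_trajectory(policy, gnn, x, edge_index, target_class, num_steps=5):
    """Remove edges one at a time; stop early if the GNN prediction flips."""
    edges = edge_index.clone()
    removed = []
    for _ in range(num_steps):
        if edges.size(1) == 0:
            break
        logits = policy(x, edges)
        probs = torch.softmax(logits, dim=0)
        idx = torch.multinomial(probs, 1).item()   # sample one edge to drop
        removed.append(edges[:, idx])
        keep = torch.ones(edges.size(1), dtype=torch.bool)
        keep[idx] = False
        edges = edges[:, keep]                     # hard, discrete removal
        pred = gnn(x, edges).argmax(dim=-1)        # evaluate on the 0/1 graph,
        if pred.item() != target_class:            # so no thresholding gap
            break
    return edges, removed
```

Per the abstract, sampling several such trajectories (rather than training a single end-to-end generator) is what yields a diverse set of counterfactual subgraphs instead of one fixed explanation.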
Keywords
Graph Neural Networks, Model Explanation, Counterfactual Subgraph, Generative Flow Network