Learning Invariant Representations of Graph Neural Networks via Cluster Generalization
NeurIPS (2024)
Abstract
Graph neural networks (GNNs) have become increasingly popular in modeling
graph-structured data due to their ability to learn node representations by
aggregating local structure information. However, it is widely acknowledged
that the test graph structure may differ from the training graph structure,
resulting in a structure shift. In this paper, we experimentally find that the
performance of GNNs drops significantly when the structure shift happens,
suggesting that the learned models may be biased towards specific structure
patterns. To address this challenge, we propose the Cluster Information
Transfer (CIT) mechanism (Code available at
https://github.com/BUPT-GAMMA/CITGNN), which can learn invariant
representations for GNNs, thereby improving their generalization ability to
various and unknown test graphs with structure shift. The CIT mechanism
achieves this by combining different cluster information with the nodes while
preserving their cluster-independent information. By generating nodes across
different clusters, the mechanism significantly enhances the diversity of the
nodes and helps GNNs learn the invariant representations. We provide a
theoretical analysis of the CIT mechanism, showing that the impact of changing
clusters during structure shift can be mitigated after transfer. Additionally,
the proposed mechanism is a plug-in that can be easily used to improve existing
GNNs. We comprehensively evaluate our proposed method on three typical
structure shift scenarios, demonstrating its effectiveness in enhancing GNNs'
performance.
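The core idea — recombining each node's cluster-dependent information with that of a different cluster while keeping its cluster-independent part — can be illustrated with a minimal sketch. The mean/standard-deviation transfer below and the function name `cluster_transfer` are illustrative assumptions, not the paper's exact CIT formulation:

```python
import numpy as np

def cluster_transfer(Z, labels, rng=None):
    """Hedged sketch of cluster-information transfer:
    normalize each node embedding by its own cluster's statistics
    (the cluster-independent part), then re-scale and re-shift with
    a randomly chosen target cluster's statistics. This is an
    assumed formulation for illustration only."""
    rng = np.random.default_rng(rng)
    clusters = np.unique(labels)
    # Per-cluster mean and std of the node embeddings.
    mu = {c: Z[labels == c].mean(axis=0) for c in clusters}
    sd = {c: Z[labels == c].std(axis=0) + 1e-8 for c in clusters}
    Z_new = np.empty_like(Z)
    for i, c in enumerate(labels):
        t = rng.choice(clusters)  # sample a target cluster
        Z_new[i] = (Z[i] - mu[c]) / sd[c] * sd[t] + mu[t]
    return Z_new

# Toy example: 6 nodes, 4-dim embeddings, two clusters.
Z = np.random.default_rng(0).normal(size=(6, 4))
labels = np.array([0, 0, 0, 1, 1, 1])
Z_aug = cluster_transfer(Z, labels, rng=1)
```

Transferred embeddings of this kind can be mixed into training as augmented views, encouraging the GNN to rely on cluster-independent features rather than memorizing structure patterns specific to the training graph.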