Ripple Walk Training: A Subgraph-based Training Framework for Large and Deep Graph Neural Network
2021 International Joint Conference on Neural Networks (IJCNN)
Abstract
Graph neural networks (GNNs) have achieved outstanding performance in learning graph-structured data across various tasks. However, many current GNNs suffer from three common problems when facing large graphs or using deeper structures: neighbor explosion, node dependence, and oversmoothing. These problems stem from the data structure of the graph itself or from the design of the multi-layer GNN framework, and they can lead to low training efficiency and high space complexity. To address them, in this paper we propose a general subgraph-based training framework, namely Ripple Walk Training (RWT), for deep and large-scale graph neural networks. RWT samples subgraphs from the full graph to constitute a mini-batch, and the full GNN is updated based on the mini-batch gradient. We theoretically analyze what makes a subgraph high-quality for training GNNs. A novel sampling method, Ripple Walk Sampler, samples these high-quality subgraphs to constitute the mini-batch, considering both the randomness and the connectivity of graph-structured data. Extensive experiments on graphs of different sizes demonstrate the effectiveness and efficiency of RWT in training various GNNs (GCN & GAT). Our code is released at https://github.com/anonymous2review/RippleWalk.
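The abstract describes the sampler only at a high level, so below is a minimal sketch of the ripple-style expansion it outlines: start from a random seed node and repeatedly absorb a fraction of the current subgraph's neighbor frontier until a target size is reached. The function name `ripple_walk_sample`, the `expand_ratio` parameter, and the adjacency-list input are illustrative assumptions, not the authors' actual API.

```python
import random

def ripple_walk_sample(adj, target_size, expand_ratio=0.5, seed=None):
    """Sketch of ripple-style subgraph sampling (assumed, not the authors' code).

    adj: dict mapping each node to a list of its neighbors (full graph).
    target_size: desired number of nodes in the sampled subgraph.
    expand_ratio: fraction of the neighbor frontier absorbed per step;
                  random picks provide randomness, frontier growth preserves
                  connectivity.
    """
    rng = random.Random(seed)
    start = rng.choice(list(adj))      # random seed node
    nodes = {start}
    while len(nodes) < target_size:
        # Frontier: neighbors of the current subgraph not yet included.
        frontier = {v for u in nodes for v in adj[u]} - nodes
        if not frontier:               # exhausted a connected component
            break
        k = min(max(1, int(expand_ratio * len(frontier))),
                target_size - len(nodes), len(frontier))
        nodes.update(rng.sample(sorted(frontier), k))
    return nodes
```

Under this reading, one RWT mini-batch is simply a set of such subgraphs, e.g. `[ripple_walk_sample(adj, 256) for _ in range(M)]`, and the gradient averaged over the subgraphs updates the full GNN's parameters.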
Keywords
deep graph neural networks, graph neural networks, graph-structured data, large-scale graphs, multi-layer GNN framework, training efficiency, space complexity, subgraph-based training framework, Ripple Walk Training (RWT), Ripple Walk Sampler, mini-batch gradient, high-quality subgraphs