Graph representation learning via simple jumping knowledge networks

Applied Intelligence (2022)

Abstract
Recent graph neural networks for graph representation learning depend on a neighborhood aggregation process. Several works focus on simplifying the neighborhood aggregation process and the model structure. However, as model depth increases, these simplified models encounter oversmoothing, which degrades performance. Other works leverage sophisticated learnable neighborhood aggregation algorithms to learn more accurate graph representations, but their high computational cost limits model depth and the ability to handle large graphs. In this paper, we propose simple jumping knowledge networks (SJK-Nets), which first leverage a simple no-learning method to complete the neighborhood aggregation process, and then utilize a jumping architecture to combine the different neighborhood ranges of each node, achieving a better structure-aware representation. Under this design, we first use a simple neighborhood aggregation algorithm to reduce the computational complexity of the model. We then aggregate the features of high-order neighboring nodes to learn more informative node feature representations. Finally, by combining these methods, the oversmoothing problem of deep graph neural networks is alleviated. Our experimental evaluation demonstrates that SJK-Nets achieve or match state-of-the-art results on node classification, text classification, and community prediction tasks. Moreover, since SJK-Nets' neighborhood aggregation is a no-learning process, SJK-Nets extend successfully to node clustering tasks.
Keywords
Graph neural networks, Graph representation learning, Neighborhood aggregation, Simple jumping knowledge networks, No-learning
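The two-stage pipeline the abstract describes (a no-learning propagation step followed by a jumping combination of neighborhood ranges) can be sketched as follows. This is a minimal NumPy sketch, not the paper's exact method: the symmetrically normalized adjacency propagation (SGC-style) and the concatenation-style jump are assumptions based on the abstract.

```python
import numpy as np

def normalized_adjacency(A):
    # Symmetric normalization with self-loops: D^{-1/2} (A + I) D^{-1/2}.
    # This is a common no-learning propagation operator (assumed here).
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def sjk_features(A, X, K):
    # Stage 1 (no-learning aggregation): precompute S^k X for k = 0..K.
    # Stage 2 (jumping combination): concatenate all hops so every node
    # keeps access to each neighborhood range, which is the mechanism
    # the abstract credits with mitigating oversmoothing in deep models.
    S = normalized_adjacency(A)
    hops = [X]
    for _ in range(K):
        hops.append(S @ hops[-1])
    return np.concatenate(hops, axis=1)
```

Because the propagation involves no trainable parameters, the concatenated features can be precomputed once and fed to any downstream learner (a linear classifier, or a clustering algorithm as in the node clustering experiments).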