S2GAE: Self-Supervised Graph Autoencoders are Generalizable Learners with Graph Masking

WSDM (2023)

Cited 27 | Viewed 109
Abstract
Self-supervised learning (SSL) has been demonstrated to be effective in pre-training models that can be generalized to various downstream tasks. The Graph Autoencoder (GAE), an increasingly popular SSL approach on graphs, has been widely explored to learn node representations without ground-truth labels. However, recent studies show that existing GAE methods perform well only on link prediction tasks, while their performance on classification tasks is rather limited. This limitation casts doubt on the generalizability and adoption of GAE. In this paper, for the first time, we show that GAE can generalize well to both link prediction and classification scenarios, including node-level and graph-level tasks, by redesigning its critical building blocks from the graph-masking perspective. Our proposal, called Self-Supervised Graph Autoencoder (S2GAE), unleashes the power of GAEs with minimal yet nontrivial effort. Specifically, instead of reconstructing the whole input structure, we randomly mask a portion of edges and learn to reconstruct these missing edges with an effective masking strategy and an expressive decoder network. Moreover, we theoretically prove that S2GAE can be regarded as an edge-level contrastive learning framework, providing insights into why it generalizes well. Empirically, we conduct extensive experiments on 21 benchmark datasets across link prediction and node and graph classification tasks. The results validate the superiority of S2GAE against state-of-the-art generative and contrastive methods. This study demonstrates the potential of GAE as a universal representation learner on graphs. Our code is publicly available at https://github.com/qiaoyu-tan/S2GAE.
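
To make the recipe in the abstract concrete, below is a minimal PyTorch sketch of masked-edge autoencoding: a fraction of edges is hidden from the encoder, and a pairwise MLP decoder is trained to reconstruct them against randomly sampled non-edges. This is an illustrative toy, not the authors' implementation; the names mask_edges, Encoder, and EdgeDecoder and the dense-adjacency encoder are assumptions made for brevity. The actual S2GAE code is in the repository linked above.

# Toy sketch of the masked-edge autoencoding idea from the abstract
# (illustrative only; see https://github.com/qiaoyu-tan/S2GAE for S2GAE itself).

import torch
import torch.nn as nn
import torch.nn.functional as F


def mask_edges(edge_index: torch.Tensor, mask_ratio: float = 0.5):
    """Randomly split edges into a visible set and a masked (target) set."""
    num_edges = edge_index.size(1)
    perm = torch.randperm(num_edges)
    num_masked = int(mask_ratio * num_edges)
    masked = edge_index[:, perm[:num_masked]]   # reconstruction targets
    visible = edge_index[:, perm[num_masked:]]  # fed to the encoder
    return visible, masked


class Encoder(nn.Module):
    """One mean-aggregation message-passing layer as a stand-in GNN encoder."""

    def __init__(self, in_dim: int, hid_dim: int):
        super().__init__()
        self.lin = nn.Linear(in_dim, hid_dim)

    def forward(self, x, edge_index, num_nodes):
        # Dense normalized adjacency for simplicity (small graphs only).
        adj = torch.zeros(num_nodes, num_nodes, device=x.device)
        adj[edge_index[0], edge_index[1]] = 1.0
        adj = adj + torch.eye(num_nodes, device=x.device)  # self-loops
        adj = adj / adj.sum(dim=1, keepdim=True)           # row-normalize
        return F.relu(self.lin(adj @ x))


class EdgeDecoder(nn.Module):
    """MLP scoring node pairs: more expressive than an inner product."""

    def __init__(self, hid_dim: int):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(2 * hid_dim, hid_dim),
                                 nn.ReLU(), nn.Linear(hid_dim, 1))

    def forward(self, z, pairs):
        return self.mlp(torch.cat([z[pairs[0]], z[pairs[1]]], dim=-1)).squeeze(-1)


# One training step on a random toy graph: reconstruct masked edges,
# contrasted against randomly sampled node pairs as negatives.
num_nodes, feat_dim = 50, 16
x = torch.randn(num_nodes, feat_dim)
edge_index = torch.randint(0, num_nodes, (2, 200))

enc, dec = Encoder(feat_dim, 32), EdgeDecoder(32)
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)

visible, masked = mask_edges(edge_index, mask_ratio=0.5)
neg = torch.randint(0, num_nodes, masked.shape)  # negative samples

z = enc(x, visible, num_nodes)
logits = torch.cat([dec(z, masked), dec(z, neg)])
labels = torch.cat([torch.ones(masked.size(1)), torch.zeros(neg.size(1))])
loss = F.binary_cross_entropy_with_logits(logits, labels)
opt.zero_grad(); loss.backward(); opt.step()
print(f"reconstruction loss: {loss.item():.4f}")

The MLP decoder reflects the abstract's call for an expressive decoder network; replacing it with a plain inner product between the two node embeddings would recover the classic GAE reconstruction objective.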