Select Your Own Counterparts: Self-Supervised Graph Contrastive Learning With Positive Sampling

IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS (2024)

Abstract
Contrastive learning (CL) has emerged as a powerful approach to self-supervised learning, but it suffers from sampling bias, which degrades performance. The mainstream remedies, hard negative mining (HNM) and supervised CL (SCL), do not translate effectively to graph CL (GCL). To address this, we propose graph positive sampling (GPS) together with three contrastive objectives. GPS is a novel learning paradigm that exploits the inherent properties of graphs to improve GCL models: it combines four complementary similarity measurements, namely node centrality, topological distance, neighborhood overlap, and semantic distance, to select positive counterparts for each node. Notably, GPS operates without true labels and can be applied as a preprocessing step. The three objectives fuse the positive samples and enhance representative selection in the semantic space. We release three node-level models equipped with GPS and conduct extensive experiments on public datasets. The results demonstrate the superiority of GPS over state-of-the-art (SOTA) baselines and debiasing methods, and further show that GPS is versatile, adaptive, and flexible.
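As a rough illustration of the sampling scheme the abstract describes, the sketch below ranks positive counterparts per node by fusing the four similarity views it names. This is a minimal sketch under stated assumptions, not the paper's formulation: the function name gps_positives, the equal fusion weights, the specific normalizations, and the use of NetworkX with precomputed node embeddings are all hypothetical choices for illustration.

```python
# Illustrative sketch of graph positive sampling (GPS): fuse four
# per-node similarity views to select top-k positive counterparts.
# Weights, normalizations, and the fusion rule are assumptions.
import networkx as nx
import numpy as np

def gps_positives(G: nx.Graph, embeddings: np.ndarray, k: int = 5) -> np.ndarray:
    """Return, for each node, the indices of its top-k positive counterparts."""
    nodes = list(G.nodes())
    n = len(nodes)

    # 1) Node centrality: nodes with similar degree centrality score higher.
    cent_dict = nx.degree_centrality(G)
    cent = np.array([cent_dict[v] for v in nodes])
    s_cent = 1.0 - np.abs(cent[:, None] - cent[None, :])

    # 2) Topological distance: shorter shortest-path distance scores higher.
    dist = dict(nx.all_pairs_shortest_path_length(G))
    s_topo = np.zeros((n, n))
    for i, u in enumerate(nodes):
        for j, v in enumerate(nodes):
            s_topo[i, j] = 1.0 / (1.0 + dist[u].get(v, n))  # n if disconnected

    # 3) Neighborhood overlap: Jaccard similarity of neighbor sets.
    nbrs = [set(G.neighbors(v)) for v in nodes]
    s_nbr = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            union = nbrs[i] | nbrs[j]
            s_nbr[i, j] = len(nbrs[i] & nbrs[j]) / len(union) if union else 0.0

    # 4) Semantic distance: cosine similarity of node embeddings/features.
    z = embeddings / (np.linalg.norm(embeddings, axis=1, keepdims=True) + 1e-12)
    s_sem = z @ z.T

    # Fuse the four views (equal weights, an assumption) and mask self-pairs,
    # then take the k highest-scoring counterparts per node.
    score = (s_cent + s_topo + s_nbr + s_sem) / 4.0
    np.fill_diagonal(score, -np.inf)
    return np.argsort(-score, axis=1)[:, :k]
```

In this reading, the selected counterparts would serve as label-free positives for a contrastive objective, which is consistent with the abstract's claim that GPS needs no true labels and can run as preprocessing.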
Keywords
Graph contrastive learning (GCL), positive sampling, sampling bias, self-supervised learning