Cross-scale contrastive triplet networks for graph representation learning

Pattern Recognition (2024)

Abstract
Graph representation learning aims to learn low-dimensional representations of graphs and plays a vital role in real-world applications. Because it requires no additional labeled data, contrastive-learning-based graph representation learning (graph contrastive learning) has attracted considerable attention. One of the most notable recent advances in graph contrastive learning is Deep Graph Infomax (DGI), which maximizes the Mutual Information (MI) between node and graph representations. However, DGI considers only contextual node information and ignores intrinsic node information (i.e., the similarity between node representations in different views). In this paper, we propose a novel Cross-scale Contrastive Triplet Networks (CCTN) framework that captures both contextual and intrinsic node information for graph representation learning. Specifically, to obtain contextual node information, we use an infomax contrastive network to maximize the MI between node and graph representations. To acquire intrinsic node information, we present a Siamese contrastive network that maximizes the similarity between node representations in different augmented views. The two contrastive networks learn jointly through a shared graph convolutional network, forming our cross-scale contrastive triplet networks. Finally, we evaluate CCTN on six real-world datasets. Extensive experimental results demonstrate that CCTN achieves state-of-the-art performance on node classification and clustering tasks.
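The two branches described above can be illustrated with a minimal NumPy sketch: a shared one-layer graph convolution stand-in encodes two augmented views; a DGI-style infomax loss contrasts node embeddings against a graph summary (with shuffled-feature negatives), and a Siamese loss pulls matched node embeddings of the two views together. All shapes, weights, augmentations, and the single-layer encoder are illustrative assumptions, not the paper's actual CCTN implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy setup: n nodes, d input features, h embedding dims (all hypothetical).
n, d, h = 8, 5, 4
X = rng.normal(size=(n, d))          # node features
A = np.eye(n)                        # placeholder adjacency (self-loops only)
W = rng.normal(size=(d, h)) * 0.1    # shared encoder weight

def encode(features):
    # One-layer graph-convolution stand-in: propagate, project, nonlinearity.
    return np.tanh(A @ features @ W)

# Two augmented views (illustrative augmentations: masking and noise).
view1 = X * (rng.random(X.shape) > 0.2)
view2 = X + 0.1 * rng.normal(size=X.shape)

H1, H2 = encode(view1), encode(view2)
eps = 1e-9

# --- Infomax branch (DGI-style): contrast nodes with the graph summary. ---
s = sigmoid(H1.mean(axis=0))                 # graph-level summary vector
B = rng.normal(size=(h, h)) * 0.1            # bilinear discriminator weight
H_neg = encode(view1[rng.permutation(n)])    # negatives from shuffled nodes
pos = sigmoid(H1 @ B @ s)
neg = sigmoid(H_neg @ B @ s)
infomax_loss = -(np.log(pos + eps).mean() + np.log(1 - neg + eps).mean())

# --- Siamese branch: maximize cross-view similarity of matched nodes. ---
def cosine(a, b):
    norms = np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1) + eps
    return (a * b).sum(axis=1) / norms

siamese_loss = (1.0 - cosine(H1, H2)).mean()

# Joint objective optimized through the shared encoder.
total_loss = infomax_loss + siamese_loss
print(float(infomax_loss), float(siamese_loss))
```

In a real implementation both losses would be backpropagated through the shared graph convolutional network, so that the contextual (node-graph) and intrinsic (node-node) signals jointly shape the same node embeddings.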
Keywords
Graph contrastive learning, Contextual contrastive network, Intrinsic contrastive network