Heterogeneous Contrastive Learning: Encoding Spatial Information for Compact Visual Representations

IEEE Transactions on Multimedia (2022)

Abstract
Unsupervised pretraining is of great significance for visual representation. In particular, contrastive learning has achieved great success recently, but existing approaches have mostly ignored spatial information, which is often crucial for visual representation. Strong semantic embeddings have an inherent advantage for classification, but dense prediction tasks require more spatial and low-level representations. This paper presents heterogeneous contrastive learning (HCL), an effective approach that adds spatial information at the encoding stage to alleviate the learning inconsistency between the contrastive objective and strong data augmentation operations. We demonstrate the effectiveness of HCL by showing that (i) it achieves higher accuracy in instance discrimination, (ii) it surpasses existing pre-training methods on a series of downstream tasks, and (iii) it halves the pre-training cost, saving almost 800 GPU-hours. More importantly, we show that our approach achieves higher efficiency in visual representations, and thus delivers a key message to inspire future research on self-supervised visual representation learning.
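
To make the idea concrete, below is a minimal PyTorch sketch of how a contrastive objective can be extended with a spatial branch. This is not the authors' implementation: the two-head design, the per-location InfoNCE term, and all names and hyperparameters (info_nce, heterogeneous_loss, alpha, temperature) are illustrative assumptions, not details taken from the paper.

    # Illustrative sketch only; the actual HCL architecture and loss may differ.
    import torch
    import torch.nn.functional as F

    def info_nce(q, k, temperature=0.2):
        """Standard InfoNCE over L2-normalized embeddings (batch as negatives)."""
        q = F.normalize(q, dim=-1)
        k = F.normalize(k, dim=-1)
        logits = q @ k.t() / temperature          # (N, N) similarity matrix
        labels = torch.arange(q.size(0), device=q.device)
        return F.cross_entropy(logits, labels)

    def heterogeneous_loss(global_q, global_k, dense_q, dense_k, alpha=0.5):
        """Combine a global (semantic) contrastive term with a spatial (dense) one.

        global_q, global_k: (B, D)       pooled embeddings of two augmented views
        dense_q,  dense_k:  (B, D, H, W) per-location feature maps of the views
        alpha: assumed weighting between the two terms (hypothetical).
        """
        # Global term: instance discrimination on pooled features.
        loss_global = info_nce(global_q, global_k)

        # Spatial term: contrast corresponding locations of the two feature maps.
        # For a sketch we use every location; real code would likely subsample.
        B, D, H, W = dense_q.shape
        dq = dense_q.flatten(2).transpose(1, 2).reshape(B * H * W, D)
        dk = dense_k.flatten(2).transpose(1, 2).reshape(B * H * W, D)
        loss_spatial = info_nce(dq, dk)

        return (1 - alpha) * loss_global + alpha * loss_spatial

The point of the sketch is the structure, not the specifics: a purely global term discards where features came from, while the added spatial term keeps per-location information in the encoding, which is what the abstract argues dense prediction tasks need.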
Keywords
Feature extraction, Semantics, Head, Contrastive learning, pre-training, spatial information