Momentum Contrast for Unsupervised Visual Representation Learning

CVPR 2020

Abstract
We present Momentum Contrast (MoCo) for unsupervised visual representation learning. From a perspective on contrastive learning as dictionary look-up, we build a dynamic dictionary with a queue and a moving-averaged encoder. This enables building a large and consistent dictionary on-the-fly that facilitates contrastive unsupervised learning. MoCo provides competitive results under the common linear protocol on ImageNet classification. More importantly, the representations learned by MoCo transfer well to downstream tasks. MoCo can outperform its supervised pre-training counterpart in 7 detection/segmentation tasks on PASCAL VOC, COCO, and other datasets, sometimes surpassing it by large margins. This suggests that the gap between unsupervised and supervised representation learning has been largely closed in many vision tasks.
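The abstract compresses the mechanism into a single sentence, so a small sketch may help. Below is a minimal PyTorch-style sketch of MoCo's core training step: a query encoder trained by backpropagation, a key encoder updated only as a moving average of the query encoder, and a queue of past keys serving as the dictionary of negatives. The names encoder_q, encoder_k, queue, m (momentum), and T (temperature) follow the paper's pseudocode conventions, but the tiny linear encoders, batch, and queue size here are illustrative placeholders, not the paper's ResNet-50 setup.

```python
# Minimal sketch of one MoCo training step, assuming PyTorch.
# Encoders are placeholder linear layers; in the paper they are ResNets.
import torch
import torch.nn as nn
import torch.nn.functional as F

dim, K, m, T = 128, 4096, 0.999, 0.07    # feature dim, queue size, momentum, temperature

encoder_q = nn.Linear(512, dim)           # query encoder, trained by backprop
encoder_k = nn.Linear(512, dim)           # key encoder, updated by momentum only
encoder_k.load_state_dict(encoder_q.state_dict())
for p in encoder_k.parameters():
    p.requires_grad = False

queue = F.normalize(torch.randn(dim, K), dim=0)  # dictionary of negative keys (columns)

def train_step(x_q, x_k):
    global queue
    q = F.normalize(encoder_q(x_q), dim=1)               # queries: N x dim
    with torch.no_grad():
        # momentum update: key encoder is a moving average of the query encoder
        for pk, pq in zip(encoder_k.parameters(), encoder_q.parameters()):
            pk.data = m * pk.data + (1.0 - m) * pq.data
        k = F.normalize(encoder_k(x_k), dim=1)           # keys: N x dim, no gradient

    l_pos = torch.einsum("nc,nc->n", q, k).unsqueeze(-1)  # positive logits: N x 1
    l_neg = torch.einsum("nc,ck->nk", q, queue)           # negative logits: N x K
    logits = torch.cat([l_pos, l_neg], dim=1) / T
    labels = torch.zeros(logits.size(0), dtype=torch.long)  # positive key is index 0
    loss = F.cross_entropy(logits, labels)                # InfoNCE as (K+1)-way classification

    # enqueue the new keys, dequeue the oldest (simplified ring update)
    queue = torch.cat([queue[:, k.size(0):], k.t().detach()], dim=1)
    return loss

# usage: x_q and x_k would be two augmented views of the same image batch
x = torch.randn(8, 512)
loss = train_step(x, x + 0.01 * torch.randn_like(x))
loss.backward()
```

The queue decouples dictionary size from batch size (it can hold many more negatives than one batch), while the slow momentum update keeps the keys in the queue encoded consistently enough to compare against current queries.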
Keywords
unsupervised visual representation learning, Momentum Contrast, moving-averaged encoder, MoCo transfer, dictionary on-the-fly, PASCAL VOC, COCO, supervised representation learning, ImageNet classification, dictionary look-up, linear protocol