Seeing Out of tHe bOx: End-to-End Pre-training for Vision-Language Representation Learning

2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2021)

Citations: 238 | Views: 145
Abstract
We study joint learning of a Convolutional Neural Network (CNN) and a Transformer for vision-language pre-training (VLPT), which aims to learn cross-modal alignments from millions of image-text pairs. State-of-the-art approaches extract salient image regions and align those regions with words step by step. Because region-based visual features usually represent only parts of an image, it is challenging for existing vision-language models to fully understand the semantics of the paired natural language. In this paper, we propose SOHO ("Seeing Out of tHe bOx"), which takes a whole image as input and learns vision-language representations in an end-to-end manner. SOHO does not require bounding-box annotations, which enables inference 10 times faster than region-based approaches. In particular, SOHO learns to extract comprehensive yet compact image features through a visual dictionary (VD) that facilitates cross-modal understanding. The VD is designed to represent consistent visual abstractions of similar semantics; it is updated on-the-fly and utilized in our proposed pre-training task, Masked Visual Modeling (MVM). We conduct experiments on four well-established vision-language tasks following standard VLPT settings. SOHO achieves absolute gains of 2.0% R@1 on the MSCOCO text-retrieval 5k test split, 1.5% accuracy on the NLVR2 test-P split, and 6.7% accuracy on the SNLI-VE test split.
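The abstract describes the visual dictionary as mapping image features to consistent visual abstractions and updating its entries on-the-fly. A minimal sketch of that idea, assuming a nearest-neighbor lookup and a moving-average update (the class name, entry count, and momentum value are illustrative assumptions, not the paper's exact formulation):

```python
import numpy as np

class VisualDictionary:
    """Sketch of a visual dictionary (VD): quantize features to their
    nearest dictionary embedding, then update matched embeddings
    on-the-fly with a moving average. Hyperparameters are assumptions."""

    def __init__(self, num_entries, dim, momentum=0.99, seed=0):
        rng = np.random.default_rng(seed)
        self.embeddings = rng.normal(size=(num_entries, dim))
        self.momentum = momentum

    def quantize(self, features):
        # features: (N, dim) -> index of nearest dictionary entry per feature
        dists = ((features[:, None, :] - self.embeddings[None, :, :]) ** 2).sum(-1)
        return dists.argmin(axis=1)

    def update(self, features, indices):
        # On-the-fly moving-average update of the matched entries
        for feat, idx in zip(features, indices):
            self.embeddings[idx] = (self.momentum * self.embeddings[idx]
                                    + (1 - self.momentum) * feat)

vd = VisualDictionary(num_entries=8, dim=4)
feats = np.random.default_rng(1).normal(size=(5, 4))
idx = vd.quantize(feats)   # one dictionary index per feature vector
vd.update(feats, idx)      # dictionary drifts toward observed features
```

In MVM as described, the quantized indices would serve as classification targets for masked image regions, analogous to masked tokens in language modeling.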
Keywords
vision-language representation learning, joint learning, convolutional neural network, vision-language pre-training, image-text pairs, region-based visual features, paired natural languages, SOHO, visual dictionary, cross-modal understanding, standard VLPT settings, MSCOCO text retrieval 5k test split, seeing out of the box, end-to-end pre-training, CNN, cross-modal alignment learning, salient image region extraction, compact image feature extraction, consistent visual abstraction representation, MVM, NLVR2 test-P split, SNLI-VE test split, masked visual modeling pre-training task