Aligned Image-Word Representations Improve Inductive Transfer Across Vision-Language Tasks

2017 IEEE International Conference on Computer Vision (ICCV)

Citations: 22 | Views: 132
Abstract
An important goal of computer vision is to build systems that learn visual representations over time that can be applied to many tasks. In this paper, we investigate a vision-language embedding as a core representation and show that it leads to better cross-task transfer than standard multitask learning. In particular, the task of visual recognition is aligned to the task of visual question answering by forcing each to use the same word-region embeddings. We show that this leads to greater inductive transfer from recognition to VQA than standard multitask learning. Visual recognition also improves, especially for categories that have relatively few recognition training labels but appear often in the VQA setting. Thus, our paper takes a small step towards creating more general vision systems by showing the benefit of interpretable, flexible, and trainable core representations.
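
To make the shared-embedding idea concrete, below is a minimal PyTorch sketch, not the authors' code: the module names, dimensions, and max-pooling choice are illustrative assumptions. The point it demonstrates is that recognition and VQA both score image-region features against a single word embedding table, so the loss from either task updates the same aligned word-region space.

```python
import torch
import torch.nn as nn

class SharedWordRegionEmbedding(nn.Module):
    """Sketch of a shared word-region embedding: both a recognition head
    and a VQA head score image regions against the same word embedding
    table, so gradients from either task shape one aligned space."""

    def __init__(self, vocab_size, region_dim, embed_dim):
        super().__init__()
        self.word_embed = nn.Embedding(vocab_size, embed_dim)  # shared across both tasks
        self.region_proj = nn.Linear(region_dim, embed_dim)    # maps region features into word space

    def score(self, region_feats, word_ids):
        # region_feats: (batch, num_regions, region_dim)
        # word_ids:     (num_words,) indices into the shared vocabulary
        regions = self.region_proj(region_feats)               # (B, R, D)
        words = self.word_embed(word_ids)                      # (W, D)
        # dot-product alignment score between every region and every word
        return torch.einsum('brd,wd->brw', regions, words)     # (B, R, W)

# Recognition head (sketch): max-pool region-word scores into category logits.
# A VQA head would consume the same region-word scores for answer prediction.
model = SharedWordRegionEmbedding(vocab_size=10000, region_dim=2048, embed_dim=300)
feats = torch.randn(2, 36, 2048)            # e.g. 36 region features per image
category_words = torch.arange(80)           # hypothetical category vocabulary ids
logits = model.score(feats, category_words).max(dim=1).values  # (2, 80) logits
```

Because both tasks backpropagate into the same word_embed and region_proj parameters, words that are rare among recognition labels but frequent in VQA questions still shape the shared space, which is the transfer effect the abstract describes.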
Keywords
inductive transfer, cross-task transfer, core representation, vision-language embedding, visual representations, computer vision, vision-language tasks, aligned image-word representations, trainable core representations, interpretable core representations, general vision systems, standard multitask learning, word-region embeddings, visual question answering, visual recognition