Adaptively Attending to Visual Attributes and Linguistic Knowledge for Captioning

MM '17: ACM Multimedia Conference, Mountain View, California, USA, October 2017

Cited by 41 | Views 86
Abstract
Visual content description has attracted broad research attention in the multimedia community because it deeply uncovers the intrinsic semantic facets of visual data. Most existing approaches formulate visual captioning as a machine translation task (i.e., from vision to language) via a top-down paradigm with global attention, which fails to distinguish visual from non-visual words during generation. In this work, we propose a novel adaptive attention strategy for visual captioning that can selectively attend to salient visual content based on linguistic knowledge. Specifically, we design a key control unit, termed the visual gate, to adaptively decide "when" and "what" the language generator attends to during word generation. We map all the preceding outputs of the language generator into a latent space to derive a representation of sentence structure, which assists the visual gate in choosing appropriate attention timing. Meanwhile, we employ a bottom-up workflow to learn a pool of semantic attributes that serve as propositional attention resources. We evaluate the proposed approach on two commonly used benchmarks, MSCOCO and MSVD. The experimental results demonstrate the superiority of our approach over several state-of-the-art methods.
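To make the visual-gate idea concrete, below is a minimal PyTorch sketch of one way such a gate can be realized: a learned "sentinel" slot, derived from the preceding language outputs, competes with the pool of attribute vectors inside a single softmax, so the attention mass assigned to the sentinel acts as the gate deciding "when" not to look at visual content. This is an illustrative sketch in the style of sentinel-based adaptive attention, not the paper's released code; the class name `VisualGateAttention`, the shared dimension `dim`, and all parameter names are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VisualGateAttention(nn.Module):
    """Hypothetical sketch of adaptive attention with a visual gate.

    A sentinel vector (summarizing preceding language-generator outputs)
    is appended to the pool of K semantic-attribute vectors; the softmax
    weight on that extra slot plays the role of the visual gate.
    """

    def __init__(self, dim):
        super().__init__()
        self.w_a = nn.Linear(dim, dim)   # projects attribute vectors
        self.w_h = nn.Linear(dim, dim)   # projects current decoder state
        self.w_s = nn.Linear(dim, dim)   # projects the language sentinel
        self.score = nn.Linear(dim, 1)   # additive attention score

    def forward(self, attrs, hidden, sentinel):
        # attrs:    (B, K, dim) pool of semantic attribute vectors
        # hidden:   (B, dim)    current decoder hidden state
        # sentinel: (B, dim)    latent summary of preceding outputs
        h = self.w_h(hidden).unsqueeze(1)                        # (B, 1, dim)
        keys = torch.cat([self.w_a(attrs),
                          self.w_s(sentinel).unsqueeze(1)], 1)   # (B, K+1, dim)
        logits = self.score(torch.tanh(keys + h)).squeeze(-1)    # (B, K+1)
        alpha = F.softmax(logits, dim=-1)                        # joint attention

        values = torch.cat([attrs, sentinel.unsqueeze(1)], 1)    # (B, K+1, dim)
        context = (alpha.unsqueeze(-1) * values).sum(1)          # (B, dim)
        beta = alpha[:, -1]   # visual gate: mass on the sentinel ("don't look")
        return context, alpha[:, :-1], beta

# Usage with toy shapes (batch of 2, 16 attributes, 512-d features):
B, K, D = 2, 16, 512
att = VisualGateAttention(D)
ctx, alpha, beta = att(torch.randn(B, K, D), torch.randn(B, D), torch.randn(B, D))
```

When `beta` is near 1, the context vector is dominated by the linguistic sentinel, so the next word is generated from sentence structure rather than visual evidence; when it is near 0, the attribute pool drives generation.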
Keywords
captioning, adaptive attention, attribute, linguistic knowledge