Generation and Comprehension of Unambiguous Object Descriptions

2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)

Cited 1137 | Views 311
Abstract
We propose a method that can generate an unambiguous description (known as a referring expression) of a specific object or region in an image, and which can also comprehend or interpret such an expression to infer which object is being described. We show that our method outperforms previous methods that generate descriptions of objects without taking into account other potentially ambiguous objects in the scene. Our model is inspired by recent successes of deep learning methods for image captioning, but while image captioning is difficult to evaluate, our task allows for easy objective evaluation. We also present a new large-scale dataset for referring expressions, based on MSCOCO. We have released the dataset and a toolbox for visualization and evaluation, see https://github.com/mjhucla/Google_Refexp_toolbox.
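
As a rough illustration of the comprehension side of the task, the sketch below scores each candidate region by how likely a captioning-style model would be to emit the given referring expression for that region, then returns the best-scoring one. The names `comprehend`, `score_expression`, and `CandidateRegion` are hypothetical placeholders for illustration, not the authors' released code.

```python
from dataclasses import dataclass
from typing import Callable, List, Sequence

@dataclass
class CandidateRegion:
    box: tuple          # (x, y, w, h) object proposal in the image
    features: Sequence  # visual features extracted for this region

def comprehend(
    expression: str,
    regions: List[CandidateRegion],
    score_expression: Callable[[str, CandidateRegion], float],
) -> CandidateRegion:
    """Return the candidate region the expression most likely refers to.

    `score_expression(s, r)` is assumed to return log p(s | r), e.g. the
    sum of per-token log-probabilities from a recurrent caption generator
    conditioned on the region's visual features.
    """
    return max(regions, key=lambda r: score_expression(expression, r))
```

Framing comprehension as an argmax over candidate regions is what makes the task objectively evaluable: the prediction is simply correct or incorrect against the annotated ground-truth region, unlike free-form caption quality.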
Keywords
unambiguous object descriptions, deep learning, image captioning, MSCOCO, dataset visualization, dataset evaluation