Analyzing the Behavior of Visual Question Answering Models.

EMNLP (2016)

Citations: 351 | Views: 643
Abstract
Recently, a number of deep-learning based models have been proposed for the task of Visual Question Answering (VQA). The performance of most models is clustered around 60-70%. In this paper we propose systematic methods to analyze the behavior of these models as a first step towards recognizing their strengths and weaknesses, and identifying the most fruitful directions for progress. We analyze two models, one each from two major classes of VQA models -- with-attention and without-attention -- and show the similarities and differences in the behavior of these models. We also analyze the winning entry of the VQA Challenge 2016. Our behavior analysis reveals that despite recent progress, today's VQA models are myopic (tend to fail on sufficiently novel instances), often jump to conclusions (converge on a predicted answer after 'listening' to just half the question), and are stubborn (do not change their answers across images).