Enhancing scene-text visual question answering with relational reasoning, attention and dynamic vocabulary integration

COMPUTATIONAL INTELLIGENCE (2024)

Abstract
Visual question answering (VQA) is a challenging task in computer vision. Recently, interest has grown in text-based VQA, which highlights the role of textual information in understanding images. Effectively exploiting the text that appears in an image is therefore crucial for this task. However, existing approaches often overlook contextual information and fail to exploit the relationships between scene-text tokens and image objects; they simply feed the scene-text tokens extracted from the image into the VQA model without considering these factors. In this paper, the proposed model first analyzes the image to extract text and detect scene objects. It then interprets the question and mines relationships among the question, the OCRed text, and the scene objects, generating an answer through relational reasoning with semantic and positional attention. Our decoder, trained with an attention map loss, predicts complex answers and handles dynamic vocabularies, reducing the decoding space; it outperforms a softmax-based cross-entropy loss in both accuracy and efficiency by accommodating varying vocabulary sizes. We evaluated our model on the TextVQA dataset, achieving 53.91% accuracy on the validation set and 53.98% on the test set. On the ST-VQA dataset, the model obtained ANLS scores of 0.699 on the validation set and 0.692 on the test set.
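The abstract describes a decoder whose output space combines a fixed answer vocabulary with the OCR tokens detected in each image, so the decoding space varies per image. The paper's implementation is not reproduced here; below is a minimal sketch, in PyTorch, of how such a dynamic-vocabulary scoring head could be structured. All class, parameter, and variable names are hypothetical illustrations, not the authors' code, and the attention-map loss itself is omitted.

import torch
import torch.nn as nn

class DynamicVocabDecoder(nn.Module):
    """Sketch: score a fixed answer vocabulary plus per-image OCR tokens."""

    def __init__(self, hidden_dim, fixed_vocab_size):
        super().__init__()
        # classifier over the fixed (frequent-answer) vocabulary
        self.fixed_head = nn.Linear(hidden_dim, fixed_vocab_size)
        # projection used to point to OCR-token features of the current image
        self.ocr_proj = nn.Linear(hidden_dim, hidden_dim)

    def forward(self, dec_state, ocr_feats):
        # dec_state: (batch, hidden_dim) decoder state at the current step
        # ocr_feats: (batch, num_ocr, hidden_dim) features of OCRed tokens
        fixed_scores = self.fixed_head(dec_state)            # (batch, fixed_vocab_size)
        ocr_scores = torch.bmm(                               # (batch, num_ocr)
            ocr_feats, self.ocr_proj(dec_state).unsqueeze(2)
        ).squeeze(2)
        # concatenated scores form the per-image dynamic vocabulary
        return torch.cat([fixed_scores, ocr_scores], dim=1)

# usage (shapes only): a batch of 2 images, 50 OCR tokens each
decoder = DynamicVocabDecoder(hidden_dim=768, fixed_vocab_size=5000)
state = torch.randn(2, 768)
ocr = torch.randn(2, 50, 768)
print(decoder(state, ocr).shape)  # torch.Size([2, 5050])

Because each candidate (fixed word or OCR token) receives its own score, such a head can be trained with an element-wise loss over the candidate set rather than a softmax over one fixed vocabulary, which is consistent with the abstract's claim that the decoding space shrinks or grows with the OCR tokens present in the image.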
Keywords
attention mechanism, computer vision, relational reasoning, semantic