Visual Hallucination: Definition, Quantification, and Prescriptive Remediations
arXiv (2024)
Abstract
The troubling rise of hallucination presents perhaps the most significant
impediment to the advancement of responsible AI. Considerable recent research
has focused on detecting and mitigating hallucination in Large Language Models
(LLMs), but hallucination is also quite prevalent in Vision-Language Models
(VLMs). In this paper, we offer a fine-grained discourse on profiling VLM
hallucination based on two tasks: i) image captioning, and ii) Visual Question
Answering (VQA). We delineate eight fine-grained orientations of visual
hallucination: i) Contextual Guessing, ii) Identity Incongruity, iii)
Geographical Erratum, iv) Visual Illusion, v) Gender Anomaly, vi) VLM as
Classifier, vii) Wrong Reading, and viii) Numeric Discrepancy. We curate
Visual HallucInation eLiciTation (VHILT), a publicly available dataset
comprising 2,000 samples generated by eight VLMs across the two tasks of
captioning and VQA, along with human annotations for the aforementioned
categories.