Skip \n: A Simple Method to Reduce Hallucination in Large Vision-Language Models
CoRR (2024)
Abstract
Recent advances in large vision-language models (LVLMs) have demonstrated
impressive capabilities in understanding visual information through human
language. Despite these advances, LVLMs still suffer from multimodal
hallucination, such as generating text descriptions of objects that are not
present in the visual input. However, the underlying causes of multimodal
hallucination remain poorly explored. In this paper, we propose
a new perspective, suggesting that the inherent biases in LVLMs might be a key
factor in hallucinations. Specifically, we systematically identify a semantic
shift bias related to paragraph breaks ('\n\n'): the content before and
after '\n\n' in the training data frequently exhibits significant semantic
changes. This pattern leads the model to infer that the content following
'\n\n' should differ markedly from the preceding, less hallucinatory content,
thereby increasing the probability of hallucinatory descriptions after the
'\n\n'. We have validated this hypothesis on
multiple publicly available LVLMs. Moreover, we find that deliberately inserting
'\n\n' into a generated description can induce more hallucinations. A simple
method is proposed to effectively mitigate hallucination in LVLMs by skipping
the output of '\n\n'.
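
To make the "skip '\n\n'" idea concrete, below is a minimal decoding-time sketch, assuming a Hugging Face transformers tokenizer and a standard generate() loop. It is not the authors' released implementation; the class name, the banned-token scan, and the usage snippet are illustrative assumptions.

```python
from transformers import LogitsProcessor, LogitsProcessorList

class SkipParagraphBreak(LogitsProcessor):
    """Suppress any vocabulary token whose decoded text contains a
    double newline, so the model cannot start a new paragraph."""

    def __init__(self, tokenizer):
        # Collect every token id that would emit a paragraph break.
        # Simplification: two consecutive single-'\n' tokens could still
        # form a break; a fuller version would also track the last token.
        self.banned_ids = [
            tok_id for tok_id in range(tokenizer.vocab_size)
            if "\n\n" in tokenizer.decode([tok_id])
        ]

    def __call__(self, input_ids, scores):
        # Setting the logits to -inf means these tokens can never be sampled.
        scores[:, self.banned_ids] = float("-inf")
        return scores

# Hypothetical usage with an LVLM's generate() call (model loading omitted):
# outputs = model.generate(
#     **inputs,
#     logits_processor=LogitsProcessorList([SkipParagraphBreak(tokenizer)]),
# )
```

A logits processor is a natural fit here because it intervenes only at decoding time, requiring no retraining of the LVLM.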