Detecting and Mitigating Hallucination in Large Vision Language Models via Fine-Grained AI Feedback
CoRR (2024)
Abstract
The rapidly developing Large Vision Language Models (LVLMs) have shown notable capabilities on a range of multi-modal tasks, but they still suffer from the hallucination phenomenon, where the generated text does not align with the given context, which significantly restricts their use. Most previous work detects and mitigates hallucination at the coarse-grained level or requires expensive annotation (e.g., labeling by proprietary models or human experts). To address these issues, we propose detecting and mitigating hallucinations in LVLMs via fine-grained AI feedback. The basic idea is to generate a small-scale, sentence-level hallucination annotation dataset with proprietary models, and use it to train a hallucination detection model that performs sentence-level detection covering the primary hallucination types (i.e., object, attribute, and relationship). We then propose a detect-then-rewrite pipeline to automatically construct a preference dataset for training a hallucination-mitigating model. Furthermore, we propose differentiating the severity of hallucinations and introduce Hallucination Severity-Aware Direct Preference Optimization (HSA-DPO), which mitigates hallucination in LVLMs by incorporating the severity of hallucinations into preference learning. Extensive experiments demonstrate the effectiveness of our method.
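As a rough illustration of how hallucination severity could enter preference learning, the sketch below shows a severity-weighted variant of the standard DPO loss. The weighting scheme, the `severity` score, and all function and variable names are assumptions made for illustration; they are not taken from the paper, which defines HSA-DPO in its own way.

```python
import torch
import torch.nn.functional as F

def severity_weighted_dpo_loss(policy_chosen_logps, policy_rejected_logps,
                               ref_chosen_logps, ref_rejected_logps,
                               severity, beta=0.1):
    """Sketch of a severity-weighted DPO objective (illustrative only).

    policy_*_logps / ref_*_logps: summed log-probabilities of the chosen
    (rewritten, hallucination-free) and rejected (hallucinated) responses
    under the policy model and the frozen reference model.
    severity: hypothetical per-pair hallucination severity score in [0, 1],
    used here to weight each preference pair.
    """
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Standard DPO term: push up the margin between chosen and rejected.
    per_pair_loss = -F.logsigmoid(chosen_rewards - rejected_rewards)
    # Severity-aware weighting: pairs containing more severe hallucinations
    # contribute more strongly to the update (one plausible instantiation).
    return (severity * per_pair_loss).mean()
```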