VALOR-EVAL: Holistic Coverage and Faithfulness Evaluation of Large Vision-Language Models
arXiv (2024)
Abstract
Large Vision-Language Models (LVLMs) suffer from hallucination issues,
wherein the models generate plausible-sounding but factually incorrect outputs,
undermining their reliability. A comprehensive quantitative evaluation is
necessary to identify and understand the extent of hallucinations in these
models. However, existing benchmarks are often limited in scope, focusing
mainly on object hallucinations. Furthermore, current evaluation methods
struggle to effectively address the subtle semantic distinctions between model
outputs and reference data, as well as the balance between hallucination and
informativeness. To address these issues, we introduce a multi-dimensional
benchmark covering objects, attributes, and relations, with challenging images
selected based on associative biases. Moreover, we propose a large language
model (LLM)-based two-stage evaluation framework that generalizes the popular
CHAIR metric and incorporates both faithfulness and coverage into the
evaluation. Experiments on 10 established LVLMs demonstrate that our metric is
more comprehensive and correlates better with human judgments than existing
methods when evaluated on our challenging, human-annotated benchmark. Our work
also highlights the critical balance between the faithfulness and coverage of
model outputs, and encourages future work to address hallucinations in LVLMs
while keeping their outputs informative.
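The abstract does not spell out the metric's exact form, but since it generalizes CHAIR, a minimal sketch of a CHAIR-style faithfulness/coverage computation may clarify the trade-off it measures. The set-based matching and the function name below are illustrative assumptions, not the paper's implementation: the paper's LLM-based two-stage framework handles subtle semantic matching that plain set intersection cannot.

# Hedged sketch: CHAIR-style faithfulness and coverage over extracted mentions.
# Assumes mentions (objects/attributes/relations) have already been extracted
# from the model output and normalized against the reference annotations.

def faithfulness_and_coverage(mentioned: set[str], ground_truth: set[str]) -> tuple[float, float]:
    """Return (faithfulness, coverage) for one image.

    faithfulness: fraction of mentioned items supported by the reference
                  (analogous to 1 - CHAIR in the object-only formulation)
    coverage:     fraction of reference items the output actually mentions
    """
    if not mentioned:
        return 1.0, 0.0  # nothing claimed: trivially faithful, zero coverage
    supported = mentioned & ground_truth
    faithfulness = len(supported) / len(mentioned)
    coverage = len(supported) / len(ground_truth) if ground_truth else 1.0
    return faithfulness, coverage

# Example: the output hallucinates "dog" and omits "lamp".
f, c = faithfulness_and_coverage({"cat", "sofa", "dog"}, {"cat", "sofa", "lamp"})
print(f"faithfulness={f:.2f}, coverage={c:.2f}")  # faithfulness=0.67, coverage=0.67

The sketch makes the trade-off concrete: a model can trivially maximize faithfulness by saying almost nothing, which collapses coverage, which is why the paper evaluates both jointly.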