AMBER: An LLM-free Multi-dimensional Benchmark for MLLMs Hallucination Evaluation
CoRR (2023)
Abstract
Despite making significant progress in multi-modal tasks, current Multi-modal
Large Language Models (MLLMs) encounter the significant challenge of
hallucinations, which may lead to harmful consequences. Therefore, evaluating
MLLMs' hallucinations is becoming increasingly important in model improvement
and practical application deployment. Previous works are limited in high
evaluation costs (e.g., relying on humans or advanced LLMs) and insufficient
evaluation dimensions (e.g., types of tasks and hallucinations). In this paper,
we propose an LLM-free multi-dimensional benchmark AMBER, which can be used to
evaluate both generative task and discriminative task including existence,
attribute and relation hallucination. Based on AMBER, we design a low-cost and
efficient evaluation pipeline. Additionally, we conduct a comprehensive
evaluation and detailed analysis of mainstream MLLMs, including GPT-4V(ision),
and provide guidelines for mitigating hallucinations. The data and
code of AMBER are available at https://github.com/junyangwang0410/AMBER.
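An LLM-free pipeline of the kind the abstract describes can score responses by direct matching against image annotations rather than by querying a judge model. The sketch below is illustrative only: the function names, data shapes, and metrics (a CHAIR-style hallucination rate for generative responses, plain accuracy for discriminative yes/no questions) are assumptions for the sake of the example, not AMBER's actual implementation.

```python
# Minimal sketch of LLM-free hallucination scoring (illustrative; not the
# benchmark's real code). Ground-truth annotations are plain object sets,
# so no human or LLM judge is needed at evaluation time.

def hallucination_rate(mentioned_objects, annotated_objects):
    """Fraction of mentioned objects absent from the annotation (lower is better)."""
    if not mentioned_objects:
        return 0.0
    hallucinated = [o for o in mentioned_objects if o not in annotated_objects]
    return len(hallucinated) / len(mentioned_objects)

def discriminative_accuracy(answers, labels):
    """Accuracy of yes/no answers against annotated labels."""
    correct = sum(a == l for a, l in zip(answers, labels))
    return correct / len(labels)

# Example: the model mentions "dog" and "frisbee", but the image annotation
# contains only "dog" and "person" -> one of two mentions is hallucinated.
print(hallucination_rate(["dog", "frisbee"], {"dog", "person"}))  # 0.5
print(discriminative_accuracy(["yes", "no", "yes"], ["yes", "no", "no"]))
```

The key property, matching the paper's "LLM-free" claim, is that both scores reduce to set membership and string comparison over pre-built annotations, so evaluation cost stays constant regardless of how many models are benchmarked.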