Hallucination is Inevitable: An Innate Limitation of Large Language Models
CoRR (2024)
Abstract
Hallucination has been widely recognized to be a significant drawback for
large language models (LLMs). Many works have attempted to reduce the extent of
hallucination, but these efforts have so far been largely empirical and cannot
answer the fundamental question of whether hallucination can be completely
eliminated. In this paper, we formalize the problem and show that it is
impossible to eliminate hallucination in LLMs. Specifically, we define a formal
world in which hallucination is characterized as inconsistency between a
computable LLM and a computable ground truth function. By employing results from learning
theory, we show that LLMs cannot learn all of the computable functions and will
therefore always hallucinate. Since the formal world is only a part of the much
more complicated real world, hallucination is also inevitable for real-world
LLMs. Furthermore, for real-world LLMs constrained by provable time complexity,
we describe the hallucination-prone tasks and empirically validate
our claims. Finally, using the formal world framework, we discuss the possible
mechanisms and efficacies of existing hallucination mitigators as well as the
practical implications for the safe deployment of LLMs.
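The core argument described in the abstract can be sketched in a single diagonalization step. The notation below (ground truth f, candidate LLMs h_i, input strings s_i) is illustrative and chosen here for exposition; the paper's own formalization may differ in detail. A minimal sketch, assuming hallucination means disagreement with the ground truth on some input:

  \[
  h \text{ hallucinates w.r.t. } f \iff \exists\, s :\ h(s) \neq f(s).
  \]
  Given a computable enumeration $h_0, h_1, h_2, \dots$ of candidate LLMs and an
  enumeration $s_0, s_1, s_2, \dots$ of all input strings, define a ground truth
  by diagonalization:
  \[
  f(s_i) = \text{some output chosen so that } f(s_i) \neq h_i(s_i).
  \]
  This $f$ is computable whenever the enumerations are, yet it disagrees with
  every $h_i$ on at least one input, so no LLM in the enumeration can avoid
  hallucinating with respect to it.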