
Navigating Uncertainty: Optimizing API Dependency for Hallucination Reduction in Closed-Book QA

Lecture Notes in Computer Science (2024)

Abstract
While Large Language Models (LLMs) are able to accumulate and restore knowledge, they are still prone to hallucination. Especially when faced with factual questions, LLMs cannot rely only on knowledge stored in their parameters to guarantee truthful and correct answers. Augmenting these models with the ability to search external information sources, such as the web, is a promising approach to ground their knowledge in retrieved information. However, searching a large collection of documents introduces additional computational and time costs. An optimal behavior would be to query external resources only when the LLM is not confident about its answer. In this paper, we propose a new LLM able to self-estimate whether it can answer directly or needs to request an external tool. We investigate a supervised approach by introducing a hallucination masking mechanism in which labels are generated using a closed-book question-answering task. In addition, we propose to leverage parameter-efficient fine-tuning techniques to train our model on a small amount of data. Our model directly provides answers for 78.2% of the known queries and opts to search for 77.2% of the unknown ones. This results in the API being utilized only 62% of the time.
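The abstract describes generating supervision labels from a closed-book QA pass: queries the model already answers correctly are labeled as directly answerable, while the rest are labeled as needing an external search. The following is a minimal sketch of that idea, assuming a Hugging Face causal LM and a simple exact-match criterion; the model name, function names, and matching rule are illustrative assumptions, not the authors' implementation.

# Hypothetical sketch of the label-generation step described in the abstract.
# Assumptions: an off-the-shelf causal LM and exact-match scoring; the paper's
# actual model, prompting, and answer-matching criterion are not specified here.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # placeholder base model, not the one used in the paper
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

def closed_book_answer(question: str, max_new_tokens: int = 32) -> str:
    """Generate an answer using only the model's parametric knowledge."""
    inputs = tokenizer(question, return_tensors="pt")
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the prompt.
    return tokenizer.decode(output[0][inputs["input_ids"].shape[1]:],
                            skip_special_tokens=True)

def make_label(question: str, gold_answer: str) -> str:
    """Label a query 'answer' if the closed-book prediction contains the gold
    answer (a deliberately simple criterion), otherwise 'search'."""
    prediction = closed_book_answer(question)
    return "answer" if gold_answer.lower() in prediction.lower() else "search"

# Example: building supervision pairs for the self-estimation objective.
dataset = [("Who wrote Hamlet?", "William Shakespeare")]
labels = [(q, make_label(q, a)) for q, a in dataset]

A model fine-tuned (e.g. with a parameter-efficient method) on such labels could then emit either a direct answer or a search request at inference time, which is the behavior the reported 78.2% / 77.2% / 62% figures measure.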
Key words
budgeted search, hallucination