Evaluating the External and Parametric Knowledge Fusion of Large Language Models
CoRR (2024)
Abstract
Integrating external knowledge into large language models (LLMs) presents a
promising solution to overcome the limitations imposed by their antiquated and
static parametric memory. Prior studies, however, have tended to over-rely
on external knowledge, underestimating the valuable contributions of an LLM's
intrinsic parametric knowledge. The efficacy of LLMs in blending external and
parametric knowledge remains largely unexplored, especially in cases where
external knowledge is incomplete and necessitates supplementation by their
parametric knowledge. We propose to deconstruct knowledge fusion into four
distinct scenarios, offering the first thorough investigation of LLM behavior
across each. We develop a systematic pipeline for data construction and
knowledge infusion to simulate these fusion scenarios, facilitating a series of
controlled experiments. Our investigation reveals that enhancing parametric
knowledge within LLMs can significantly bolster their capability for knowledge
integration. Nonetheless, we identify persistent challenges in memorizing and
eliciting parametric knowledge, and determining parametric knowledge
boundaries. Our findings aim to steer future explorations on harmonizing
external and parametric knowledge within LLMs.