A Principled Framework for Knowledge-enhanced Large Language Model.
CoRR(2023)
Abstract
Large Language Models (LLMs) are versatile, yet they often falter in tasks
requiring deep and reliable reasoning due to issues such as hallucination,
which limits their applicability in critical scenarios. This paper introduces
a rigorously designed framework for building LLMs that effectively anchor
knowledge and employ a closed-loop reasoning process, enhancing their
capacity for in-depth analysis. We dissect the framework to illustrate the
contribution of each component to the LLMs' performance, and offer a
theoretical assurance of improved reasoning under well-defined assumptions.