Information Re-Organization Improves Reasoning in Large Language Models
arXiv (2024)
Abstract
Improving the reasoning capabilities of large language models (LLMs) has
attracted considerable interest. Recent approaches primarily focus on improving
the reasoning process to yield a more precise final answer. However, in
scenarios involving contextually aware reasoning, these methods neglect the
importance of first identifying logical relationships from the context before
proceeding with the reasoning. This oversight could lead to a superficial
understanding and interaction with the context, potentially undermining the
quality and reliability of the reasoning outcomes. In this paper, we propose an
information re-organization (InfoRE) method before proceeding with the
reasoning to enhance the reasoning ability of LLMs. We first re-organize the
contextual content, e.g., documents or paragraphs, to make its logical
relationships explicit. Then, we utilize the re-organized information in the
reasoning process. This enables LLMs to deeply understand
the contextual content by clearly perceiving these logical relationships. To
demonstrate the effectiveness of our approach in improving the reasoning
ability, we conduct experiments using Llama2-70B, GPT-3.5, and GPT-4 on various
contextually aware multi-hop reasoning tasks. Using only a zero-shot setting,
our method achieves an average improvement of 3% across all tasks,
highlighting its potential to improve the reasoning performance of LLMs. Our
source code is available at https://github.com/hustcxx/InfoRE.
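The two-stage pipeline the abstract describes (first re-organize the context to surface logical relationships, then reason over the re-organized information) can be sketched as two chained prompts. This is a minimal illustration, not the authors' implementation: the prompt wording, the relationship types, and the `llm` callable are all hypothetical placeholders.

```python
from typing import Callable

def build_reorganize_prompt(context: str) -> str:
    """Stage 1: ask the model to extract logical relationships from the raw
    context (prompt wording is a hypothetical example)."""
    return (
        "Re-organize the following context into explicit logical "
        "relationships (e.g., cause-effect, comparison, temporal order):\n\n"
        f"{context}"
    )

def build_reasoning_prompt(reorganized: str, question: str) -> str:
    """Stage 2: reason over the re-organized information rather than the
    raw text."""
    return (
        f"Re-organized context:\n{reorganized}\n\n"
        f"Question: {question}\nAnswer step by step."
    )

def infore_answer(llm: Callable[[str], str], context: str, question: str) -> str:
    """Chain the two stages: re-organize first, then reason (zero-shot)."""
    reorganized = llm(build_reorganize_prompt(context))
    return llm(build_reasoning_prompt(reorganized, question))
```

Any completion function can be plugged in as `llm`; the point is only that the reasoning prompt consumes the stage-1 output instead of the original passage.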