General Purpose Verification for Chain of Thought Prompting
arXiv (2024)
Abstract
Many of the recent capabilities demonstrated by Large Language Models (LLMs)
arise primarily from their ability to exploit contextual information. In this
paper, we explore ways to improve the reasoning capabilities of LLMs through (1)
exploration of different chains of thought and (2) validation of the individual
steps of the reasoning process. We propose three general principles that a
model should adhere to while reasoning: (i) Relevance, (ii) Mathematical
Accuracy, and (iii) Logical Consistency. We apply these constraints to the
reasoning steps generated by the LLM to improve the accuracy of the final
generation. The constraints are applied in the form of verifiers: the model
itself is asked to verify if the generated steps satisfy each constraint. To
further steer the generations towards high-quality solutions, we use the
perplexity of the reasoning steps as an additional verifier. We evaluate our
method on 4 distinct types of reasoning tasks, spanning a total of 9 different
datasets. Experiments show that our method consistently outperforms vanilla
generation and, on 6 out of the 9 datasets, outperforms best-of-N sampling,
which samples N reasoning chains and picks the lowest-perplexity generation.
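The abstract describes verification as the model checking its own reasoning steps against the three constraints. Below is a minimal sketch of how such step-wise self-verification could be wired up, assuming a caller-supplied `query_llm` function and illustrative prompt wording; neither is taken from the paper.

```python
from typing import Callable

# Illustrative yes/no checks for the three constraints; the paper's actual
# verifier prompts are not given in the abstract.
CONSTRAINTS = (
    "Is this step relevant to answering the question?",
    "Is every mathematical operation in this step correct?",
    "Does this step follow logically from the steps before it?",
)

def verify_step(
    query_llm: Callable[[str], str],  # assumed LLM interface: prompt -> completion
    question: str,
    prior_steps: list[str],
    step: str,
) -> bool:
    """Ask the model itself whether a reasoning step satisfies all constraints."""
    context = "\n".join(prior_steps) if prior_steps else "(none)"
    for check in CONSTRAINTS:
        prompt = (
            f"Question: {question}\n"
            f"Reasoning so far:\n{context}\n"
            f"Proposed next step: {step}\n"
            f"{check} Answer yes or no."
        )
        if not query_llm(prompt).strip().lower().startswith("yes"):
            return False  # reject the step if any constraint fails
    return True
```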
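The best-of-N baseline from the last sentence can likewise be sketched in a few lines: sample N chains, score each by perplexity (the exponential of the mean negative token log-likelihood), and keep the lowest. The `sample_chain` callable here is an assumed stand-in for whatever sampling interface is available, not the paper's API.

```python
import math
from typing import Callable

def best_of_n(
    sample_chain: Callable[[str], tuple[str, list[float]]],  # question -> (chain, per-token log-probs)
    question: str,
    n: int = 8,
) -> str:
    """Sample n reasoning chains and return the one with the lowest perplexity."""
    best_chain, best_ppl = "", float("inf")
    for _ in range(n):
        chain, logprobs = sample_chain(question)
        # Perplexity = exp(mean negative log-likelihood per token).
        ppl = math.exp(-sum(logprobs) / max(len(logprobs), 1))
        if ppl < best_ppl:
            best_chain, best_ppl = chain, ppl
    return best_chain
```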