
Boosting of Thoughts: Trial-and-Error Problem Solving with Large Language Models

ICLR 2024

Abstract
The reasoning performance of Large Language Models (LLMs) on a wide range of problems critically relies on chain-of-thought prompting, which involves providing a few chain-of-thought demonstrations as exemplars in prompts. Recent work, e.g., Tree of Thoughts, has pointed out the importance of exploration and self-evaluation in reasoning step selection for complex problem solving. In this paper, we present Boosting of Thoughts (BoT), an automated prompting framework for problem solving with LLMs that iteratively explores and self-evaluates many trees of thoughts in order to acquire an ensemble of trial-and-error reasoning experiences, which serves as a new form of prompting for solving the complex problem. Starting from a simple prompt without requiring examples, BoT iteratively explores and evaluates a large collection of reasoning steps and, more importantly, uses the LLM's error analysis of them to explicitly revise the prompt, which in turn enhances reasoning step generation, until a final answer is attained. Our experiments with GPT-4 and Llama2 across extensive complex mathematical problems demonstrate that BoT consistently achieves higher or comparable problem-solving rates than other advanced prompting approaches.
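The iterative loop the abstract describes can be sketched in a few lines. This is a minimal illustrative sketch, not the authors' implementation: the function names (`boost_of_thoughts`, `extract_score`), the flat list of candidate chains (standing in for full trees of thoughts), and the prompt wording are all assumptions; `llm` is any callable that maps a prompt string to a completion.

```python
import re

def extract_score(feedback: str) -> float:
    """Pull the first number out of the LLM's self-evaluation; 0 if none found."""
    m = re.search(r"\d+(\.\d+)?", feedback)
    return float(m.group()) if m else 0.0

def boost_of_thoughts(problem: str, llm, iterations: int = 3, n_candidates: int = 4):
    """Hypothetical sketch of the BoT loop: explore candidate reasoning chains,
    self-evaluate them, and fold the error analysis back into the prompt."""
    experience = ""  # accumulated trial-and-error analysis from prior iterations
    best_answer, best_score = None, float("-inf")
    for _ in range(iterations):
        # The prompt starts simple (no exemplars) and grows with accumulated experience.
        prompt = f"{experience}\nProblem: {problem}\nReason step by step."
        # Explore several candidate reasoning chains (a stand-in for trees of thoughts).
        candidates = [llm(prompt) for _ in range(n_candidates)]
        for chain in candidates:
            # Self-evaluation: the LLM scores and critiques its own reasoning.
            feedback = llm(f"Score 0-10 and critique this reasoning:\n{chain}")
            score = extract_score(feedback)
            if score > best_score:
                best_score, best_answer = score, chain
            # "Boosting": error analysis is appended to the prompt for the next round.
            experience += f"\nPrior attempt:\n{chain}\nAnalysis:\n{feedback}\n"
    return best_answer
```

A deterministic stub for `llm` is enough to exercise the control flow; with a real model, the accumulated `experience` is what distinguishes this loop from independent sampling.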
Key words
Large Language Models, Prompt Engineering, Boosting Mechanism