Don't Throw Away Your Value Model! Generating More Preferable Text with Value-Guided Monte-Carlo Tree Search Decoding
Abstract
Inference-time search algorithms such as Monte-Carlo Tree Search (MCTS) may seem unnecessary when generating natural language text based on state-of-the-art reinforcement learning such as Proximal Policy Optimization (PPO). In this paper, we demonstrate that it is possible to get extra mileage out of PPO by integrating MCTS on top. The key idea is not to throw out the value network, a byproduct of PPO training for evaluating partial output sequences, when decoding text out of the policy network. More concretely, we present a novel value-guided decoding algorithm called PPO-MCTS, which can integrate the value network from PPO to work closely with the policy network during inference-time generation. Compared to prior approaches based on MCTS for controlled text generation, the key strength of our approach is to reduce the fundamental mismatch of the scoring mechanisms of the partial outputs between training and test. Evaluations on four text generation tasks demonstrate that PPO-MCTS greatly improves the preferability of generated text compared to the standard practice of using only the PPO policy. Our results demonstrate the promise of search algorithms even on top of the aligned language models from PPO, and the under-explored benefit of the value network.
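The abstract describes using the PPO value network to score partial output sequences while tree search selects the next token. The sketch below is a minimal, illustrative take on that idea and not the paper's implementation: `policy_fn`, `value_fn`, `c_puct`, `num_simulations`, and `top_k` are hypothetical stand-ins chosen for illustration, and the exact way PPO-MCTS combines rewards and value estimates is defined in the paper, not here.

```python
import math


class Node:
    """One partial output sequence (a token-id list) in the search tree."""
    def __init__(self, tokens, prior):
        self.tokens = tokens        # prefix generated so far
        self.prior = prior          # policy probability of the last token
        self.children = {}          # token id -> Node
        self.visit_count = 0
        self.value_sum = 0.0

    def mean_value(self):
        return self.value_sum / self.visit_count if self.visit_count else 0.0


def puct(parent, child, c_puct=1.0):
    # PUCT selection rule: exploit the running value estimate, explore tokens
    # the policy rates as likely but the tree has visited rarely.
    explore = c_puct * child.prior * math.sqrt(parent.visit_count) / (1 + child.visit_count)
    return child.mean_value() + explore


def mcts_next_token(prefix, policy_fn, value_fn, num_simulations=32, top_k=10):
    """Pick the next token for `prefix` via value-guided MCTS.

    policy_fn(tokens) -> {token_id: probability} over next tokens
    value_fn(tokens)  -> scalar estimate of the eventual reward of this prefix
    Both are hypothetical stand-ins for the PPO policy and value networks.
    """
    root = Node(list(prefix), prior=1.0)
    for _ in range(num_simulations):
        node, path = root, [root]
        # 1. Selection: descend with PUCT until reaching an unexpanded node.
        while node.children:
            parent = node
            node = max(parent.children.values(), key=lambda c: puct(parent, c))
            path.append(node)
        # 2. Expansion: add children for the policy's top-k next tokens.
        probs = policy_fn(node.tokens)
        for tok, p in sorted(probs.items(), key=lambda kv: -kv[1])[:top_k]:
            node.children[tok] = Node(node.tokens + [tok], prior=p)
        # 3. Evaluation: score the partial sequence with the value network
        #    instead of rolling out to a complete generation.
        leaf_value = value_fn(node.tokens)
        # 4. Backup: credit every node on the selection path.
        for n in path:
            n.visit_count += 1
            n.value_sum += leaf_value
    # Emit the most-visited child, the usual MCTS decoding choice.
    return max(root.children.items(), key=lambda kv: kv[1].visit_count)[0]
```

To decode a full sequence, a routine like this would be called once per emitted token, appending the chosen token to the prefix each time; the paper's actual algorithm includes further details that this sketch omits.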