Don't throw away your value model! Generating more preferable text with Value-Guided Monte-Carlo Tree Search decoding
arXiv (2023)
Abstract
Inference-time search algorithms such as Monte-Carlo Tree Search (MCTS) may
seem unnecessary when generating natural language text based on
state-of-the-art reinforcement learning such as Proximal Policy Optimization
(PPO). In this paper, we demonstrate that it is possible to get extra mileage
out of PPO by integrating MCTS on top. The key idea is not to throw out the
value network, a byproduct of PPO training for evaluating partial output
sequences, when decoding text out of the policy network. More concretely, we
present a novel value-guided decoding algorithm called PPO-MCTS, which can
integrate the value network from PPO to work closely with the policy network
during inference-time generation. Compared to prior MCTS-based approaches to
controlled text generation, the key strength of our approach is that it reduces
the fundamental mismatch in how partial outputs are scored during training and
at test time. Evaluation on four text generation tasks demonstrates
that PPO-MCTS greatly improves the preferability of generated text compared to
the standard practice of using only the PPO policy. Our results demonstrate the
promise of search algorithms even on top of the aligned language models from
PPO, and the under-explored benefit of the value network.
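To make the idea concrete, below is a minimal, self-contained Python sketch of value-guided MCTS decoding: a PUCT-style select/expand/evaluate/backpropagate loop in which a value network scores partial sequences at leaf nodes. The functions `policy_logprobs` and `value_estimate` are hypothetical toy stand-ins for the PPO policy and value networks, and the whole sketch is an illustration under those assumptions, not the paper's actual PPO-MCTS implementation.

```python
# Toy sketch of value-guided MCTS decoding. `policy_logprobs` and
# `value_estimate` are HYPOTHETICAL stubs standing in for the PPO policy
# and value networks; this is not the paper's implementation.
import math

VOCAB = list("ab$")   # toy vocabulary; '$' terminates a sequence
C_PUCT = 1.0          # exploration constant in the PUCT rule

def policy_logprobs(seq):
    """Stub for the PPO policy: uniform prior over the toy vocabulary."""
    p = 1.0 / len(VOCAB)
    return {tok: math.log(p) for tok in VOCAB}

def value_estimate(seq):
    """Stub for the PPO value network: scores a *partial* sequence."""
    return seq.count("a") / (len(seq) + 1)  # toy preference for 'a'

class Node:
    def __init__(self, seq, prior):
        self.seq, self.prior = seq, prior
        self.children = {}                   # token -> Node
        self.visits, self.value_sum = 0, 0.0

    def q(self):
        return self.value_sum / self.visits if self.visits else 0.0

def select_child(node):
    """PUCT: balance the mean value (Q) against the policy prior (P)."""
    total = sum(c.visits for c in node.children.values())
    def score(c):
        u = C_PUCT * c.prior * math.sqrt(total + 1) / (1 + c.visits)
        return c.q() + u
    return max(node.children.values(), key=score)

def simulate(root):
    """One MCTS simulation: select a leaf, expand it, evaluate it with
    the value network, and backpropagate the value along the path."""
    path, node = [root], root
    while node.children:
        node = select_child(node)
        path.append(node)
    if not node.seq.endswith("$"):           # expand a non-terminal leaf
        for tok, lp in policy_logprobs(node.seq).items():
            node.children[tok] = Node(node.seq + tok, math.exp(lp))
    v = value_estimate(node.seq)             # value net scores the leaf
    for n in path:                           # backpropagate
        n.visits += 1
        n.value_sum += v

def mcts_decode(prompt="", max_len=8, sims=50):
    seq = prompt
    while len(seq) < max_len and not seq.endswith("$"):
        root = Node(seq, prior=1.0)
        for _ in range(sims):
            simulate(root)
        # commit the most-visited token, then re-root and continue
        best = max(root.children.items(), key=lambda kv: kv[1].visits)
        seq += best[0]
    return seq

print(mcts_decode())
```

Committing the most-visited child at each step is one common design choice; in the real setting the two stubs would be replaced by forward passes of the trained PPO policy and value networks over the partial sequence.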