Principled Reinforcement Learning with Human Feedback from Pairwise or K-wise Comparisons
arXiv (2023)
Abstract
We provide a theoretical framework for Reinforcement Learning with Human
Feedback (RLHF). Our analysis shows that when the true reward function is
linear, the widely used maximum likelihood estimator (MLE) converges under both
the Bradley-Terry-Luce (BTL) model and the Plackett-Luce (PL) model. However,
we show that when training a policy based on the learned reward model, MLE
fails while a pessimistic MLE provides policies with improved performance under
certain coverage assumptions. Additionally, we demonstrate that under the PL
model, the true MLE and an alternative MLE that splits the K-wise comparison
into pairwise comparisons both converge. Moreover, the true MLE is
asymptotically more efficient. Our results validate the empirical success of
existing RLHF algorithms in InstructGPT and provide new insights for algorithm
design. Furthermore, our results unify the problem of RLHF and max-entropy
Inverse Reinforcement Learning (IRL), and provide the first sample complexity
bound for max-entropy IRL.
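To make the estimator discussed above concrete, the following is a minimal sketch (not the authors' implementation) of the pairwise maximum likelihood estimator under the BTL model with a linear reward r(x) = theta^T phi(x), fitted by gradient descent on the convex negative log-likelihood. All names (fit_btl_mle, btl_negative_log_likelihood) and the synthetic data are illustrative assumptions.

import numpy as np

def btl_negative_log_likelihood(theta, phi_winners, phi_losers):
    # Negative log-likelihood of pairwise comparisons under the BTL model
    # with linear reward r(x) = theta^T phi(x). Row i of phi_winners / phi_losers
    # holds the features of the preferred / rejected item in comparison i.
    margins = (phi_winners - phi_losers) @ theta
    # P(winner preferred) = sigmoid(margin), so NLL = sum log(1 + exp(-margin))
    return np.sum(np.logaddexp(0.0, -margins))

def fit_btl_mle(phi_winners, phi_losers, lr=0.1, n_steps=2000):
    # Illustrative gradient descent on the convex NLL (any convex solver works).
    theta = np.zeros(phi_winners.shape[1])
    diffs = phi_winners - phi_losers
    for _ in range(n_steps):
        margins = diffs @ theta
        # Gradient of the NLL with respect to theta
        grad = -(diffs.T @ (1.0 / (1.0 + np.exp(margins))))
        theta -= lr * grad / len(margins)
    return theta

# Tiny synthetic check: preferences sampled from a known theta_true
rng = np.random.default_rng(0)
theta_true = np.array([1.0, -2.0])
phi_a = rng.normal(size=(500, 2))
phi_b = rng.normal(size=(500, 2))
p_a_wins = 1.0 / (1.0 + np.exp(-(phi_a - phi_b) @ theta_true))
a_wins = rng.random(500) < p_a_wins
phi_w = np.where(a_wins[:, None], phi_a, phi_b)
phi_l = np.where(a_wins[:, None], phi_b, phi_a)
print(fit_btl_mle(phi_w, phi_l))  # recovers theta_true up to estimation noise

The paper's further steps (the pessimistic MLE used for policy optimization, and the K-wise Plackett-Luce likelihood) build on this same estimator but are not shown here.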