The Edge-of-Reach Problem in Offline Model-Based Reinforcement Learning
CoRR (2024)
Abstract
Offline reinforcement learning aims to train agents from
pre-collected datasets; however, this comes with the added challenge of
estimating the value of behaviors not covered in the dataset. Model-based
methods offer a solution by allowing agents to collect additional synthetic
data via rollouts in a learned dynamics model. The prevailing theoretical
understanding is that this can then be viewed as online reinforcement learning
in an approximate dynamics model, and any remaining gap is therefore assumed to
be due to the imperfect dynamics model. Surprisingly, however, we find that if
the learned dynamics model is replaced by the true error-free dynamics,
existing model-based methods completely fail. This reveals a major
misconception. Our subsequent investigation finds that the general procedure
used in model-based algorithms results in the existence of a set of
edge-of-reach states which trigger pathological value overestimation and
collapse in Bellman-based algorithms. We term this the edge-of-reach problem.
Building on this insight, we fill gaps in existing theory and explain how prior
model-based methods inadvertently address the true underlying
edge-of-reach problem. Finally, we propose Reach-Aware Value Learning (RAVL), a
simple and robust method that directly addresses the edge-of-reach problem and
achieves strong performance across both proprioceptive and pixel-based
benchmarks. Code open-sourced at: https://github.com/anyasims/edge-of-reach.
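The rollout procedure described above can be illustrated with a toy sketch. This is not the paper's code; the dynamics, horizon, and action set below are hypothetical. It shows how truncated k-step rollouts from dataset states create states that appear only as Bellman targets and are never rolled out from — the edge-of-reach states:

```python
import random

# Toy illustration (not the paper's code): k-step rollouts from dataset
# states in a deterministic dynamics model. States visited only at the
# final rollout step appear as Bellman *targets* but are never rolled
# out from, so their value estimates receive no correction.

def dynamics(s, a):
    return s + a  # hypothetical 1-D integer dynamics

dataset_states = [0]      # start states drawn from the offline dataset
horizon_k = 3             # short rollout horizon, as in MBPO-style methods
actions = [1]             # always move right, for clarity

rolled_out_from = set()   # states where the policy takes an action
seen_as_target = set()    # states appearing as next-states (Bellman targets)

for s0 in dataset_states:
    s = s0
    for _ in range(horizon_k):
        rolled_out_from.add(s)
        a = random.choice(actions)
        s = dynamics(s, a)
        seen_as_target.add(s)

edge_of_reach = seen_as_target - rolled_out_from
print(edge_of_reach)  # {3}: reached only at the final step, never rolled out from
```

Because values at such states are never updated from real successor samples, errors there propagate backwards through bootstrapped Bellman targets, which is the pathological overestimation the abstract describes.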