SMORE: Score Models for Offline Goal-Conditioned Reinforcement Learning
ICLR 2024
Abstract
Offline Goal-Conditioned Reinforcement Learning (GCRL) is tasked with
learning to achieve multiple goals in an environment purely from offline
datasets using sparse reward functions. Offline GCRL is pivotal for developing
generalist agents capable of leveraging pre-existing datasets to learn diverse
and reusable skills without hand-engineering reward functions. However,
contemporary approaches to GCRL based on supervised learning and contrastive
learning are often suboptimal in the offline setting. An alternative
perspective on GCRL optimizes for occupancy matching, but necessitates learning
a discriminator, which subsequently serves as a pseudo-reward for downstream
RL. Inaccuracies in the learned discriminator can cascade, negatively
influencing the resulting policy. We present a novel approach to GCRL under a
new lens of mixture-distribution matching, leading to our discriminator-free
method: SMORe. The key insight is combining the occupancy matching perspective
of GCRL with a convex dual formulation to derive a learning objective that can
better leverage suboptimal offline data. SMORe learns scores or unnormalized
densities representing the importance of taking an action at a state for
reaching a particular goal. SMORe is principled, and our extensive experiments
on the fully offline GCRL benchmark, composed of robot manipulation and
locomotion tasks including high-dimensional observations, show that SMORe
outperforms state-of-the-art baselines by a significant margin.
Keywords
Robot Learning, Goal-Conditioned Reinforcement Learning, Deep Reinforcement Learning
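
The abstract describes SMORe only at a high level: a discriminator-free method that learns scores (unnormalized densities) over state-action-goal tuples, derived from a mixture-distribution occupancy-matching view and its convex dual. As a purely illustrative sketch of what "learn a goal-conditioned score and act greedily with respect to it" could look like, the snippet below defines a hypothetical score network and action-selection step. The architecture, names, and selection scheme are assumptions for illustration; they are not the paper's actual objective or implementation.

# Illustrative sketch only. The network layout and greedy action selection
# below are assumptions made for exposition; the real SMORe training
# objective comes from a convex dual of a mixture-distribution
# occupancy-matching problem and is not reproduced here.
import torch
import torch.nn as nn

class ScoreNetwork(nn.Module):
    """Unnormalized score f(s, a, g): how useful action a at state s is for reaching goal g."""
    def __init__(self, state_dim: int, action_dim: int, goal_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim + goal_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, state, action, goal):
        # state: (batch, state_dim), action: (batch, action_dim), goal: (batch, goal_dim)
        return self.net(torch.cat([state, action, goal], dim=-1)).squeeze(-1)

def greedy_action(score_net, state, candidate_actions, goal):
    """Pick the candidate action with the highest score for the given goal.

    state: (state_dim,), goal: (goal_dim,), candidate_actions: (n, action_dim).
    """
    n = candidate_actions.shape[0]
    states = state.expand(n, -1)
    goals = goal.expand(n, -1)
    scores = score_net(states, candidate_actions, goals)
    return candidate_actions[scores.argmax()]

The point of the sketch is only that the learned quantity is a score over (state, action, goal) tuples rather than a discriminator-derived pseudo-reward; how that score is trained from suboptimal offline data is the paper's contribution and should be taken from the paper itself.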