SubIQ: Inverse Soft-Q Learning for Offline Imitation with Suboptimal Demonstrations
CoRR (2024)
Abstract
We consider offline imitation learning (IL), which aims to mimic the expert's
behavior from its demonstrations without further interaction with the
environment. One of the main challenges in offline IL is dealing with the
limited support of expert demonstrations that cover only a small fraction of
the state-action spaces. In this work, we study the setting where expert
demonstrations are limited but complemented by a larger set of sub-optimal
demonstrations of lower expertise levels. Most of the existing offline IL
methods developed for this setting are based on behavior cloning or
distribution matching, where the aim is to match the occupancy distribution of
the imitation policy with that of the expert policy. Such an approach often
suffers from over-fitting, as the expert demonstrations are too limited to
accurately represent any occupancy distribution. On the other hand, since the
sub-optimal sets are much larger, there is a high chance that the imitation policy is biased
towards sub-optimal policies. In this paper, to address these issues, we
propose a new approach based on inverse soft-Q learning, where a regularization
term is added to the training objective, with the aim of aligning the learned
rewards with a pre-assigned reward function that allocates higher weights to
state-action pairs from expert demonstrations, and lower weights to those from
lower expertise levels. On standard benchmarks, our inverse soft-Q learning
approach outperforms other offline IL baselines by a large margin.
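For concreteness, here is a minimal sketch of the kind of regularized objective the abstract describes, assuming an IQ-Learn-style inverse soft-Q formulation; the coefficient $\lambda$, the squared-error form of the alignment term, and the pre-assigned weight function $w(s,a)$ are illustrative assumptions rather than the paper's exact definitions:

$$\min_{Q} \; -\mathcal{J}_{\mathrm{IQ}}(Q) \;+\; \lambda \, \mathbb{E}_{(s,a,s') \sim \mathcal{D}} \Big[ \big( r_Q(s,a,s') - w(s,a) \big)^2 \Big],$$

where $\mathcal{D}$ is the union of expert and sub-optimal demonstrations, $\mathcal{J}_{\mathrm{IQ}}$ is the standard inverse soft-Q objective, and the reward implicitly recovered from $Q$ is

$$r_Q(s,a,s') \;=\; Q(s,a) \;-\; \gamma \, V^{Q}(s'), \qquad V^{Q}(s) \;=\; \log \sum_{a} \exp Q(s,a).$$

Setting $w(s,a)$ high on expert transitions and low on lower-expertise ones pulls the recovered reward towards the expert data, which is the alignment effect the abstract describes.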