Offline Skill Generalization via Task and Motion Planning.
CoRR (2023)
Abstract
This paper presents a novel approach to generalizing robot manipulation
skills by combining a sampling-based task-and-motion planner with an offline
reinforcement learning algorithm. Starting with a small library of scripted
primitive skills (e.g. Push) and object-centric symbolic predicates (e.g.
On(block, plate)), the planner autonomously generates a demonstration dataset
of manipulation skills in the context of a long-horizon task. An offline
reinforcement learning algorithm then extracts a policy from the dataset
without further interactions with the environment and replaces the scripted
skill in the existing library. Refining the skill library improves the
robustness of the planner, which in turn facilitates data collection for more
complex manipulation skills. We validate our approach in simulation on a
block-pushing task and show that the proposed method requires less training
data than conventional reinforcement learning methods. Furthermore, interaction
with the environment is collision-free because of the use of planner
demonstrations, making the approach more amenable to persistent robot learning
in the real world.
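The refinement loop described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: a hand-coded primitive stands in for a scripted skill, planner-driven data collection is reduced to deterministic rollouts on a 1-D block-pushing toy task, and the offline learner is a behavior-cloning lookup table rather than a full offline RL algorithm. All names (`scripted_push`, `collect_demonstrations`, `offline_fit`) are illustrative assumptions.

```python
from collections import Counter

def scripted_push(state):
    """Hand-coded Push primitive: move the block one unit toward the goal (0)."""
    return -1 if state > 0 else 1

def collect_demonstrations(skill, horizon=10):
    """Planner-style data collection: roll out the scripted skill from a range
    of start states and log (state, action) pairs. In this toy setting the
    rollouts are collision-free by construction, mirroring the abstract's claim
    that planner demonstrations avoid unsafe interaction."""
    dataset = []
    for start in range(-5, 6):
        state = start
        for _ in range(horizon):
            if state == 0:  # goal reached
                break
            action = skill(state)
            dataset.append((state, action))
            state += action
    return dataset

def offline_fit(dataset):
    """Stand-in for offline RL: extract a policy from the fixed dataset alone
    (no further environment interaction) by taking the most frequent
    demonstrated action per state, falling back to the overall majority action
    for unseen states."""
    per_state = {}
    for state, action in dataset:
        per_state.setdefault(state, Counter())[action] += 1
    table = {s: c.most_common(1)[0][0] for s, c in per_state.items()}
    default = Counter(a for _, a in dataset).most_common(1)[0][0]
    return lambda state: table.get(state, default)

# Refine the skill library: the learned policy replaces the scripted skill,
# and the (now more robust) library can seed data collection for harder tasks.
skill_library = {"Push": scripted_push}
demos = collect_demonstrations(skill_library["Push"])
skill_library["Push"] = offline_fit(demos)
```

On demonstrated states the cloned policy reproduces the scripted skill exactly; the interesting part of the paper's pipeline, which this sketch only gestures at, is that the learned replacement can generalize beyond the scripted skill's scripted preconditions.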