Learning Vision-based Robotic Manipulation Tasks Sequentially in Offline Reinforcement Learning Settings
arXiv (2023)
Abstract
With the rise of deep reinforcement learning (RL) methods, many complex
robotic manipulation tasks are being solved. However, harnessing the full power
of deep learning requires large datasets. Online RL does not lend itself
readily to this paradigm, because agent-environment interaction is costly and
time-consuming. Many offline-RL algorithms have therefore recently been
proposed for learning robotic tasks. However, most such methods focus on
single-task or multi-task learning, which requires retraining every time a new
task must be learned. Continually learning tasks without forgetting previous
knowledge, combined with the power of offline deep RL, would allow us to scale
the number of tasks by adding them one after another. In this paper, we
investigate the effectiveness of regularisation-based methods, such as synaptic
intelligence, for sequentially learning image-based robotic manipulation tasks
in an offline-RL setup. We evaluate the performance of this combined framework
against common challenges of sequential learning: catastrophic forgetting and
forward knowledge transfer. We performed experiments with different task
combinations to analyse the effect of task ordering, and also investigated the
effect of the number of object configurations and the density of robot
trajectories. We found that learning tasks sequentially helps propagate
knowledge from previous tasks, thereby reducing the time required to learn a
new task. Regularisation-based approaches to continual learning, such as
synaptic intelligence, help mitigate catastrophic forgetting but show only
limited transfer of knowledge from previous tasks.
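To make the regularisation idea concrete, the following is a minimal sketch of the synaptic intelligence (SI) penalty the abstract refers to, using a toy quadratic loss in place of the paper's offline-RL objective. The variable names, the toy task, and the hyperparameter values are illustrative assumptions, not the paper's implementation: SI accumulates a per-parameter path integral during training on one task, converts it into importance weights at the task boundary, and penalises later changes to important parameters.

```python
import numpy as np

# Sketch of synaptic intelligence (SI): names and the toy quadratic
# objective below are assumptions for illustration, not the paper's code.
rng = np.random.default_rng(0)
dim = 4
theta = rng.normal(size=dim)          # current parameters
omega = np.zeros(dim)                 # per-parameter importance (Omega)
path_integral = np.zeros(dim)         # running sum of -grad * delta_theta
theta_task_start = theta.copy()

def grad_task_a(p):
    # gradient of a toy quadratic loss ||p - 1||^2 standing in for task A
    return 2.0 * (p - 1.0)

lr, xi, c = 0.1, 0.01, 1.0            # learning rate, damping, SI strength

# --- train on task A, accumulating the SI path integral ---
for _ in range(200):
    g = grad_task_a(theta)
    delta = -lr * g
    path_integral += -g * delta       # each step's contribution to the loss drop
    theta += delta

# --- at the task boundary: convert the path integral into importances ---
omega += path_integral / ((theta - theta_task_start) ** 2 + xi)
theta_anchor = theta.copy()           # reference weights for the penalty

def si_penalty(p):
    # quadratic surrogate protecting parameters important for task A;
    # added to the next task's loss to mitigate catastrophic forgetting
    return c * np.sum(omega * (p - theta_anchor) ** 2)

print(si_penalty(theta_anchor))       # zero at the anchor
print(si_penalty(theta_anchor + 0.5) > 0.0)
```

During training on the next task, `si_penalty(theta)` would be added to that task's loss, so gradient steps that move important parameters away from `theta_anchor` are discouraged while unimportant parameters remain free to change.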