Attentive One-Shot Meta-Imitation Learning From Visual Demonstration

IEEE International Conference on Robotics and Automation (2022)

Abstract
The ability to apply a previously learned skill (e.g., pushing) to a new task (context or object) is an important requirement for new-age robots. This paper addresses this problem by proposing a deep meta-imitation learning framework, comprising an attentive embedding network and a control network, that learns a new task in an end-to-end manner from only one or a few visual demonstrations. Feature embeddings learned with spatial attention are shown to provide higher embedding and control accuracy than state-of-the-art methods such as TecNet [7] and MIL [4]. The interaction between the embedding and control networks is improved through multiplicative skip-connections, which are shown to mitigate overfitting of the trained model. The superiority of the proposed model is established through rigorous experimentation on a publicly available dataset and a new dataset created using PyBullet [36]. Several ablation studies justify the design choices.
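To make the two architectural ideas in the abstract concrete, below is a minimal PyTorch sketch of (a) an embedding network that pools CNN features with a learned spatial-attention map into a task embedding, and (b) a control network whose hidden features are gated element-wise by that embedding, i.e., a multiplicative skip-connection. This is not the authors' implementation; all layer sizes, module names, and the single-frame demo input are illustrative assumptions.

```python
import torch
import torch.nn as nn


class SpatialAttentionEmbedding(nn.Module):
    """Toy embedding network: CNN features are reweighted by a learned
    spatial-attention map, then pooled into a fixed-size task embedding.
    A sketch of the idea only; layer sizes are illustrative."""

    def __init__(self, embed_dim=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 32, 5, stride=2, padding=2), nn.ReLU(),
        )
        self.attn = nn.Conv2d(32, 1, 1)  # 1x1 conv -> spatial attention logits
        self.fc = nn.Linear(32, embed_dim)

    def forward(self, demo_frames):                    # (B, 3, H, W)
        f = self.conv(demo_frames)                     # (B, 32, h, w)
        a = torch.softmax(self.attn(f).flatten(2), -1) # (B, 1, h*w) weights
        f = (f.flatten(2) * a).sum(-1)                 # attention-weighted pooling
        z = self.fc(f)
        return z / z.norm(dim=-1, keepdim=True)        # unit-norm task embedding


class ControlNetwork(nn.Module):
    """Policy head whose hidden features are modulated multiplicatively
    by the task embedding (the 'multiplicative skip-connection')."""

    def __init__(self, obs_dim=64, embed_dim=64, act_dim=7):
        super().__init__()
        self.obs_fc = nn.Linear(obs_dim, embed_dim)
        self.head = nn.Sequential(
            nn.Linear(embed_dim, 64), nn.ReLU(), nn.Linear(64, act_dim)
        )

    def forward(self, obs_feat, task_embed):
        h = torch.relu(self.obs_fc(obs_feat))
        return self.head(h * task_embed)               # element-wise gating


# One-shot usage: embed a single demonstration frame, condition the policy on it.
embed_net, ctrl_net = SpatialAttentionEmbedding(), ControlNetwork()
demo = torch.randn(1, 3, 64, 64)   # one demonstration frame (hypothetical size)
obs = torch.randn(1, 64)           # current observation features (hypothetical)
action = ctrl_net(obs, embed_net(demo))
print(action.shape)                # torch.Size([1, 7])
```

The element-wise product lets the task embedding gate which observation features reach the policy head, which is one plausible reading of how a multiplicative skip-connection tightens the coupling between the two networks.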
Keywords
visual demonstration, learning, one-shot, meta-imitation