A Vision-Guided Multi-Robot Cooperation Framework for Learning-by-Demonstration and Task Reproduction

2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)(2017)

Abstract
This paper presents a vision-based learning-by-demonstration approach that enables robots to learn and complete a manipulation task cooperatively. With this method, a vision system is involved in both the task demonstration and reproduction stages. An expert first demonstrates how to use tools to perform a task while the tool motion is observed by a vision system. The demonstrations are then encoded with a statistical model to generate a reference motion trajectory. Equipped with the same tools and the learned model, the robot is guided by vision to reproduce the task. Task performance is evaluated in terms of both accuracy and speed, but simply increasing the robot's speed can decrease the reproduction accuracy. To address this, a dual-rate Kalman filter is employed to compensate for latency between the robot and the vision system. More importantly, the sampling rate of the reference trajectory and the robot speed are optimised adaptively according to the learned motion model. We demonstrate the effectiveness of our approach on two tasks: a trajectory reproduction task and a bimanual sewing task. We show that with our vision-based approach, the robots can learn effectively from demonstrations and perform accurate and fast task reproduction. The proposed approach generalises to other manipulation tasks where bimanual or multi-robot cooperation is required.
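The abstract names a dual-rate Kalman filter for compensating latency between the vision system and the robot, but gives no implementation details. The sketch below illustrates the general dual-rate idea only: the state is predicted at the robot's fast control rate and corrected only when a slower vision measurement arrives. The constant-velocity model, the sampling rates, and the noise matrices are assumptions chosen for illustration, not values from the paper.

```python
# Minimal dual-rate Kalman filter sketch (illustrative; not the paper's exact formulation).
import numpy as np

class DualRateKF:
    """Predicts at the fast robot control rate; corrects only when a vision sample arrives."""

    def __init__(self, dt):
        self.F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity state transition (assumed)
        self.H = np.array([[1.0, 0.0]])              # vision measures position only (assumed)
        self.Q = np.diag([1e-6, 1e-4])               # process noise covariance (assumed)
        self.R = np.array([[1e-4]])                  # vision measurement noise (assumed)
        self.x = np.zeros((2, 1))                    # state: [position, velocity]
        self.P = np.eye(2)                           # state covariance

    def predict(self):
        # Fast-rate step, run every robot control cycle.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q

    def update(self, z):
        # Slow-rate step, run only when a vision measurement is available.
        y = np.array([[z]]) - self.H @ self.x        # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)     # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(2) - K @ self.H) @ self.P

# Toy usage: 500 Hz control loop with a ~20 Hz vision update (both rates assumed).
dt_robot, vision_every = 0.002, 25
kf = DualRateKF(dt_robot)
for k in range(500):
    kf.predict()                                     # estimate available at every robot step
    if k % vision_every == 0:
        true_pos = 0.1 * k * dt_robot                # toy reference motion
        kf.update(true_pos + np.random.normal(0, 1e-2))
```

In this arrangement the robot controller can query the filter's predicted state between vision samples, which is one plausible way the latency between the two subsystems could be bridged.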
Keywords
bimanual sewing task, trajectory reproduction task, learned motion model, robot speed, reproduction accuracy, reference motion trajectory, statistical model, vision system, multirobot manipulation, learning-by-demonstration approach, multirobot cooperation framework, fast task reproduction