Pushing and grasping for autonomous learning of object models with foveated vision

2015 International Conference on Advanced Robotics (ICAR)

Cited by 3 | Viewed 23
Abstract
In this paper we address the problem of autonomous learning of the visual appearance of unknown objects. We propose a method that integrates foveated vision on a humanoid robot with autonomous object discovery and explorative manipulation actions such as pushing, grasping, and in-hand rotation. The humanoid robot starts by searching for objects in a visual scene and generating hypotheses about which parts of the scene could constitute an object. The hypothetical objects are verified by applying pushing actions, where the existence of an object is considered confirmed if the visual features exhibit rigid body motion. In our previous work we showed that partial object models can be learnt by a sequential application of several robot pushes, which generates views of the object's appearance from different viewpoints. However, with this approach it is not possible to guarantee that the object will be seen from all relevant viewpoints, even after a large number of pushes have been carried out. Instead, in this paper we show that confirmed object hypotheses contain enough information to enable grasping, and that object models can be acquired more effectively by sequentially rotating the object. We demonstrate the effectiveness of the new system by comparing object recognition results after the robot learns object models in two different ways: (1) learning from images acquired by several pushes, and (2) learning from images acquired by an initial push followed by several grasp-rotate-release action cycles.
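The rigid-body verification step described above can be illustrated with a minimal sketch: given feature points tracked across a push, fit a single rigid transform to the correspondences and confirm the object hypothesis if most features agree with it. The use of the Kabsch algorithm, the function names, and the thresholds (tol, min_inlier_ratio) are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def fit_rigid_transform(src, dst):
    """Least-squares rigid transform (rotation R, translation t) mapping
    src -> dst, via the Kabsch algorithm. src, dst are (N, D) arrays of
    corresponding feature points before and after the push."""
    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_mean).T @ (dst - dst_mean)
    U, _, Vt = np.linalg.svd(H)
    # Correct an improper rotation (reflection) if the determinant is -1.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0] * (src.shape[1] - 1) + [d])
    R = Vt.T @ D @ U.T
    t = dst_mean - R @ src_mean
    return R, t

def hypothesis_confirmed(pts_before, pts_after, tol=2.0, min_inlier_ratio=0.8):
    """Confirm an object hypothesis if the tracked features move consistently
    with one rigid body motion (tolerance values here are assumptions)."""
    R, t = fit_rigid_transform(pts_before, pts_after)
    residuals = np.linalg.norm(pts_after - (pts_before @ R.T + t), axis=1)
    return np.mean(residuals < tol) >= min_inlier_ratio
```

A hypothesis covering two independently moving regions, or static background, would yield large residuals for many features and fail the inlier test, which matches the paper's criterion that only coherently moving feature sets count as confirmed objects.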
Keywords
autonomous learning,object model,foveated vision,visual appearance,unknown object,humanoid robot,autonomous object discovery,explorative manipulation action,visual scene,hypothetical object,pushing action,rigid body motion,sequential application,object appearance,object recognition,grasp-rotate-release action cycle