Multimodal autonomous tool analyses and appropriate application

2011 11th IEEE-RAS International Conference on Humanoid Robots (2011)

Abstract
In this work we propose a method to extract visual features from a tool held in a robot's hand and to derive from them basic properties of how to handle the tool correctly. We want to show how a robot can improve its accuracy in certain tasks through visual exploration of geometric features. We also present methods to extend the proprioception of the robot's arm to the new end effector formed by the tool. By combining 3D and 2D data, it is possible to extract features such as geometric edges, flat surfaces, and concavities. From these features we can distinguish several classes of objects and make basic measurements of potential contact areas and other properties relevant to performing tasks. We also present a controller that uses the relative position or orientation of such features as constraints for manipulation tasks in the world; such a controller makes it easy to model complex tasks like pancake flipping or sausage fishing. The extension of proprioception is achieved by a generalized filter setup for a set of force-torque sensors that allows the detection of indirect contacts made through a tool and the extraction of basic information, such as the approximate contact direction, from the sensor data.
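The abstract's final idea, detecting indirect contact through a tool from wrist force-torque readings and recovering an approximate contact direction, can be illustrated with a minimal sketch. This is not the paper's actual filter setup; it assumes a single wrist sensor, a known no-contact baseline force (tool weight plus sensor bias), a simple exponential low-pass filter, and a magnitude threshold; the function name and parameters are hypothetical.

```python
import numpy as np

def detect_tool_contact(force_samples, baseline, alpha=0.2, threshold=1.0):
    """Sketch: detect indirect contact through a hand-held tool.

    force_samples: (N, 3) array of force readings [N] from a wrist sensor.
    baseline:      (3,) expected force with no contact (tool weight + bias).
    alpha:         exponential low-pass filter coefficient (hypothetical value).
    threshold:     residual force magnitude [N] above which contact is declared.

    Returns (contact, direction): a bool and a unit vector giving the
    approximate contact force direction, or (False, None) below threshold.
    """
    filtered = np.zeros(3)
    for f in np.asarray(force_samples, dtype=float):
        # exponential low-pass filter to suppress sensor noise
        filtered = alpha * f + (1.0 - alpha) * filtered
    # residual force relative to the no-contact baseline
    residual = filtered - np.asarray(baseline, dtype=float)
    magnitude = np.linalg.norm(residual)
    if magnitude < threshold:
        return False, None
    return True, residual / magnitude

# Usage: a steady 5 N force along [0.6, 0, 0.8] with a zero baseline
contact, direction = detect_tool_contact(
    np.tile([3.0, 0.0, 4.0], (50, 1)), np.zeros(3))
```

A real setup would combine several sensors and also use the torque channels to localize the contact point along the tool; this sketch keeps only the detection-and-direction step the abstract mentions.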
Keywords
multimodal autonomous tool analyses, visual features extraction, robot, visual exploration, geometric features, potential contact areas, manipulation tasks, force torque sensors