
Active Visuo-Tactile Point Cloud Registration for Accurate Pose Estimation of Objects in an Unknown Workspace

2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2021

Abstract
This paper proposes a novel active visuo-tactile methodology for accurately estimating the time-invariant SE(3) pose of objects with autonomous robotic manipulators. The robot, equipped with tactile sensors on its gripper, is guided by a vision-based estimate to actively explore and localize objects in an unknown workspace. It reasons over multiple potential actions and executes the one that maximizes information gain to update the current belief of the object pose. We formulate the pose estimation process as a linear translation-invariant quaternion filter (TIQF) by decoupling the estimation of translation and rotation and expressing the measurement and update models in linear form. Pose estimation is performed sequentially on acquired measurements using a very sparse point cloud (≤ 15 points), since acquiring each measurement through tactile sensing is time-consuming. Furthermore, the proposed method is computationally efficient enough to run an exhaustive uncertainty-based active touch selection strategy in real time, without trading off information gain against execution time. We evaluated the performance of our approach extensively in simulation and on a real robotic system.
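As a rough illustration of the decoupling idea described in the abstract (not the authors' filter implementation), the sketch below assumes matched model/measurement points and estimates the rotation from translation-invariant point differences via a linear quaternion constraint, recovering the translation afterwards from the centroids; the paper instead performs this sequentially in a Kalman-filter form over sparse tactile measurements.

```python
import numpy as np

def quat_left(q):
    """Left-multiplication matrix L(q): q ⊗ p = L(q) @ p, with q = [w, x, y, z]."""
    w, x, y, z = q
    return np.array([[w, -x, -y, -z],
                     [x,  w, -z,  y],
                     [y,  z,  w, -x],
                     [z, -y,  x,  w]])

def quat_right(q):
    """Right-multiplication matrix R(q): p ⊗ q = R(q) @ p."""
    w, x, y, z = q
    return np.array([[w, -x, -y, -z],
                     [x,  w,  z, -y],
                     [y, -z,  w,  x],
                     [z,  y, -x,  w]])

def quat_to_rot(q):
    """Rotation matrix corresponding to a unit quaternion [w, x, y, z]."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)]])

def register_decoupled(src, tgt):
    """Estimate (R, t) such that tgt_i ≈ R @ src_i + t from matched sparse points.

    Differences of point pairs cancel the translation, so the rotation
    quaternion satisfies a linear constraint per pair (translation-invariant
    formulation); the translation is recovered afterwards.
    """
    n = len(src)
    M = np.zeros((4, 4))
    for i in range(n):
        for j in range(i + 1, n):
            a = np.concatenate(([0.0], src[i] - src[j]))  # pure quaternion, model frame
            b = np.concatenate(([0.0], tgt[i] - tgt[j]))  # pure quaternion, sensor frame
            H = quat_left(b) - quat_right(a)              # H @ q = 0 for the true rotation
            M += H.T @ H
    # Rotation: eigenvector of M with the smallest eigenvalue (least-squares solution).
    _, vecs = np.linalg.eigh(M)
    q = vecs[:, 0]
    R = quat_to_rot(q / np.linalg.norm(q))
    # Translation recovered from the centroids once the rotation is known.
    t = tgt.mean(axis=0) - R @ src.mean(axis=0)
    return R, t
```

A quick self-check: generate a few model points, transform them by a known (R, t) plus small noise, and confirm that `register_decoupled` recovers the pose to within the noise level even with fewer than 15 correspondences.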
Keywords
linear translation invariant quaternion filter, sparse point cloud, tactile sensing, information gain, active visuo-tactile point cloud registration, unknown workspace, active visuo-tactile based methodology, autonomous robotic manipulators, tactile sensors, pose estimation process, uncertainty-based active touch selection strategy