Autonomous Assistance for Versatile Grasping with Rescue Robots

2019 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR)

Abstract
The deployment of mobile robots in urban search and rescue (USAR) scenarios often requires manipulation abilities, for example, for clearing debris or opening a door. Conventional teleoperated control of mobile manipulator arms with a high number of degrees of freedom in unknown and unstructured environments is highly challenging and error-prone. Flexible semi-autonomous manipulation capabilities therefore promise valuable support to the operator and may prevent failures during missions. However, most existing approaches are not flexible enough: they either assume a priori known objects or object classes, or require manual selection of grasp poses. In this paper, an approach is presented that combines a segmented 3D model of the scene with grasp pose detection. It enables grasping arbitrary rigid objects based on a geometric segmentation approach that divides the scene into objects. Antipodal grasp candidates sampled by the grasp pose detection are ranked to ensure a robust grasp. The human operator remotely controlling the robot can steer the grasping process with two short interactions in the user interface. Our real-robot experiments demonstrate the capability to grasp various objects in cluttered environments.
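To illustrate the antipodal condition underlying the grasp candidates mentioned in the abstract, the sketch below scores a two-contact grasp on a point cloud with surface normals. This is a minimal illustration, not the paper's actual ranking method; the function name, the friction coefficient `mu`, and the scoring formula are assumptions chosen for clarity.

```python
import numpy as np

def antipodal_score(p1, n1, p2, n2, mu=0.5):
    """Score a candidate grasp given two contact points p1, p2 and their
    outward unit surface normals n1, n2 (illustrative sketch, not the
    paper's method). Returns a value in (0, 1]: higher means the normals
    are better aligned with the grasp axis. Returns 0.0 if either contact
    violates the friction-cone condition for friction coefficient mu."""
    axis = p2 - p1
    axis = axis / np.linalg.norm(axis)   # unit grasp axis from p1 to p2
    cone = np.cos(np.arctan(mu))         # cosine of the friction-cone half-angle
    # For an antipodal grasp, n1 must oppose the axis and n2 must follow it,
    # each within the friction cone.
    a1 = np.dot(n1, -axis)
    a2 = np.dot(n2, axis)
    if a1 < cone or a2 < cone:
        return 0.0
    return 0.5 * (a1 + a2)               # 1.0 for perfectly anti-parallel normals
```

A candidate with normals exactly anti-parallel along the grasp axis scores 1.0; a pair whose normals fall outside the friction cone is rejected with 0.0, which is the kind of filter-then-rank step a grasp pose detector can apply to sampled candidates.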
Keywords
autonomous assistance,versatile grasping,rescue robots,mobile robots,USAR,mobile manipulator arms,semiautonomous manipulation capabilities,segmented 3D model,antipodal grasp candidates,grasping process,robot experiments,user interface,teleoperation control,geometric segmentation,urban search and rescue