Human-to-Robot Handovers via Real-time Video Segmentation.

2023 IEEE International Conference on Cybernetics and Intelligent Systems (CIS) and IEEE Conference on Robotics, Automation and Mechatronics (RAM), 2023

Abstract
Object handover in human-robot collaboration is a complex task, with challenges ranging from perception to robot motion and grasp planning. In this paper, we propose a real-time method for human-to-robot object handover that addresses feasibility, safety, and dynamic adjustment. We use 6-DOF grasp pose detection to compute a feasible grasp from an object point cloud extracted from an RGB-D camera. For dynamic adjustment, the grasp pose is updated in real time to track slight wobbles of the object and the human hand. We also perform collision detection during grasp pose estimation to ensure safety. Our system separates the human and object components in the video input, then applies point cloud processing and segmentation to extract the features the task requires. The system is evaluated on a real-world platform with different objects and scenarios, and the results demonstrate the effectiveness and efficiency of our approach.
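The pipeline the abstract describes (segment the object, back-project the masked depth pixels into a point cloud, estimate a grasp pose, and collision-check it against the human hand) can be sketched as follows. This is a minimal illustration, not the paper's method: the paper uses a learned 6-DOF grasp detector, whereas the PCA-based grasp here is a hypothetical stand-in, and all function names and the clearance threshold are assumptions.

```python
import numpy as np

def extract_object_points(depth, mask, fx, fy, cx, cy):
    """Back-project masked depth pixels into a 3-D point cloud (camera frame),
    using the pinhole intrinsics fx, fy, cx, cy."""
    v, u = np.nonzero(mask)          # pixel coordinates inside the object mask
    z = depth[v, u]                  # depth in metres
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=1)

def estimate_grasp_pose(points):
    """Toy grasp estimate: position at the centroid, approach axis along the
    object's principal direction (PCA). A stand-in for a learned 6-DOF detector."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
    axis = vt[0]                     # dominant direction of the point cloud
    return centroid, axis

def collides_with_hand(grasp_pos, hand_points, clearance=0.05):
    """Reject grasps closer than `clearance` metres to any segmented hand point."""
    d = np.linalg.norm(hand_points - grasp_pos, axis=1)
    return bool((d < clearance).any())
```

In a real-time loop, these steps would run per frame: re-segment, re-extract the cloud, and re-estimate the grasp so the pose tracks small motions of the object and hand, discarding any candidate that fails the collision check.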