Antipodal-points-aware dual-decoding network for robotic visual grasp detection oriented to multi-object clutter scenes.

Expert Syst. Appl. (2023)

Abstract
It is challenging for robots to detect grasps with high accuracy and efficiency in multi-object clutter scenes, especially scenes containing objects with large scale differences. Effective grasping representation, full utilization of data, and well-formulated grasping strategies are critical to solving this problem. To this end, this paper proposes an antipodal-points grasping representation model. Based on this model, the Antipodal-Points-aware Dual-decoding Network (APDNet) is presented for grasp detection in multi-object scenes. APDNet employs an encoding–decoding architecture. In the encoder, a shared encoding strategy based on an Adaptive Gated Fusion Module (AGFM) is proposed to fuse RGB-D multimodal data. Two decoding branches, StartpointNet and EndpointNet, are presented to detect the antipodal points. To better attend to objects at different scales in multi-object scenes, a global multi-view cumulative attention mechanism, called the Global Accumulative Attention Mechanism (GAAM), is also designed for StartpointNet. The proposed method is comprehensively validated and compared on a public dataset and a real robot platform. On the GraspNet-1Billion dataset, it achieves 30.7%, 26.4%, and 12.7% accuracy at a speed of 88.4 FPS on seen, unseen, and novel objects, respectively. On the AUBO robot platform, the detection and grasp success rates are 100.0% and 95.0% in single-object scenes and 97.0% and 90.3% in multi-object scenes, respectively. These results demonstrate that the proposed method achieves state-of-the-art performance with a good balance between accuracy and efficiency.
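The abstract gives no implementation details, but the antipodal-points representation it describes (a grasp defined by a start point and an end point, predicted by the two decoding branches) can be made concrete with a small sketch. The conversion below from a predicted point pair to a grasp center, rotation angle, and gripper width is an illustrative assumption in Python/NumPy, not the paper's actual parameterization:

import numpy as np

def antipodal_to_grasp(start_point, end_point):
    """Convert an antipodal point pair into (center, angle, width).

    start_point / end_point stand in for the outputs of StartpointNet
    and EndpointNet; this exact conversion is an assumption for
    illustration, not the paper's code.
    """
    start = np.asarray(start_point, dtype=float)
    end = np.asarray(end_point, dtype=float)
    center = (start + end) / 2.0                    # grasp center in the image plane
    delta = end - start
    width = float(np.linalg.norm(delta))            # required gripper opening
    angle = float(np.arctan2(delta[1], delta[0]))   # orientation of the grasp axis
    return center, angle, width

# Example: two predicted antipodal points in pixel coordinates.
center, angle, width = antipodal_to_grasp((120, 80), (160, 110))

Likewise, the AGFM is described only as a gated fusion of RGB-D features under a shared encoder. A minimal PyTorch sketch of one plausible gating scheme follows; the gate design here is an assumption, not the published module:

import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    """Minimal sketch of adaptive gated RGB-D feature fusion in the
    spirit of the abstract's AGFM (internals are assumed)."""

    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, rgb_feat, depth_feat):
        # Per-pixel, per-channel weight deciding how much each
        # modality contributes to the fused feature map.
        g = self.gate(torch.cat([rgb_feat, depth_feat], dim=1))
        return g * rgb_feat + (1.0 - g) * depth_feat

A learned per-pixel gate lets the network lean on depth features where color is ambiguous and vice versa, which is the usual motivation for gated multimodal fusion.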
Keywords
Robotic grasping detection, Grasping representation, RGB-D fusion, Multi-object scene