Learning to predict grasping interaction with geometry-aware 3D representations

Abstract
Learning object interaction is an essential problem in artificial intelligence that involves perception, motion planning, and control. In this paper we present our results on the problem of grasp prediction from a single-view RGBD image together with the camera view matrix. We show that learning geometry is at the heart of this type of interaction and propose a geometry-aware grasping procedure in which we first predict a 3D volumetric representation of an object from an image, and then use this representation together with the image and a grasp pose proposal to predict grasp success or failure (see Figure 1). We compare our results with a vanilla approach in which the outcome is predicted as a direct high-order mapping from image and action [3, 4, 2, 1] (see Figure 2a and b).
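To make the two-stage pipeline concrete, the following is a minimal sketch of one way the geometry-aware procedure could be wired up: a shape network maps the RGBD view to a voxel occupancy grid, and an outcome network fuses the image, the predicted volume, and a grasp pose proposal into a success logit. All module names, layer sizes, the 32^3 voxel resolution, and the 7-dimensional pose encoding are illustrative assumptions, not the architecture described in the paper.

```python
# Hedged sketch of a two-stage "geometry-aware" grasp outcome predictor.
# Every architectural choice below is an assumption for illustration only.
import torch
import torch.nn as nn

class ShapePredictor(nn.Module):
    """Stage 1: predict a coarse voxel occupancy grid from a single RGBD view."""
    def __init__(self, voxel_res: int = 32):
        super().__init__()
        self.voxel_res = voxel_res
        self.encoder = nn.Sequential(          # RGBD image (4 channels) -> latent code
            nn.Conv2d(4, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(64 * 4 * 4, 256), nn.ReLU(),
        )
        self.decoder = nn.Linear(256, voxel_res ** 3)   # latent -> occupancy logits

    def forward(self, rgbd):
        logits = self.decoder(self.encoder(rgbd))
        return logits.view(-1, 1, self.voxel_res, self.voxel_res, self.voxel_res)

class GraspOutcomePredictor(nn.Module):
    """Stage 2: fuse image, predicted 3D volume, and grasp pose -> success logit."""
    def __init__(self, voxel_res: int = 32, pose_dim: int = 7):
        super().__init__()
        self.img_enc = nn.Sequential(
            nn.Conv2d(4, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(32 * 16, 128), nn.ReLU())
        self.vox_enc = nn.Sequential(
            nn.Conv3d(1, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(2), nn.Flatten(),
            nn.Linear(16 * 8, 128), nn.ReLU())
        self.head = nn.Sequential(
            nn.Linear(128 + 128 + pose_dim, 128), nn.ReLU(), nn.Linear(128, 1))

    def forward(self, rgbd, voxels, grasp_pose):
        feat = torch.cat([self.img_enc(rgbd), self.vox_enc(voxels), grasp_pose], dim=1)
        return self.head(feat)      # raw logit; apply sigmoid for success probability

if __name__ == "__main__":
    rgbd = torch.randn(2, 4, 128, 128)        # batch of single-view RGBD images
    pose = torch.randn(2, 7)                  # e.g. gripper position + quaternion (assumed)
    shape_net = ShapePredictor()
    grasp_net = GraspOutcomePredictor()
    voxels = torch.sigmoid(shape_net(rgbd))   # predicted occupancy in [0, 1]
    success_logit = grasp_net(rgbd, voxels, pose)
    print(success_logit.shape)                # torch.Size([2, 1])
```

By contrast, the vanilla baseline in the comparison would drop the intermediate voxel branch and map the image and action directly to an outcome.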