HOPE-Net: A Graph-Based Model for Hand-Object Pose Estimation

2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)

Cited by 180 | Viewed 113
Abstract
Hand-object pose estimation (HOPE) aims to jointly detect the poses of both a hand and a held object. In this paper, we propose a lightweight model called HOPE-Net which jointly estimates hand and object pose in 2D and 3D in real-time. Our network uses a cascade of two adaptive graph convolutional neural networks, one to estimate 2D coordinates of the hand joints and object corners, followed by another to convert the 2D coordinates to 3D. Our experiments show that through end-to-end training of the full network, we achieve better accuracy for both the 2D and 3D coordinate estimation problems. The proposed 2D-to-3D graph convolution-based model could be applied to other 3D landmark detection problems, where it is possible to first predict the 2D keypoints and then transform them to 3D.
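To make the 2D-to-3D stage of the cascade concrete, here is a minimal NumPy sketch of an "adaptive" graph-convolution layer, in which node features (keypoint coordinates) are mixed through a learnable adjacency matrix and then linearly projected. The node count (21 hand joints + 8 object corners), layer widths, and initialization are illustrative assumptions, not the authors' exact architecture.

```python
import numpy as np

N_NODES = 29  # assumed: 21 hand joints + 8 object bounding-box corners

rng = np.random.default_rng(0)

def adaptive_graph_conv(x, adjacency, weight):
    """One illustrative layer: y = relu(normalize(A) @ x @ W).

    `adjacency` stands in for the learnable ("adaptive") graph structure;
    row-normalizing it makes each node average over its neighbors.
    """
    a = adjacency / adjacency.sum(axis=1, keepdims=True)
    return np.maximum(a @ x @ weight, 0.0)

# Toy 2D-to-3D lift: widen the 2D inputs, then project to 3D coordinates.
x2d = rng.standard_normal((N_NODES, 2))               # predicted 2D keypoints
A = np.abs(rng.standard_normal((N_NODES, N_NODES)))   # adaptive adjacency (kept positive here)
W1 = rng.standard_normal((2, 16))                     # hidden projection
W2 = rng.standard_normal((16, 3))                     # final head to 3D

h = adaptive_graph_conv(x2d, A, W1)
x3d = h @ W2
print(x3d.shape)  # (29, 3): one 3D coordinate per hand joint / object corner
```

In the paper's cascade, a first graph network of this kind predicts the 2D keypoints and a second lifts them to 3D, with both trained end-to-end.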
Keywords
HOPE-Net,graph-based model,hand-object pose estimation,held object,lightweight model,adaptive graph convolutional neural networks,estimate 2D coordinates,hand joints,object corners,end-to-end training,3D coordinate estimation problems,3D graph convolution-based model,3D landmark detection problems