Multimodal 3D Human Pose Estimation from a Single Image

2019 International Conference on 3D Vision (3DV)

Abstract
In this paper, we propose a method for estimating 3D human pose from a single RGB image. In contrast to methods that provide either point estimates via coordinate regression or unimodal predictions of joint locations, our approach predicts joint locations using multimodal distributions. In addition, we apply a data-driven approach to learn the conditional dependencies of the relative positions of joints. Our end-to-end approach takes as input images with either 2D or 3D labels and performs on par with or better than the state of the art on the Human3.6M and MPII datasets.
Keywords
MDN, mixture density network, Human3.6M, deep convolutional networks, deep learning, multimodal distribution, multiple hypotheses, CNN
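
The keywords point to a mixture density network (MDN) head over per-joint 3D coordinates. Below is a minimal sketch, assuming a standard isotropic Gaussian MDN placed on top of CNN features; the names MDNHead and mdn_nll, the feature dimension, the joint count, and the number of mixture components are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of a mixture density network (MDN)
# head that turns image features into a multimodal distribution over 3D joint
# locations. MDNHead, feat_dim, num_joints and num_components are illustrative.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class MDNHead(nn.Module):
    def __init__(self, feat_dim=2048, num_joints=17, num_components=5):
        super().__init__()
        self.num_joints = num_joints
        self.num_components = num_components
        # Per-joint mixture weights, per-component 3D means, and isotropic scales.
        self.pi = nn.Linear(feat_dim, num_joints * num_components)
        self.mu = nn.Linear(feat_dim, num_joints * num_components * 3)
        self.sigma = nn.Linear(feat_dim, num_joints * num_components)

    def forward(self, feats):
        B, J, K = feats.shape[0], self.num_joints, self.num_components
        log_pi = F.log_softmax(self.pi(feats).view(B, J, K), dim=-1)
        mu = self.mu(feats).view(B, J, K, 3)
        sigma = F.softplus(self.sigma(feats)).view(B, J, K) + 1e-4  # keep scales positive
        return log_pi, mu, sigma

def mdn_nll(log_pi, mu, sigma, target):
    """Negative log-likelihood of ground-truth joints (B, J, 3) under the mixture."""
    diff = target.unsqueeze(2) - mu                       # (B, J, K, 3)
    sq_dist = (diff ** 2).sum(dim=-1)                     # (B, J, K)
    log_prob = (-0.5 * sq_dist / sigma ** 2
                - 3.0 * torch.log(sigma)
                - 1.5 * math.log(2.0 * math.pi))
    return -torch.logsumexp(log_pi + log_prob, dim=-1).mean()

# Usage sketch: feats = backbone(images); loss = mdn_nll(*MDNHead()(feats), gt_joints)
```

At inference, such a head supports the "multiple hypotheses" keyword: each mixture component's mean can be read out as a candidate 3D pose, or the highest-weight component can be taken as the single prediction.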