
Pixels, Voxels, and Views: A Study of Shape Representations for Single View 3D Object Shape Prediction

arXiv (Cornell University), 2018
Abstract
The goal of this paper is to compare surface-based and volumetric 3D object shape representations, as well as viewer-centered and object-centered reference frames for single-view 3D shape prediction. We propose a new algorithm for predicting depth maps from multiple viewpoints, with a single depth or RGB image as input. By modifying the network and the way models are evaluated, we can directly compare the merits of voxels vs. surfaces and viewer-centered vs. object-centered for familiar vs. unfamiliar objects, as predicted from RGB or depth images. Among our findings, we show that surface-based methods outperform voxel representations for objects from novel classes and produce higher resolution outputs. We also find that using viewer-centered coordinates is advantageous for novel objects, while object-centered representations are better for more familiar objects. Interestingly, the coordinate frame significantly affects the shape representation learned, with object-centered placing more importance on implicitly recognizing the object category and viewer-centered producing shape representations with less dependence on category recognition.
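To make the setup described in the abstract concrete, the sketch below shows one plausible form of a single-image, viewer-centered multi-view depth predictor: an encoder maps one RGB image to a latent code, and a decoder emits a depth map for each of several surrounding viewpoints. This is an illustrative assumption, not the authors' architecture; the class name MultiViewDepthPredictor, the layer sizes, the 128x128 input, and the choice of six output views are all hypothetical.

```python
# Minimal sketch (assumed architecture, not the paper's code): predict depth maps
# for several fixed viewpoints around the object from a single RGB image,
# expressed in a viewer-centered frame.
import torch
import torch.nn as nn

class MultiViewDepthPredictor(nn.Module):
    def __init__(self, num_views=6, in_channels=3):
        super().__init__()
        # Encoder: 128x128 RGB image -> 512-d latent shape code
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64x64
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),            # 32x32
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),           # 16x16
            nn.Conv2d(128, 256, 4, stride=2, padding=1), nn.ReLU(),          # 8x8
            nn.Flatten(),
            nn.Linear(256 * 8 * 8, 512), nn.ReLU(),
        )
        # Decoder: latent code -> one 32x32 depth map per output viewpoint
        self.fc = nn.Linear(512, 256 * 4 * 4)
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.ReLU(), # 8x8
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),  # 16x16
            nn.ConvTranspose2d(64, num_views, 4, stride=2, padding=1),       # 32x32
        )

    def forward(self, rgb):
        z = self.encoder(rgb)
        x = self.fc(z).view(-1, 256, 4, 4)
        return self.decoder(x)  # (batch, num_views, 32, 32) predicted depth maps

# Usage: one 128x128 RGB image in, six viewer-centered depth maps out.
model = MultiViewDepthPredictor(num_views=6)
depths = model(torch.randn(1, 3, 128, 128))
print(depths.shape)  # torch.Size([1, 6, 32, 32])
```

An object-centered variant would differ only in the target frame of the supervision: the ground-truth depth maps would be rendered in a canonical object pose rather than relative to the input camera, which is one way to realize the comparison the paper draws between the two reference frames.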
Keywords
single-view 3D object shape prediction, volumetric 3D object shape representations, object-centered reference frames, depth maps, RGB images, depth images, surface-based methods vs. voxel representations, viewer-centered coordinates, object-centered representations, category recognition, shape representation learning