
Elite360D: Towards Efficient 360 Depth Estimation Via Semantic- and Distance-Aware Bi-Projection Fusion

CVPR 2024

Abstract
360 depth estimation has recently received great attention for 3D reconstruction owing to its omnidirectional field of view (FoV). Recent approaches are predominantly focused on cross-projection fusion with geometry-based re-projection: they fuse 360 images with equirectangular projection (ERP) and another projection type, e.g., cubemap projection, to estimate depth in the ERP format. However, these methods suffer from 1) limited local receptive fields, making it hard to capture large-FoV scenes, and 2) prohibitive computational cost, caused by the complex cross-projection fusion module design. In this paper, we propose Elite360D, a novel framework that takes as input the ERP image and an icosahedron projection (ICOSAP) point set, which is undistorted and spatially continuous. Elite360D is superior in its capacity to learn a representation from a local-with-global perspective. Alongside a flexible ERP image encoder, it includes an ICOSAP point encoder and a Bi-projection Bi-attention Fusion (B2F) module (totally ~1M parameters). Specifically, the ERP image encoder can take various perspective image-trained backbones (e.g., ResNet, Transformer) to extract local features. The point encoder extracts global features from the ICOSAP. Then, the B2F module captures the semantic- and distance-aware dependencies between each pixel of the ERP feature map and the entire ICOSAP feature set. Without a specialized backbone design or an obvious increase in computational cost, Elite360D outperforms the prior arts on several benchmark datasets.
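The abstract describes the B2F module as attending from each ERP pixel feature to the entire set of ICOSAP point features. The following is a minimal sketch of such a cross-attention fusion, assuming simple dot-product (semantic) attention and a residual merge; the distance-aware branch, learned projections, and all shapes and function names are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def b2f_fusion(erp_feat, icosap_feat):
    """Fuse local ERP pixel features with global ICOSAP point features.

    erp_feat:    (H*W, C) flattened ERP feature map (local features)
    icosap_feat: (N, C)   ICOSAP point feature set (global features)
    Returns:     (H*W, C) fused features.
    """
    c = erp_feat.shape[1]
    # Each ERP pixel queries every ICOSAP point (semantic-aware affinity).
    attn = softmax(erp_feat @ icosap_feat.T / np.sqrt(c), axis=-1)  # (H*W, N)
    # Aggregate global context per pixel and merge residually.
    global_ctx = attn @ icosap_feat                                 # (H*W, C)
    return erp_feat + global_ctx
```

In this sketch the global ICOSAP context is injected into every ERP pixel at O(H*W*N*C) cost, which stays cheap when the point set is small relative to the ERP resolution.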