3DiffTection: 3D Object Detection with Geometry-Aware Diffusion Features
CVPR 2024 (arXiv 2023)
Abstract
We present 3DiffTection, a state-of-the-art method for 3D object detection
from single images, leveraging features from a 3D-aware diffusion model.
Annotating large-scale image data for 3D detection is resource-intensive and
time-consuming. Recently, pretrained large image diffusion models have become
prominent as effective feature extractors for 2D perception tasks. However,
these features are initially trained on paired text and image data, which are
not optimized for 3D tasks, and often exhibit a domain gap when applied to the
target data. Our approach bridges these gaps through two specialized tuning
strategies: geometric and semantic. For geometric tuning, we fine-tune a
diffusion model to perform novel view synthesis conditioned on a single image,
by introducing a novel epipolar warp operator. This task meets two essential
criteria: the necessity for 3D awareness and reliance solely on posed image
data, which is readily available (e.g., from videos) and does not require
manual annotation. For semantic refinement, we further train the model on
target data with detection supervision. Both tuning phases employ ControlNet to
preserve the integrity of the original feature capabilities. In the final step,
we harness these enhanced capabilities to conduct a test-time prediction
ensemble across multiple virtual viewpoints. Through our methodology, we obtain
3D-aware features that are tailored for 3D detection and excel in identifying
cross-view point correspondences. Consequently, our model emerges as a powerful
3D detector, substantially surpassing prior methods; e.g., it outperforms
Cube-RCNN, a precedent in single-view 3D detection, by 9.43% in AP3D on the
Omni3D-ARKitScenes dataset. Furthermore, 3DiffTection showcases robust data
efficiency and generalization to cross-domain data.
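
To make the geometric tuning step concrete, below is a minimal sketch of an epipolar feature-warp of the kind the abstract alludes to: source-view features are warped into a target view by sweeping depth hypotheses along each target ray, whose projections trace the epipolar line in the source image. All names here (`warp_features_epipolar`, `n_depths`, the mean aggregation) are illustrative assumptions, not the paper's exact operator, which is embedded inside the diffusion network and may aggregate samples with learned weights.

```python
# A hedged sketch of epipolar feature warping, assuming a pinhole camera.
import torch
import torch.nn.functional as F

def warp_features_epipolar(src_feat, K, R, t, n_depths=32,
                           d_min=0.5, d_max=10.0):
    """Warp source-view features to a target view by sampling along
    epipolar lines (equivalently, over a set of depth hypotheses).

    src_feat: (B, C, H, W) source-view features
    K:        (B, 3, 3) shared intrinsics
    R, t:     (B, 3, 3), (B, 3, 1) relative pose, target -> source
    returns:  (B, C, H, W) features aggregated over depth hypotheses
    """
    B, C, H, W = src_feat.shape
    device = src_feat.device

    # Pixel grid of the *target* view in homogeneous coordinates.
    ys, xs = torch.meshgrid(
        torch.arange(H, device=device, dtype=torch.float32),
        torch.arange(W, device=device, dtype=torch.float32),
        indexing="ij")
    pix = torch.stack([xs, ys, torch.ones_like(xs)], dim=0).reshape(1, 3, -1)

    # Back-project target rays; sweeping depths along each ray traces the
    # corresponding epipolar line when projected into the source view.
    rays = torch.linalg.inv(K) @ pix                                # (B,3,HW)
    depths = torch.linspace(d_min, d_max, n_depths, device=device)

    warped = []
    for d in depths:
        cam_pts = R @ (rays * d) + t          # 3D hypotheses in source frame
        proj = K @ cam_pts
        uv = proj[:, :2] / proj[:, 2:3].clamp(min=1e-6)
        # Normalize to [-1, 1] for grid_sample.
        u = uv[:, 0] / (W - 1) * 2 - 1
        v = uv[:, 1] / (H - 1) * 2 - 1
        grid = torch.stack([u, v], dim=-1).reshape(B, H, W, 2)
        warped.append(F.grid_sample(src_feat, grid, align_corners=True))

    # Simple mean over hypotheses; the real operator could be learned.
    return torch.stack(warped, dim=0).mean(dim=0)
```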
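Both tuning phases rely on the ControlNet pattern to preserve the pretrained features. The sketch below shows the core idea under standard assumptions: the pretrained block is frozen, a trainable copy processes the conditioning signal, and a zero-initialized projection makes the control branch a no-op at initialization. `ControlledBlock` and its interface are hypothetical simplifications of the full diffusion U-Net.

```python
# A minimal sketch of the ControlNet pattern used in both tuning phases.
import copy
import torch
import torch.nn as nn

class ControlledBlock(nn.Module):
    def __init__(self, base_block: nn.Module, channels: int):
        super().__init__()
        self.base = base_block
        self.ctrl = copy.deepcopy(base_block)   # trainable copy
        for p in self.base.parameters():
            p.requires_grad_(False)             # freeze the pretrained path
        # Zero-initialized 1x1 conv: the control branch starts as identity,
        # so the original feature capabilities are untouched at step 0.
        self.zero_conv = nn.Conv2d(channels, channels, kernel_size=1)
        nn.init.zeros_(self.zero_conv.weight)
        nn.init.zeros_(self.zero_conv.bias)

    def forward(self, x, cond):
        # Frozen path keeps the original diffusion features intact;
        # the control path injects conditioning (assumed already projected
        # to x's shape) as a residual.
        return self.base(x) + self.zero_conv(self.ctrl(x + cond))
```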
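Finally, a hedged sketch of the test-time prediction ensemble over virtual viewpoints: the detector is run on several synthesized views and the resulting 3D boxes are mapped back into the original camera frame before fusion. `detector`, `render_virtual_view`, and `nms_3d` are hypothetical stand-ins, and the (N, 7) box parameterization and sign conventions are assumptions that may differ from the paper's.

```python
# A hedged sketch of a test-time ensemble across virtual viewpoints.
import math
import torch

def rotation_y(deg: float) -> torch.Tensor:
    """Rotation matrix about the camera y-axis (yaw), angle in degrees."""
    th = math.radians(deg)
    c, s = math.cos(th), math.sin(th)
    return torch.tensor([[c, 0.0, s],
                         [0.0, 1.0, 0.0],
                         [-s, 0.0, c]])

def ensemble_detect(image, K, detector, render_virtual_view, nms_3d,
                    yaw_degrees=(-5.0, 0.0, 5.0)):
    """Detect from several virtual viewpoints and fuse the 3D boxes."""
    all_boxes, all_scores = [], []
    for deg in yaw_degrees:
        R = rotation_y(deg)
        # Synthesize the rotated view (e.g., with an epipolar warp as above).
        virt_img = render_virtual_view(image, K, R)
        boxes, scores = detector(virt_img, K)   # (N, 7): x,y,z,w,l,h,yaw
        boxes = boxes.clone()
        # Map predictions back to the original frame: p_orig = R^T p_virt.
        boxes[:, :3] = boxes[:, :3] @ R         # row-vector form of R^T @ p
        boxes[:, 6] = boxes[:, 6] + math.radians(deg)  # heading offset
        all_boxes.append(boxes)
        all_scores.append(scores)
    # 3D NMS (or score averaging) fuses duplicate detections across views.
    return nms_3d(torch.cat(all_boxes), torch.cat(all_scores))
```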