Segment3D: Learning Fine-Grained Class-Agnostic 3D Segmentation without Manual Labels
CoRR (2023)
Abstract
Current 3D scene segmentation methods are heavily dependent on manually
annotated 3D training datasets. Such manual annotations are labor-intensive,
and often lack fine-grained details. Importantly, models trained on this data
typically struggle to recognize object classes beyond the annotated classes,
i.e., they do not generalize well to unseen domains and require additional
domain-specific annotations. In contrast, 2D foundation models demonstrate
strong generalization and impressive zero-shot abilities, inspiring us to
incorporate these characteristics from 2D models into 3D models. Therefore, we
explore the use of image segmentation foundation models to automatically
generate training labels for 3D segmentation. We propose Segment3D, a method
for class-agnostic 3D scene segmentation that produces high-quality 3D
segmentation masks. It improves over existing 3D segmentation models
(especially on fine-grained masks), and enables easily adding new training data
to further boost the segmentation performance – all without the need for
manual training labels.