JM3D & JM3D-LLM: Elevating 3D Understanding with Joint Multi-modal Cues
CoRR (2023)
Abstract
3D understanding is increasingly important in computer vision,
autonomous driving, and robotics. However, the prevailing approach of
straightforwardly transferring 2D alignment strategies to the
3D domain encounters three distinct challenges: (1) Information Degradation:
This arises from the alignment of 3D data with mere single-view 2D images and
generic texts, neglecting the need for multi-view images and detailed
subcategory texts. (2) Insufficient Synergy: These strategies align 3D
representations to image and text features individually, hampering the overall
optimization for 3D models. (3) Underutilization: The fine-grained information
inherent in the learned representations is often not fully exploited,
indicating a potential loss in detail. To address these issues, we introduce
JM3D, a comprehensive approach integrating point cloud, text, and image. Key
contributions include the Structured Multimodal Organizer (SMO), enriching
vision-language representation with multiple views and hierarchical text, and
the Joint Multi-modal Alignment (JMA), combining language understanding with
visual representation. Our advanced model, JM3D-LLM, marries 3D representation
with large language models via efficient fine-tuning. Evaluations on ModelNet40
and ScanObjectNN establish JM3D's superiority. The superior performance of
JM3D-LLM further underscores the effectiveness of our representation transfer
approach. Our code and models are available at https://github.com/Mr-Neko/JM3D.