Any2Point: Empowering Any-modality Large Models for Efficient 3D Understanding
arXiv (2024)
Abstract
Large foundation models have recently emerged as a prominent focus of
interest, attaining superior performance across a wide range of scenarios. Due to the
scarcity of 3D data, many efforts have been made to adapt pre-trained
transformers from vision to 3D domains. However, such 2D-to-3D approaches are
still limited, due to the potential loss of spatial geometries and high
computation cost. More importantly, their frameworks are mainly designed for 2D
models, lacking a general any-to-3D paradigm. In this paper, we introduce
Any2Point, a parameter-efficient method to empower any-modality large models
(vision, language, audio) for 3D understanding. Given a frozen transformer from
any source modality, we propose a 3D-to-any (1D or 2D) virtual projection
strategy that correlates the input 3D points to the original 1D or 2D positions
within the source modality. This mechanism enables us to assign each 3D token
with a positional encoding paired with the pre-trained model, which avoids 3D
geometry loss caused by the true projection and better motivates the
transformer for 3D learning with 1D/2D positional priors. Then, within each
transformer block, we insert an any-to-3D guided adapter module for
parameter-efficient fine-tuning. The adapter incorporates prior spatial
knowledge from the source modality to guide the local feature aggregation of 3D
tokens, driving the semantic adaptation of any-modality transformers. We
conduct extensive experiments to showcase the effectiveness and efficiency of
our method. Code and models are released at
https://github.com/Ivan-Tang-3D/Any2Point.
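
The released repository contains the full method; purely as an illustration of the 2D case of the virtual projection described above, the sketch below shows how 3D token centers could be mapped to virtual 2D positions and used to sample a frozen pre-trained positional-embedding table. The class name `Virtual2DPosEmbed`, the axis-aligned choice of virtual planes, and the bilinear sampling are assumptions for illustration, not the paper's implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class Virtual2DPosEmbed(nn.Module):
    """Assigns each 3D token a positional encoding taken from a frozen 2D model
    via virtual (projection-free) orthographic views; a hypothetical sketch of
    the 3D-to-2D virtual projection idea described in the abstract."""

    def __init__(self, pretrained_pos_embed: torch.Tensor, grid_size: int):
        super().__init__()
        dim = pretrained_pos_embed.shape[-1]
        # Frozen 2D positional table reshaped to an image-like grid: (1, dim, H, W).
        self.register_buffer(
            "pos_grid", pretrained_pos_embed.t().reshape(1, dim, grid_size, grid_size)
        )
        # Three axis-aligned virtual planes (xy, yz, xz) stand in for the views.
        perms = torch.tensor([[0, 1, 2], [1, 2, 0], [2, 0, 1]])
        self.register_buffer("views", torch.stack([torch.eye(3)[p] for p in perms]))

    def forward(self, xyz: torch.Tensor) -> torch.Tensor:
        # xyz: (B, N, 3) token centers, assumed normalized to [-1, 1].
        B, N, _ = xyz.shape
        per_view = []
        for R in self.views:
            uv = (xyz @ R.t())[..., :2]        # drop depth -> virtual 2D position
            grid = uv.unsqueeze(2)             # grid_sample expects (B, H_out, W_out, 2)
            pe = F.grid_sample(
                self.pos_grid.expand(B, -1, -1, -1), grid,
                mode="bilinear", align_corners=False,
            )                                  # (B, dim, N, 1)
            per_view.append(pe.squeeze(-1).transpose(1, 2))  # (B, N, dim)
        # Average the encodings gathered from all virtual views.
        return torch.stack(per_view).mean(dim=0)

For a frozen ViT whose positional table covers, say, a 14x14 patch grid, something like `Virtual2DPosEmbed(vit_pos_embed, grid_size=14)(token_xyz)` would return one encoding per 3D token to add to the token features before the frozen blocks; the 1D (language or audio) case would sample a 1D table along a single projected coordinate instead.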