Unifying 3D Vision-Language Understanding via Promptable Queries
CoRR (2024)
Abstract
A unified model for 3D vision-language (3D-VL) understanding is expected to
take various scene representations and perform a wide range of tasks in a 3D
scene. However, a considerable gap exists between existing methods and such a
unified model, due to the independent use of scene representations and the
insufficient exploration of 3D multi-task training. In this paper, we introduce
PQ3D, a unified model capable of using Promptable Queries to tackle a wide
range of 3D-VL tasks, from low-level instance segmentation to high-level
reasoning and planning. This is achieved through three key innovations: (1)
unifying various 3D scene representations (i.e., voxels, point clouds,
multi-view images) into a shared 3D coordinate space by segment-level grouping,
(2) an attention-based query decoder for task-specific information retrieval
guided by prompts, and (3) universal output heads for different tasks to
support multi-task training. Tested across ten diverse 3D-VL datasets, PQ3D
demonstrates impressive performance on these tasks, setting new records on most
benchmarks. Particularly, PQ3D improves the state-of-the-art on ScanNet200 by
1.8% (AP25), ScanRefer by 5.4% (acc@0.5), Multi3DRefer by 11.7% (F1@0.5), and
Scan2Cap by 13.4% (CIDEr@0.5). Moreover, PQ3D supports flexible inference with
individual or combined forms of available 3D representations, e.g., solely
voxel input.
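
To make the three key ideas concrete, below is a minimal PyTorch sketch, not the authors' implementation: all module names, feature dimensions, and head shapes are illustrative assumptions. Per-point features are averaged into segments (segment-level grouping), per-representation projections align voxel, point, and image features in one shared space, learnable queries cross-attend to scene and prompt tokens, and shared output heads emit segmentation masks, grounding scores, and text logits.

```python
# Hypothetical sketch of PQ3D's three ideas; dimensions and names are assumptions.
import torch
import torch.nn as nn


def segment_pool(feats: torch.Tensor, seg_ids: torch.Tensor, n_segments: int):
    """Average per-point features into per-segment features (segment-level grouping).

    feats: (N, d) per-point features; seg_ids: (N,) segment index per point.
    """
    pooled = feats.new_zeros(n_segments, feats.shape[-1])
    pooled.index_add_(0, seg_ids, feats)
    counts = torch.bincount(seg_ids, minlength=n_segments).clamp(min=1)
    return pooled / counts.unsqueeze(-1)


class PromptableQueryDecoder(nn.Module):
    def __init__(self, d_model=256, n_heads=8, n_layers=2,
                 n_queries=100, vocab_size=1000):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(n_queries, d_model))
        # One projection per representation, mapping backbone features
        # (assumed pre-pooled to segment level) into a shared space.
        self.proj = nn.ModuleDict({
            name: nn.Linear(dim, d_model)
            for name, dim in {"voxel": 96, "point": 128, "image": 768}.items()
        })
        layer = nn.TransformerDecoderLayer(d_model, n_heads, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, n_layers)
        # Universal heads shared across tasks (shapes are illustrative).
        self.mask_head = nn.Linear(d_model, d_model)      # query-segment masks
        self.score_head = nn.Linear(d_model, 1)           # grounding scores
        self.text_head = nn.Linear(d_model, vocab_size)   # generation logits

    def forward(self, segment_feats: dict, prompt: torch.Tensor):
        # segment_feats: {name: (B, S, dim)}; prompt: (B, T, d_model) text tokens.
        fused = sum(self.proj[k](v) for k, v in segment_feats.items())  # (B, S, d)
        B = fused.shape[0]
        q = self.queries.unsqueeze(0).expand(B, -1, -1)    # (B, Q, d)
        memory = torch.cat([fused, prompt], dim=1)         # scene + prompt tokens
        q = self.decoder(q, memory)                        # prompt-guided retrieval
        masks = torch.einsum("bqd,bsd->bqs", self.mask_head(q), fused)
        scores = self.score_head(q).squeeze(-1)
        return masks, scores, self.text_head(q)


if __name__ == "__main__":
    # Toy scene: 10k points grouped into 300 segments; voxel/image features
    # are assumed already pooled to the same 300 segments.
    pts, seg_ids = torch.randn(10000, 128), torch.randint(0, 300, (10000,))
    feats = {"voxel": torch.randn(1, 300, 96),
             "point": segment_pool(pts, seg_ids, 300).unsqueeze(0),
             "image": torch.randn(1, 300, 768)}
    model = PromptableQueryDecoder()
    masks, scores, text_logits = model(feats, torch.randn(1, 12, 256))
    print(masks.shape, scores.shape, text_logits.shape)
    # torch.Size([1, 100, 300]) torch.Size([1, 100]) torch.Size([1, 100, 1000])
```

Under these assumptions, every task reads from the same queries through shared heads, so switching tasks only changes the prompt tokens; this is the property that makes the multi-task training described in the abstract possible.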