TMSDNet: Transformer with multi-scale dense network for single and multi-view 3D reconstruction

Computer Animation and Virtual Worlds (2024)

Abstract
3D reconstruction is a long-standing problem. Recently, a number of studies have used transformers for 3D reconstruction and demonstrated strong performance. However, existing transformer-based methods tend either to establish the mapping between the 2D image and the 3D voxel space directly with transformers or to rely solely on the transformer's feature extraction capability. They ignore the crucial role of deep multi-scale representations of the object in the voxel feature domain, which can provide extensive global shape and local detail information in a multi-scale manner. In this article, we propose TMSDNet (transformer with multi-scale dense network), a novel transformer-based framework for single-view and multi-view 3D reconstruction that addresses this problem. Our combined-transformer block, a canonical encoder-decoder architecture, extracts spatially ordered voxel features from the input image; a multi-scale residual attention module then extracts multi-scale global features from them in parallel. Furthermore, a residual dense attention block is introduced for deep local feature extraction and adaptive fusion. Finally, the reconstructed objects are produced by the voxel reconstruction block. Experimental results on benchmarks such as the ShapeNet and Pix3D datasets demonstrate that TMSDNet substantially outperforms existing state-of-the-art reconstruction methods.
Keywords
deep learning,multi-scale,single-view and multi-view 3D reconstruction,transformer
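The abstract describes a pipeline in which voxel features from the combined-transformer block are processed by parallel multi-scale branches and fused with a residual connection. Below is a minimal NumPy sketch of that parallel multi-scale residual fusion idea only; the pooling scales, nearest-neighbour upsampling, and uniform fusion weights are illustrative assumptions, not the paper's actual module (which uses learned attention).

```python
import numpy as np

def avg_pool3d(x, k):
    # Naive average pooling with stride k over a (C, D, H, W) voxel grid.
    c, d, h, w = x.shape
    x = x[:, :d - d % k, :h - h % k, :w - w % k]
    x = x.reshape(c, d // k, k, h // k, k, w // k, k)
    return x.mean(axis=(2, 4, 6))

def upsample3d(x, k):
    # Nearest-neighbour upsampling by factor k along each spatial axis.
    return x.repeat(k, axis=1).repeat(k, axis=2).repeat(k, axis=3)

def multi_scale_residual(x, scales=(1, 2, 4)):
    # Parallel branches at several scales, averaged and fused with a
    # residual connection (stand-in for the learned attention fusion).
    fused = np.zeros_like(x)
    for s in scales:
        branch = x if s == 1 else upsample3d(avg_pool3d(x, s), s)
        fused += branch / len(scales)
    return x + fused

feats = np.random.rand(8, 32, 32, 32)  # hypothetical (C, D, H, W) voxel features
out = multi_scale_residual(feats)      # same shape as the input features
```

The residual add keeps the original single-scale features intact, so the multi-scale branches only need to contribute corrective global-shape information.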