A Monocular Depth Estimation Method for Indoor-Outdoor Scenes Based on Vision Transformer

Jianghai Shuai, Ming Li, Yongkang Feng, Yang Li, Sidan Du

2023 IEEE 14th Annual Ubiquitous Computing, Electronics & Mobile Communication Conference (UEMCON), 2023

Abstract
In the field of computer vision, monocular depth estimation has garnered significant attention as a research direction. However, current depth estimation methods often overlook the impact of depth-range variations between indoor and outdoor scenes, which limits the model's generalization ability. To achieve high-precision depth estimation across different depth ranges, we propose a new method. We employ the pretrained DINOv2 model as the encoder, combined with a CNN-based decoder, to strengthen the network's capacity for extracting global information from indoor and outdoor scenes. We also design a mapping module that transforms diverse depth ranges into a unified 0-1 range, allowing the model to adapt effectively to both indoor and outdoor scenes. We validate our method on the DIODE dataset, which comprises mixed indoor and outdoor scenes. Experimental results demonstrate that our method achieves higher depth estimation accuracy and stronger generalization when dealing with scenes of diverse depth ranges.
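
The abstract does not describe the mapping module in detail. As a rough illustration of the underlying idea, normalizing scene-dependent metric depth ranges into a unified 0-1 range and mapping predictions back, the following minimal PyTorch sketch assumes a simple min-max mapping; the class name DepthRangeMapper and the d_min/d_max bounds are hypothetical, not taken from the paper.

import torch
import torch.nn as nn

class DepthRangeMapper(nn.Module):
    """Illustrative mapping between a scene-specific metric depth range
    [d_min, d_max] and the unified [0, 1] range the network predicts in.
    This is a sketch assuming plain min-max normalization; the paper's
    actual mapping module may differ."""

    def __init__(self, d_min: float = 0.1, d_max: float = 80.0):
        super().__init__()
        self.d_min = d_min
        self.d_max = d_max

    def to_unit(self, depth_metric: torch.Tensor) -> torch.Tensor:
        # Metric depth -> [0, 1] target used as supervision.
        d = depth_metric.clamp(self.d_min, self.d_max)
        return (d - self.d_min) / (self.d_max - self.d_min)

    def to_metric(self, depth_unit: torch.Tensor) -> torch.Tensor:
        # Network output in [0, 1] -> metric depth for evaluation.
        return depth_unit.clamp(0.0, 1.0) * (self.d_max - self.d_min) + self.d_min

if __name__ == "__main__":
    mapper = DepthRangeMapper(d_min=0.1, d_max=80.0)  # outdoor-like range (assumed bounds)
    pred_unit = torch.rand(1, 1, 240, 320)            # stand-in for the decoder's 0-1 output
    pred_metric = mapper.to_metric(pred_unit)
    print(pred_metric.min().item(), pred_metric.max().item())

In such a scheme, indoor and outdoor samples would use different d_min/d_max bounds while the encoder-decoder always predicts in the shared 0-1 range, which is one plausible reading of how a single model adapts to both depth regimes.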
Keywords
monocular depth estimation,indoor and outdoor scenes,pretrained model,vision transformer