Deep eyes: Joint depth inference using monocular and binocular cues.

Zhang Chen, Xinqing Guo, Siyuan Li, Yang Yang, Jingyi Yu

Neurocomputing (2021)

Abstract
The human visual system relies on both monocular focusness cues and binocular stereo cues to gain effective 3D perception. Correspondingly, depth from focus/defocus (DfF/DfD) and stereo matching are the two most studied passive depth sensing schemes, traditionally solved in separate tracks. However, the two techniques are essentially complementary: the monocular cue from DfF/DfD can robustly handle repetitive textures and occlusions that are problematic for stereo matching, whereas the binocular cue from stereo matching is insensitive to defocus blur and can resolve a large depth range. In this paper, we emulate human perception and present unified learning-based techniques to conduct hybrid DfF/DfD and stereo matching. We first construct a comprehensive focal stack dataset synthesized by depth-guided light field rendering. Next, we propose different network architectures to suit various inputs, including a focal stack, a stereo image pair, a binocular focal stack, a focus-defocus image pair, and a defocus-stereo image triplet. We also exploit different methods of connecting the separate networks into an optimized solution that produces high-fidelity disparity maps. For the experiments, we further explore different hardware setups to capture both monocular and binocular depth cues. Results show that our new learning-based hybrid techniques significantly improve the accuracy and robustness of depth estimation.
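To illustrate the monocular focusness cue the abstract refers to, here is a minimal classical depth-from-focus sketch (not the paper's learned networks): each slice of a focal stack is scored with a hand-crafted focus measure (a modified-Laplacian variant is assumed here), and the per-pixel depth proxy is the index of the sharpest slice. All names and the toy data are illustrative only.

```python
import numpy as np

def focus_measure(img):
    # Modified-Laplacian focus measure: sum of absolute second
    # differences along y and x (sharper regions score higher).
    d2y = np.abs(2 * img[1:-1, :] - img[:-2, :] - img[2:, :])
    d2x = np.abs(2 * img[:, 1:-1] - img[:, :-2] - img[:, 2:])
    return d2y[:, 1:-1] + d2x[1:-1, :]

def depth_from_focus(stack):
    # stack: (n_slices, H, W). Returns, per pixel, the index of the
    # focal slice where the focus measure peaks -- a proxy for depth.
    scores = np.stack([focus_measure(s) for s in stack])
    return np.argmax(scores, axis=0)

# Toy focal stack: a textured slice (in focus) between two blurred copies.
rng = np.random.default_rng(0)
sharp = rng.random((32, 32))
blurred = sum(np.roll(np.roll(sharp, dy, 0), dx, 1)
              for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0
stack = np.stack([blurred, sharp, blurred])
index_map = depth_from_focus(stack)  # mostly 1: the sharp slice wins
```

This hand-crafted scoring is exactly what the paper's networks replace with learned features, which lets the monocular cue be fused with stereo matching costs in a single model.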
Keywords
Depth from Focus, Depth from Defocus, Stereo Matching, Deep Learning, Light Field