Differentiable Diffusion for Dense Depth Estimation from Multi-view Images

2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2021)

Abstract
We present a method to estimate dense depth by optimizing a sparse set of points such that their diffusion into a depth map minimizes a multi-view reprojection error from RGB supervision. We optimize point positions, depths, and weights with respect to the loss by differential splatting that models points as Gaussians with analytic transmittance. Further, we develop an efficient optimization routine that can simultaneously optimize the 50k+ points required for complex scene reconstruction. We validate our routine using ground truth data and show high reconstruction quality. Then, we apply this to light field and wider-baseline images via self-supervision, and show improvements in both average and outlier error for depth maps diffused from inaccurate sparse points. Finally, we compare qualitative and quantitative results to image processing and deep learning methods.
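To make the core idea concrete, below is a minimal sketch of splatting sparse, optimizable points into a dense depth map and fitting them by gradient descent. It makes several simplifying assumptions not taken from the paper: isotropic Gaussian splats, a softmax-normalized weighted average standing in for the analytic transmittance model, and direct ground-truth depth supervision rather than the multi-view reprojection loss. All names (`splat_depth`, `sigma`, etc.) are illustrative, not the authors' implementation.

```python
# Minimal sketch, NOT the paper's method: isotropic Gaussians, softmax
# compositing instead of analytic transmittance, and L1 depth supervision
# instead of a multi-view reprojection loss.
import torch

def splat_depth(xy, depth, weight, H, W, sigma=4.0):
    """Diffuse sparse points into a dense (H, W) depth map.

    xy     : (N, 2) point positions in pixel coordinates (optimizable)
    depth  : (N,)   per-point depths (optimizable)
    weight : (N,)   per-point confidence logits (optimizable)
    """
    ys = torch.arange(H, dtype=xy.dtype, device=xy.device)
    xs = torch.arange(W, dtype=xy.dtype, device=xy.device)
    gy, gx = torch.meshgrid(ys, xs, indexing="ij")            # (H, W) each
    grid = torch.stack([gx, gy], dim=-1).reshape(-1, 2)       # (H*W, 2)

    # Squared distance from every pixel to every point: (H*W, N)
    d2 = ((grid[:, None, :] - xy[None, :, :]) ** 2).sum(-1)
    # Gaussian splat weights modulated by per-point confidence logits
    w = torch.softmax(-d2 / (2 * sigma ** 2) + weight[None, :], dim=1)
    return (w @ depth).reshape(H, W)

# Toy optimization against a known depth ramp (stand-in for the
# self-supervised reprojection loss used with real multi-view images).
H, W, N = 64, 64, 200
target = torch.linspace(1.0, 5.0, H).view(H, 1).expand(H, W)

xy = (torch.rand(N, 2) * torch.tensor([W - 1.0, H - 1.0])).requires_grad_(True)
depth = torch.full((N,), 3.0, requires_grad=True)
weight = torch.zeros(N, requires_grad=True)

opt = torch.optim.Adam([xy, depth, weight], lr=0.05)
for step in range(500):
    opt.zero_grad()
    pred = splat_depth(xy, depth, weight, H, W)
    loss = (pred - target).abs().mean()
    loss.backward()
    opt.step()
```

Because the splat is an explicit differentiable function of point positions, depths, and weights, gradients of the loss flow back to all three sets of parameters, which is the property the paper exploits at the scale of 50k+ points.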
Keywords
differentiable diffusion, dense depth estimation, multiview images, multiview reprojection error, RGB supervision, point positions, differential splatting, analytic transmittance, efficient optimization routine, complex scene reconstruction, ground truth data, self supervision, average error, outlier error, depth maps, sparse points, image processing, deep learning methods, depth map minimization