Self-Aligning Depth-regularized Radiance Fields for Asynchronous RGB-D Sequences
arXiv (2022)
Abstract
It has been shown that learning radiance fields with depth rendering and
depth supervision can effectively improve the quality and convergence of view
synthesis. However, this paradigm requires the input RGB-D sequences to be
synchronized, hindering its use in UAV city-modeling scenarios. Since
high-speed flight introduces asynchrony between the RGB and depth images,
we propose a novel time-pose function: an implicit network that maps
timestamps to SE(3) elements. To simplify training, we also design a
joint optimization scheme that learns the large-scale depth-regularized
radiance fields and the time-pose function together. Our algorithm
consists of three steps: (1) time-pose function fitting, (2) radiance field
bootstrapping, and (3) joint pose-error compensation and radiance field
refinement. In addition, we propose a large synthetic dataset with diverse
controlled mismatches and ground truth to evaluate this new problem setting
systematically. Through extensive experiments, we demonstrate that our method
outperforms baselines without regularization. We also show qualitatively
improved results on a real-world asynchronous RGB-D sequence captured by a
drone. Code, data, and models will be made publicly available.
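The central idea of the abstract, a time-pose function mapping timestamps to SE(3), can be illustrated with a minimal sketch. The paper's actual architecture and training details are not given here, so the layer sizes, names, and axis-angle pose parameterization below are purely illustrative assumptions: a tiny NumPy MLP takes a scalar timestamp and outputs a 6-vector (axis-angle rotation plus translation), which is then assembled into a 4x4 SE(3) matrix.

```python
import numpy as np

def rodrigues(w):
    """Convert an axis-angle vector w (shape (3,)) to a 3x3 rotation matrix."""
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3)
    k = w / theta  # unit rotation axis
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    # Rodrigues' rotation formula
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

class TimePoseFunction:
    """Hypothetical t -> SE(3) network: a 1-16-6 MLP (sizes are assumptions)."""
    def __init__(self, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(scale=0.1, size=(16, 1))
        self.b1 = np.zeros(16)
        self.W2 = rng.normal(scale=0.1, size=(6, 16))
        self.b2 = np.zeros(6)

    def __call__(self, t):
        h = np.tanh(self.W1 @ np.array([t]) + self.b1)  # hidden features
        out = self.W2 @ h + self.b2                     # [axis-angle | translation]
        T = np.eye(4)
        T[:3, :3] = rodrigues(out[:3])  # rotation part
        T[:3, 3] = out[3:]              # translation part
        return T

# Query the pose at an arbitrary timestamp (e.g. a depth frame's capture time).
f = TimePoseFunction()
T = f(0.5)
```

Because the network emits an axis-angle rotation that is mapped through Rodrigues' formula, every output is a valid SE(3) element by construction; in the paper's setting such a function would be fit to the posed RGB stream and then queried at the depth frames' timestamps to compensate the asynchrony.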