Local Supports Global: Deep Camera Relocalization With Sequence Enhancement

2019 IEEE/CVF International Conference on Computer Vision (ICCV 2019)

Cited by 63
Abstract
We propose to leverage the local information in image sequences to support global camera relocalization. In contrast to previous methods that regress global poses from single images, we exploit the spatial-temporal consistency in sequential images to alleviate the uncertainty caused by visual ambiguities, by incorporating a visual odometry (VO) component. Specifically, we introduce two effective steps called content-augmented pose estimation and motion-based refinement. The content-augmentation step alleviates the uncertainty of pose estimation by augmenting the observation based on the co-visibility in local maps built by the VO stream. In addition, the motion-based refinement is formulated as a pose graph, in which the camera poses are further optimized by adopting the relative poses provided by the VO component as additional motion constraints; global consistency can thus be guaranteed. Experiments on the public indoor 7-Scenes and outdoor Oxford RobotCar benchmark datasets demonstrate that, benefiting from the local information inherent in sequences, our approach outperforms state-of-the-art methods, especially in challenging cases such as insufficient texture, highly repetitive textures, similar appearances, and over-exposure.
Keywords
local supports global, sequence enhancement, local information, image sequence, global camera relocalization, global poses, spatial-temporal consistency, sequential images, visual ambiguities, visual odometry component, effective steps, motion-based refinement, content-augmentation step, pose estimation, local maps, VO stream, pose graph, camera poses, relative poses, VO component, additional motion constraints, global consistency, public indoor 7-Scenes, outdoor Oxford RobotCar benchmark datasets
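The motion-based refinement described in the abstract fuses noisy absolute pose predictions with accurate VO relative motions in a pose-graph-style least-squares problem. The sketch below is not the paper's actual solver (which optimizes full 6-DoF camera poses); it is a simplified, translation-only illustration under assumed names (`refine_poses`, `global_est`, `vo_rel`, `weight` are all hypothetical), showing how relative-motion constraints pull a sequence of absolute estimates into a globally consistent trajectory.

```python
import numpy as np

def refine_poses(global_est, vo_rel, weight=100.0):
    """Refine absolute positions using VO relative motions (simplified sketch).

    global_est: (N, d) noisy absolute position estimates (the "global" stream)
    vo_rel:     (N-1, d) relative motions from VO, assumed locally accurate
    weight:     trust placed in VO constraints relative to global estimates

    Minimizes  sum_i ||x_i - g_i||^2 + weight * sum_i ||(x_{i+1} - x_i) - r_i||^2,
    a linear least-squares problem, so we solve it in closed form.
    """
    n, d = global_est.shape
    # Stack unary (absolute) and pairwise (relative) constraints into A x = b.
    # Each coordinate dimension shares the same A, so lstsq solves all at once.
    A = np.zeros((n + (n - 1), n))
    b = np.zeros((n + (n - 1), d))
    # Unary terms: x_i should stay near the global estimate g_i.
    A[:n] = np.eye(n)
    b[:n] = global_est
    # Pairwise terms: x_{i+1} - x_i should match the VO motion r_i,
    # scaled by sqrt(weight) so the squared residual carries the weight.
    s = np.sqrt(weight)
    for i in range(n - 1):
        A[n + i, i] = -s
        A[n + i, i + 1] = s
        b[n + i] = s * vo_rel[i]
    refined, *_ = np.linalg.lstsq(A, b, rcond=None)
    return refined
```

With a large `weight`, the refined trajectory reproduces the VO motion almost exactly while the absolute estimates only anchor its overall placement, which mirrors the abstract's idea of using relative poses as additional motion constraints on globally regressed poses.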