Localization for Ground Robots: On Manifold Representation, Integration, Re-Parameterization, and Optimization

arXiv (2019)

Abstract
In this paper, we focus on localizing ground robots by probabilistically fusing measurements from wheel odometry and a monocular camera. For ground robots, wheel odometry is widely used in localization tasks, especially in applications operating in planar environments. However, since wheel odometry only provides 2D motion estimates, it is extremely challenging to use it for accurate full 6D pose (3D position and 3D rotation) estimation. Traditional methods for 6D localization either approximate sensor or motion models, at the cost of reduced accuracy, or rely on other sensors, e.g., an inertial measurement unit (IMU), to obtain full 6D motion. By contrast, in this paper we propose a novel probabilistic framework that uses wheel odometry measurements for high-precision 6D pose estimation, requiring only the wheel odometry and a monocular camera. Specifically, we propose novel methods for i) formulating a motion manifold via a parametric representation, ii) performing manifold-based 6D integration of the wheel odometry measurements, and iii) periodically re-parameterizing the manifold equations to reduce error. Finally, we propose a complete localization algorithm based on a manifold-assisted sliding-window estimator, fusing measurements from the wheel odometry, a monocular camera, and optionally an IMU. Through extensive simulated and real-world experiments, we show that the proposed algorithm outperforms a number of state-of-the-art vision-based localization algorithms by a significant margin, especially when deployed in large-scale, complicated environments.
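The abstract does not spell out the manifold parameterization or the integration equations. As a minimal illustrative sketch only, one can assume the local ground surface is a quadratic height field z = f(x, y) with parameters c, and lift a planar wheel-odometry pose (x, y, yaw) to a 6D pose by reading the height from the surface and the roll/pitch from its normal; the function names and the quadratic form below are assumptions, not the paper's actual formulation:

```python
import math

def manifold_height(c, x, y):
    """Hypothetical quadratic manifold: z = c0 + c1*x + c2*y + c3*x^2 + c4*x*y + c5*y^2."""
    return c[0] + c[1] * x + c[2] * y + c[3] * x * x + c[4] * x * y + c[5] * y * y

def manifold_gradient(c, x, y):
    """Partial derivatives (dz/dx, dz/dy) of the quadratic surface."""
    dzdx = c[1] + 2.0 * c[3] * x + c[4] * y
    dzdy = c[2] + c[4] * x + 2.0 * c[5] * y
    return dzdx, dzdy

def lift_to_6d(c, x, y, yaw):
    """Lift a 2D wheel-odometry pose (x, y, yaw) to a 6D pose constrained to the
    manifold: height from the surface, roll/pitch from the surface slope."""
    z = manifold_height(c, x, y)
    dzdx, dzdy = manifold_gradient(c, x, y)
    # Pitch tilts the body along x; roll along y, relative to the surface normal.
    pitch = math.atan2(-dzdx, 1.0)
    roll = math.atan2(dzdy, math.hypot(1.0, dzdx))
    return (x, y, z, roll, pitch, yaw)

# On a flat plane at z = 0.5, roll and pitch stay zero:
print(lift_to_6d([0.5, 0, 0, 0, 0, 0], 1.0, 2.0, 0.3))
# On a plane inclined along x (dz/dx = 1), pitch becomes -45 degrees:
print(lift_to_6d([0.0, 1.0, 0, 0, 0, 0], 0.0, 0.0, 0.0))
```

Re-parameterizing the manifold periodically, as the abstract describes, would then amount to re-fitting the coefficients c to the recent trajectory so that linearization errors in this local surface model do not accumulate.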