Constrained Filtering-based Fusion of Images, Events, and Inertial Measurements for Pose Estimation

2020 IEEE International Conference on Robotics and Automation (ICRA), 2020

Abstract
In this paper, we propose a novel filtering-based method that fuses events from a dynamic vision sensor (DVS), images, and inertial measurements to estimate camera poses. A DVS is a bio-inspired sensor that generates events triggered by brightness changes; by virtue of its independent pixels and high dynamic range, it can compensate for the drawbacks of a conventional camera. Specifically, we focus on optical flow obtained from both a stream of events and intensity images: the former is essentially a differential quantity, whereas the latter is a pixel difference over a much longer time interval than that of events. This natural characteristic motivates us to model optical flow estimated from events directly, but to use feature tracks for images, in the filter design. An inequality constraint is incorporated into our method, since the inverse scene depth is by definition greater than zero. Furthermore, we evaluate the proposed method on the benchmark DVS dataset and on a dataset collected by the authors. The results reveal that the presented algorithm reduces the position error by 49.9% on average compared to a state-of-the-art filtering-based estimator, and achieves comparable accuracy using events alone.
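The positivity constraint on inverse scene depth described above can be enforced in a Kalman-style filter by projecting the updated estimate back onto the feasible set. Below is a minimal scalar sketch of that idea; the function name, variable names, and clamping threshold are illustrative assumptions, not the paper's actual formulation.

```python
def constrained_inverse_depth_update(rho, P, z, H, R, eps=1e-6):
    """One scalar Kalman update of inverse depth `rho` with prior
    variance `P`, measurement `z`, measurement model `H`, and noise
    variance `R`, followed by projection onto the constraint rho > 0.

    NOTE: a simplified illustration of constrained filtering, not the
    paper's estimator.
    """
    # Standard scalar Kalman update
    S = H * P * H + R          # innovation variance
    K = P * H / S              # Kalman gain
    rho_post = rho + K * (z - H * rho)
    P_post = (1.0 - K * H) * P
    # Projection step: inverse depth must remain strictly positive
    rho_post = max(rho_post, eps)
    return rho_post, P_post
```

An unconstrained update driven by a noisy measurement can push the inverse-depth estimate below zero (a physically meaningless value); the projection step simply clips it back into the feasible region, which is the simplest form of inequality-constrained filtering.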
Keywords
constrained filtering-based fusion, inertial measurements, pose estimation, camera poses, bio-inspired sensor, brightness changes, independent pixels, high dynamic range, optical flow, intensity images, pixel difference, inverse scene-depth, DVS dataset, filtering-based estimator, dynamic vision sensor