Vision based robot localization by ground to satellite matching in GPS-denied situations

Intelligent Robots and Systems (2014)

Cited by 90
Abstract
This paper studies the problem of matching images captured from an unmanned ground vehicle (UGV) to those from a satellite or high-flying vehicle. We focus on situations where the UGV navigates in remote areas with few man-made structures. This is a difficult problem due to the drastic change in perspective between the ground and aerial imagery and the lack of environmental features for image comparison. We do not rely on GPS, which may be jammed or uncertain. We propose a two-step approach: (1) the UGV images are warped to obtain a bird's eye view of the ground, and (2) this view is compared to a grid of satellite locations using whole-image descriptors. We analyze the performance of a variety of descriptors for different satellite map sizes and various terrain and environment types. We incorporate the air-ground matching into a particle-filter framework for localization using the best-performing descriptor. The results show that vision-based UGV localization from satellite maps is not only possible, but often provides better position estimates than GPS estimates, enabling us to improve the location estimates of Google Street View.
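The two-step approach described above (warp the UGV image toward a bird's eye view, then compare whole-image descriptors against a grid of satellite tiles) can be illustrated with a short sketch. The Python/OpenCV code below is a minimal, assumption-laden illustration rather than the paper's implementation: the placeholder homography, the grayscale-histogram descriptor, and the synthetic tile grid are stand-ins chosen for demonstration; in the paper, the homography would come from the camera geometry and the histogram would be replaced by the best-performing descriptor from the evaluation.

```python
# Minimal sketch of the two-step air-ground matching pipeline.
# Assumes OpenCV (cv2) and NumPy; the homography, descriptor choice, and
# tile grid below are illustrative placeholders, not the paper's setup.
import cv2
import numpy as np


def warp_to_birds_eye(ground_img, homography, out_size=(256, 256)):
    """Step 1: warp a forward-looking UGV image toward a top-down view.
    In practice `homography` comes from camera extrinsics and a
    ground-plane assumption; here it is simply passed in."""
    return cv2.warpPerspective(ground_img, homography, out_size)


def whole_image_descriptor(img, bins=32):
    """Stand-in whole-image descriptor: a normalized grayscale histogram.
    The paper compares several descriptors; any of them slots in here."""
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    hist = cv2.calcHist([gray], [0], None, [bins], [0, 256]).flatten()
    return (hist / (hist.sum() + 1e-9)).astype(np.float32)


def score_satellite_grid(query_desc, tile_descs):
    """Step 2: compare the warped view's descriptor against descriptors of
    satellite map tiles; higher score means a better match."""
    return np.array([
        1.0 - cv2.compareHist(query_desc, d, cv2.HISTCMP_BHATTACHARYYA)
        for d in tile_descs
    ])


if __name__ == "__main__":
    # Synthetic data so the sketch runs without image files.
    rng = np.random.default_rng(0)
    ground = rng.integers(0, 256, (480, 640, 3), dtype=np.uint8)
    tiles = [rng.integers(0, 256, (256, 256, 3), dtype=np.uint8)
             for _ in range(9)]

    H = np.eye(3, dtype=np.float64)  # placeholder homography
    birdseye = warp_to_birds_eye(ground, H)
    scores = score_satellite_grid(
        whole_image_descriptor(birdseye),
        [whole_image_descriptor(t) for t in tiles],
    )
    print("best-matching tile index:", int(scores.argmax()))
```

In the full system, the per-tile scores computed in step 2 would serve as the measurement likelihoods weighting the particles in the localization filter mentioned in the abstract.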
Keywords
Global Positioning System,artificial satellites,autonomous aerial vehicles,image matching,particle filtering (numerical methods),path planning,robot vision,GPS estimates,GPS-denied situations,Google Street View,UGV images,UGV navigation,air-ground matching,bird eye ground view,environmental features,high-flying vehicle captured images,image matching problem,man-made structures,particle-filter framework,satellite captured images,satellite locations,satellite map sizes,satellite maps,unmanned ground vehicle captured images,vision based robot localization,vision-based UGV localization,whole-image descriptors