Instantaneous Robot Self-Localisation and Motion Estimation with Omnidirectional Vision

msra (2008)

Abstract
This paper presents two related methods for autonomous visual guidance of robots: localisation by trilateration, and inter-frame motion estimation. Both methods use co-axial omnidirectional stereopsis (omnistereo), which returns the range r to objects or guiding points detected in the images. The trilateration method achieves self-localisation using r from the three nearest objects at known positions. The inter-frame motion estimation is more general, being able to use any features in an unknown environment. The guiding points are detected automatically on the basis of their perceptual significance, and thus they need neither special markings nor placement at known locations. The inter-frame motion estimation does not require previous motion history, making it well suited to detecting acceleration (within 1/20th of a second) and thus to supporting dynamic models of the robot's motion, which will gain in importance when autonomous robots achieve useful speeds. An initial estimate of the robot's rotation ω (the visual compass) is obtained from the angular optic flow in an omnidirectional image. A new non-iterative optic flow method is then used to estimate the translation of the robot. A large number of guiding points are typically detected and matched in most real images, however, and each such point provides a solution for the robot's translation. The solutions are combined by a robust clustering algorithm, Clumat, that reduces rotation and translation errors. Simulator experiments are included for all the presented methods. Real images obtained from an autonomously moving Scitos G5 robot were used to test the inter-frame rotation and to show that the presented vision methods are applicable to real images in real robotics scenarios.
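The trilateration step admits a compact closed-form illustration in the plane. The sketch below is a minimal example, not the paper's implementation: it recovers a 2-D position from ranges r to three landmarks at known positions, as an omnistereo sensor would supply them. The function name trilaterate and the NumPy formulation are assumptions made for this example.

```python
import numpy as np

def trilaterate(landmarks, ranges):
    """Estimate a 2-D position from ranges to three known landmarks.

    landmarks: (3, 2) array of known landmark positions (x_i, y_i)
    ranges:    (3,) array of measured ranges r_i (e.g. from omnistereo)
    """
    (x1, y1), (x2, y2), (x3, y3) = landmarks
    r1, r2, r3 = ranges
    # Subtracting the circle equation of landmark 1 from those of
    # landmarks 2 and 3 cancels the quadratic terms, leaving the
    # 2x2 linear system A @ [x, y] = b.
    A = 2.0 * np.array([[x2 - x1, y2 - y1],
                        [x3 - x1, y3 - y1]])
    b = np.array([r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2,
                  r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2])
    return np.linalg.solve(A, b)

# Example: landmarks at (0,0), (4,0), (0,3); exact ranges measured
# from (1,1) recover the position (1, 1).
pos = trilaterate(np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 3.0]]),
                  np.array([np.sqrt(2.0), np.sqrt(10.0), np.sqrt(5.0)]))
```

With noisy ranges or more than three landmarks, the same linearisation extends to an over-determined system solved by least squares.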
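The visual compass, an initial rotation estimate from the angular optic flow of an omnidirectional image, can likewise be sketched under simplifying assumptions. The toy version below treats a pure rotation as a circular horizontal shift of the unwrapped panorama and finds that shift by FFT cross-correlation; it stands in for, and is not, the paper's optic-flow formulation, and the name visual_compass is invented for the example.

```python
import numpy as np

def visual_compass(prev_pano, curr_pano):
    """Rough inter-frame rotation (degrees) from two unwrapped panoramas."""
    # Collapse each panorama (rows x columns) to a 1-D column-intensity
    # profile; a pure rotation about the camera axis only shifts columns.
    a = prev_pano.mean(axis=0)
    b = curr_pano.mean(axis=0)
    a = a - a.mean()
    b = b - b.mean()
    # Circular cross-correlation via the FFT; its peak locates the
    # column shift that best aligns the two frames.
    corr = np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(b))).real
    n = len(a)
    shift = int(np.argmax(corr))
    if shift > n // 2:            # wrap to a signed shift
        shift -= n
    # One image width spans 360 degrees; the sign convention depends
    # on the direction in which the panorama was unwrapped.
    return -shift * 360.0 / n
```

A whole-image shift like this conflates some translation-induced flow with rotation, which is consistent with the abstract's pipeline: the rotation estimate is only initial, and the subsequent translation estimation and Clumat clustering reduce the remaining rotation and translation errors.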
Keywords
omnidirectional vision, omniflow, motion estimation, self-localisation, omnistereo, simulation experiment