Towards Visual Ego-motion Learning in Robots

2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2017

Cited 49 | Views 37
Abstract
Many model-based Visual Odometry (VO) algorithms have been proposed in the past decade, often restricted to a particular type of camera optics or to the underlying motion manifold observed. We envision robots that can learn and perform these tasks in a minimally supervised setting as they gain more experience. To this end, we propose a fully trainable solution to visual ego-motion estimation for varied camera optics: a learning architecture that maps observed optical-flow vectors to an ego-motion density estimate via a Mixture Density Network (MDN). By modeling the architecture as a Conditional Variational Autoencoder (C-VAE), our model can provide introspective reasoning and prediction for ego-motion-induced scene flow. The proposed model is also especially amenable to bootstrapped ego-motion learning in robots, where supervision for a particular camera sensor can be obtained from standard navigation-based sensor-fusion strategies (GPS/INS and wheel-odometry fusion). Through experiments, we show the utility of the proposed approach in enabling self-supervised learning of visual ego-motion estimation in autonomous robots.
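The core mapping the abstract describes — optical-flow vectors in, a mixture density over ego-motion out — can be sketched as a small NumPy forward pass. This is an illustrative toy, not the paper's implementation: the layer sizes, the choice of K = 3 diagonal-Gaussian components, the 4-D flow features, and the 6-DoF output are all assumptions made here for the sketch.

```python
import numpy as np

def mdn_forward(flow, params, K=3, out_dim=6):
    """Map per-pixel optical-flow features to a K-component diagonal-Gaussian
    mixture over 6-DoF ego-motion (illustrative sketch, not the paper's net)."""
    W1, b1, W2, b2 = params
    h = np.tanh(flow @ W1 + b1)                 # hidden layer
    out = h @ W2 + b2                           # raw mixture parameters
    logits = out[:, :K]
    pi = np.exp(logits - logits.max(1, keepdims=True))
    pi /= pi.sum(1, keepdims=True)              # mixture weights (softmax)
    mu = out[:, K:K + K * out_dim].reshape(-1, K, out_dim)            # means
    sigma = np.exp(out[:, K + K * out_dim:]).reshape(-1, K, out_dim)  # stds > 0
    return pi, mu, sigma

def mdn_nll(pi, mu, sigma, y):
    """Negative log-likelihood of ego-motion targets y under the mixture,
    computed with a log-sum-exp for numerical stability."""
    y = y[:, None, :]                           # broadcast over components
    comp = -0.5 * ((y - mu) / sigma) ** 2 - np.log(sigma) - 0.5 * np.log(2 * np.pi)
    log_comp = comp.sum(-1) + np.log(pi)        # per-component log density
    m = log_comp.max(1, keepdims=True)
    return -(m.squeeze(1) + np.log(np.exp(log_comp - m).sum(1)))

# Tiny random network: 4-D flow features (x, y, dx, dy) -> 39 raw parameters
# (K weights + K*6 means + K*6 log-stds). Weights are random, so the density
# is meaningless; training would minimize mdn_nll against fused GPS/INS poses.
rng = np.random.default_rng(0)
K, D, H, out_dim = 3, 4, 16, 6
params = (rng.normal(0, 0.1, (D, H)), np.zeros(H),
          rng.normal(0, 0.1, (H, K + 2 * K * out_dim)),
          np.zeros(K + 2 * K * out_dim))
flow = rng.normal(size=(5, D))                  # 5 sampled flow vectors
pi, mu, sigma = mdn_forward(flow, params, K, out_dim)
```

In the bootstrapped setting the abstract mentions, the supervision `y` fed to `mdn_nll` would come from the robot's own navigation stack (GPS/INS and wheel-odometry fusion) rather than hand labels.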
Keywords
robots,Visual Odometry algorithms,underlying motion manifold,visual ego-motion estimation,varied camera optics,visual ego-motion learning architecture,optical flow vectors,ego-motion density estimate,ego-motion induced scene-flow,self-supervised learning,VO,Conditional Variational Autoencoder,C-VAE,Mixture Density Network,MDN,introspective reasoning,ego-motion estimation,autonomous robots,GPS,INS,wheel-odometry fusion