Learning Deep Generative Spatial Models for Mobile Robots

2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2017

Cited by 50 | Views 60
Abstract
We propose a new probabilistic framework that allows mobile robots to autonomously learn deep, generative models of their environments that span multiple levels of abstraction. Unlike traditional approaches that combine engineered models for low-level features, geometry, and semantics, our approach leverages recent advances in Sum-Product Networks (SPNs) and deep learning to learn a single, universal model of the robot's spatial environment. Our model is fully probabilistic and generative, and represents a joint distribution over spatial information ranging from low-level geometry to semantic interpretations. Once learned, it is capable of solving a wide range of tasks: from semantic classification of places, uncertainty estimation, and novelty detection, to generation of place appearances based on semantic information and prediction of missing data in partial observations. Experiments on laser-range data from a mobile robot show that the proposed universal model obtains performance superior to state-of-the-art models fine-tuned to one specific task, such as Generative Adversarial Networks (GANs) or SVMs.
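The abstract's key claim is that a single Sum-Product Network represents a joint distribution and can answer many queries, including prediction from partial observations. The sketch below is a toy SPN, not the paper's learned model: the structure, variable names (`cell0`, `cell1` standing in for laser-range cells), and all probabilities are illustrative assumptions. It shows the mechanism that makes SPNs tractable: evaluating the same network computes either a full joint probability or, by treating missing variables' leaves as 1, an exact marginal.

```python
# Toy sum-product network (SPN) sketch. Structure and probabilities are
# illustrative assumptions, not the model learned in the paper.

class Leaf:
    """Bernoulli leaf over one binary variable (e.g. an occupancy cell)."""
    def __init__(self, var, p):
        self.var, self.p = var, p

    def value(self, evidence):
        x = evidence.get(self.var)      # None => variable unobserved
        if x is None:
            return 1.0                  # marginalizes the missing variable
        return self.p if x == 1 else 1.0 - self.p

class Product:
    """Product node: factorizes over disjoint variable scopes."""
    def __init__(self, children):
        self.children = children

    def value(self, evidence):
        v = 1.0
        for c in self.children:
            v *= c.value(evidence)
        return v

class Sum:
    """Sum node: mixture of children with normalized weights."""
    def __init__(self, weighted_children):  # [(weight, child), ...]
        self.weighted_children = weighted_children

    def value(self, evidence):
        return sum(w * c.value(evidence) for w, c in self.weighted_children)

# Two hypothetical place classes (say, corridor vs. office) as a mixture
# over two occupancy cells.
corridor = Product([Leaf("cell0", 0.9), Leaf("cell1", 0.1)])
office   = Product([Leaf("cell0", 0.2), Leaf("cell1", 0.8)])
root = Sum([(0.5, corridor), (0.5, office)])

full    = root.value({"cell0": 1, "cell1": 0})  # joint P(cell0=1, cell1=0) = 0.425
partial = root.value({"cell0": 1})              # marginal P(cell0=1) = 0.55
```

Both queries traverse the network once, which is why a single SPN can serve classification, novelty detection, and completion of partial observations without separate task-specific models.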
Keywords
single model, spatial information, mobile robot, learning deep generative spatial, sum-product networks, deep learning, low-level features, generative models, deep models, probabilistic framework, mobile robots, deep generative spatial models, semantic information, semantic interpretations, low-level geometry