3D Lidar Reconstruction with Probabilistic Depth Completion for Robotic Navigation

IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2022

Abstract
Safe motion planning in robotics requires planning into space which has been verified to be free of obstacles. However, obtaining such environment representations using lidars is challenging due to the sparsity of their depth measurements. We present a learning-aided 3D lidar reconstruction framework that upsamples sparse lidar depth measurements with the aid of overlapping camera images so as to generate denser reconstructions with more definitively free space than can be achieved with the raw lidar measurements alone. We use a neural network with an encoder-decoder structure to predict dense depth images along with depth uncertainty estimates which are fused using a volumetric mapping system. We conduct experiments on real-world outdoor datasets captured using a handheld sensing device and a legged robot. Using input data from a 16-beam lidar mapping a building network, our experiments showed that the amount of estimated free space was increased by more than 40% with our approach. We also show that our approach trained on a synthetic dataset generalises well to real-world outdoor scenes without additional fine-tuning. Finally, we demonstrate how motion planning tasks can benefit from these denser reconstructions.
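To make the described pipeline concrete, the sketch below shows one plausible way to set up an encoder-decoder depth-completion network that takes a camera image plus a sparse lidar depth image and outputs a dense depth image together with a per-pixel uncertainty estimate. It assumes PyTorch; the layer sizes, the log-variance parameterisation of uncertainty, and the loss function are illustrative assumptions, not the authors' exact architecture.

```python
# Minimal sketch of a depth-completion network with an uncertainty head.
# Assumes PyTorch; layer sizes, names, and the log-variance parameterisation
# are illustrative assumptions, not the paper's exact model.
import torch
import torch.nn as nn


class DepthCompletionNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: RGB (3 ch) + sparse lidar depth (1 ch) -> latent features
        self.encoder = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Decoder: upsample back to the input resolution
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 16, kernel_size=4, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Two heads: dense depth and per-pixel log-variance (uncertainty)
        self.depth_head = nn.Conv2d(16, 1, kernel_size=3, padding=1)
        self.logvar_head = nn.Conv2d(16, 1, kernel_size=3, padding=1)

    def forward(self, rgb, sparse_depth):
        x = torch.cat([rgb, sparse_depth], dim=1)
        features = self.decoder(self.encoder(x))
        depth = torch.relu(self.depth_head(features))  # depths are non-negative
        log_var = self.logvar_head(features)           # per-pixel uncertainty
        return depth, log_var


def depth_nll_loss(pred_depth, log_var, gt_depth, valid_mask):
    """Heteroscedastic regression loss: pixels the network is unsure about are
    down-weighted by their predicted variance (a common choice, assumed here)."""
    residual = (pred_depth - gt_depth) ** 2
    nll = residual * torch.exp(-log_var) + log_var
    return nll[valid_mask].mean()
```

In a setup along these lines, the predicted per-pixel variance is the quantity a volumetric mapping system could use to weight each completed depth measurement when fusing it into the 3D reconstruction.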
Keywords
16-beam lidar mapping,definitively free space,dense depth images,denser reconstructions,depth uncertainty estimates,encoder-decoder structure,environment representations,estimated free space,learning-aided 3D lidar reconstruction framework,legged robot,lidars,motion planning tasks,neural network,overlapping camera images,probabilistic depth completion,raw lidar measurements,real-world outdoor datasets,robotic navigation,robotics,safe motion planning,sparse lidar depth measurements,volumetric mapping system