Gaussian Mixture Models For Temporal Depth Fusion

2017 IEEE Winter Conference on Applications of Computer Vision (WACV 2017)

Cited by 5 | Viewed 44
Abstract
Sensing the 3D environment of a moving robot is essential for collision avoidance. Most 3D sensors produce dense depth maps, which suffer from imperfections caused by various environmental factors; temporal fusion of depth maps is crucial to overcome these imperfections. Temporal fusion is traditionally performed in 3D space with voxel data structures, but it can also be carried out in image space, with potential benefits in reduced memory and computational cost for applications such as reactive collision avoidance for micro air vehicles. In this paper, we present an efficient Gaussian-mixture-model-based depth map fusion approach, introducing an online update scheme for dense representations. The environment is modeled from an ego-centric point of view, where each pixel is represented by a mixture of Gaussian inverse-depth models. Consecutive frames are related to each other by transformations obtained from visual odometry. This approach achieves better accuracy than alternative image-space depth map fusion techniques at lower computational cost.
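To make the per-pixel online update concrete, the following is a minimal Python sketch of a mixture of Gaussians over inverse depth for a single pixel, in the spirit of Stauffer–Grimson-style online mixture updates. The class and parameter names (InverseDepthGMM, match_sigma, learning_rate, init_var) and the exact update rules are illustrative assumptions rather than the paper's formulation, and the visual-odometry warping between consecutive frames is omitted.

```python
# Minimal sketch: online GMM over inverse depth for one pixel.
# All names, thresholds, and update rules are assumptions for illustration.
import numpy as np

class InverseDepthGMM:
    """Mixture of Gaussians over inverse depth for a single pixel."""

    def __init__(self, n_components=3, match_sigma=2.5,
                 learning_rate=0.05, init_var=0.0025):
        self.K = n_components
        self.match_sigma = match_sigma   # Mahalanobis gate for matching a component
        self.alpha = learning_rate       # online learning rate (assumed value)
        self.init_var = init_var         # variance for newly created components
        self.means = np.zeros(self.K)    # inverse-depth means
        self.vars_ = np.full(self.K, init_var)
        self.weights = np.zeros(self.K)

    def update(self, depth):
        """Fuse one new depth measurement (meters) into the mixture."""
        if depth <= 0:                   # invalid measurement: skip
            return
        z = 1.0 / depth                  # work in inverse depth
        d = np.abs(z - self.means) / np.sqrt(self.vars_)
        matched = np.where((self.weights > 0) & (d < self.match_sigma))[0]
        if matched.size:
            # update the closest matching component
            k = matched[np.argmin(d[matched])]
            self.means[k] += self.alpha * (z - self.means[k])
            self.vars_[k] += self.alpha * ((z - self.means[k]) ** 2 - self.vars_[k])
            self.weights += self.alpha * ((np.arange(self.K) == k) - self.weights)
        else:
            # no match: replace the weakest component with a new low-weight one
            k = np.argmin(self.weights)
            self.means[k], self.vars_[k], self.weights[k] = z, self.init_var, self.alpha
        self.weights /= self.weights.sum()

    def fused_depth(self):
        """Depth of the most confident (high-weight, low-variance) mode."""
        k = np.argmax(self.weights / np.sqrt(self.vars_))
        return 1.0 / self.means[k] if self.means[k] > 0 else np.nan


# Example: noisy readings of a surface ~4 m away, with one outlier at 12 m
pixel = InverseDepthGMM()
for meas in [4.1, 3.9, 4.0, 12.0, 4.05, 3.95]:
    pixel.update(meas)
print(round(pixel.fused_depth(), 2))   # close to 4.0; the outlier stays in a weak component
```

In a full image-space pipeline, one such mixture would be kept per pixel and the whole ego-centric model would be warped into the current frame using the visual-odometry transformation before each update.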
Keywords
Gaussian mixture models,temporal depth fusion,depth map fusion approach,Gaussian inverse-depth models,visual odometry,computational cost,obstacle detection