rxKinFu: Moving Volume KinectFusion for 3D Perception and Robotics

Dimitrios Kanoulas, Nikos G. Tsagarakis, Marsette Vona

2018 IEEE-RAS 18th International Conference on Humanoid Robots (Humanoids), 2018

Abstract
KinectFusion is an impressive algorithm, introduced in 2011, that simultaneously tracks the movement of a depth camera in 3D space and densely reconstructs the environment as a Truncated Signed Distance Function (TSDF) volume, in real time. In 2012, we introduced the Moving Volume KinectFusion method, which allows the volume/camera to move freely in space. In this work, we further develop the Moving Volume KinectFusion method (as rxKinFu) to better fit robotic perception applications, especially locomotion and manipulation tasks. We describe methods to ray cast point clouds from the volume using virtual cameras, and use the point clouds for heightmap generation (e.g., useful for locomotion) or dense object point cloud extraction (e.g., useful for manipulation). Moreover, we present different methods for keeping the camera fixed with respect to the moving volume, also fusing IMU data and the camera heading/velocity estimation. Finally, we integrate and demonstrate rxKinFu on the mini-bipedal robot RPBP, our wheeled quadrupedal robot CENTAURO, and the newly developed full-size humanoid robot COMAN+. We release the code as an open-source package, built on the Robot Operating System (ROS) and the Point Cloud Library (PCL).
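The heightmap generation mentioned in the abstract can be illustrated with a minimal sketch: bin a ray-cast point cloud into a 2D grid and keep the maximum height per cell. This is an assumption-laden toy example (the function name, parameters, and gridding scheme are illustrative and not the rxKinFu API):

```python
import numpy as np

def pointcloud_to_heightmap(points, resolution=0.02,
                            x_range=(-1.0, 1.0), y_range=(-1.0, 1.0)):
    """Bin an Nx3 point cloud (x, y, z) into a 2D height grid.

    Each cell stores the maximum z of the points falling into it;
    empty cells are NaN. Illustrative sketch only, not the rxKinFu API.
    """
    nx = int(round((x_range[1] - x_range[0]) / resolution))
    ny = int(round((y_range[1] - y_range[0]) / resolution))
    heightmap = np.full((ny, nx), np.nan)

    # Map each point's (x, y) to integer cell indices.
    ix = np.floor((points[:, 0] - x_range[0]) / resolution).astype(int)
    iy = np.floor((points[:, 1] - y_range[0]) / resolution).astype(int)
    inside = (ix >= 0) & (ix < nx) & (iy >= 0) & (iy < ny)

    # Keep the highest point per cell (a max-filter over the grid).
    for i, j, z in zip(ix[inside], iy[inside], points[inside, 2]):
        if np.isnan(heightmap[j, i]) or z > heightmap[j, i]:
            heightmap[j, i] = z
    return heightmap
```

A map like this, extracted from the moving TSDF volume via a virtual downward-looking camera, gives a locomotion planner a compact terrain representation without reprocessing raw depth frames.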
Keywords
Robot Operating System, camera heading, velocity estimation, dense point cloud extraction, virtual cameras, ray cast point clouds, robotic perception applications, volume/camera, Moving Volume KinectFusion method, Truncated Signed Distance Function volume, depth camera, 3D perception, Point Cloud Library