Efficient Transmission and Rendering of RGB-D Views

Proceedings, Part I, of the 9th International Symposium on Advances in Visual Computing - Volume 8033 (2013)

Abstract
For autonomous navigation of robots in unknown environments, the generation of environmental maps and 3D scene reconstruction play a significant role. Simultaneous localization and mapping (SLAM) helps robots perceive, plan, and navigate autonomously, whereas scene reconstruction helps human supervisors understand the scene and act accordingly during joint activities with the robots. For the successful completion of these joint activities, humans and robots require a detailed understanding of the environment in order to interact with each other. Generally, robots are equipped with multiple sensors and acquire large amounts of data, which is challenging to handle. In this paper we propose an efficient 3D scene reconstruction approach for such scenarios using vision- and graphics-based techniques. The approach can be applied to indoor, outdoor, small-scale, and large-scale environments. The ultimate goal of this work is to apply the system to joint rescue operations executed by human-robot teams by reducing a large amount of point cloud data to a much smaller amount without compromising the visual quality of the scene. Through thorough experimentation, we show that the proposed system is memory- and time-efficient and capable of running on the processing unit mounted on an autonomous vehicle. For experimentation, we use a standard RGB-D benchmark dataset.
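The abstract states that the system reduces a large point cloud to a smaller one before transmission and rendering, but does not describe the reduction step itself. A common baseline for this kind of reduction is voxel-grid downsampling; the sketch below is an illustrative assumption, not the authors' algorithm, and the voxel size and centroid averaging are hypothetical choices.

```python
# Voxel-grid downsampling sketch: all points falling into the same cubic
# voxel are replaced by their centroid, shrinking the cloud while roughly
# preserving its shape. Pure-Python for clarity; real pipelines would use
# a point-cloud library and tune voxel_size to the scene scale.
from collections import defaultdict
import math

def voxel_downsample(points, voxel_size):
    """Group 3D points by voxel index and return one centroid per voxel."""
    buckets = defaultdict(list)
    for x, y, z in points:
        key = (math.floor(x / voxel_size),
               math.floor(y / voxel_size),
               math.floor(z / voxel_size))
        buckets[key].append((x, y, z))
    centroids = []
    for pts in buckets.values():
        n = len(pts)
        centroids.append((sum(p[0] for p in pts) / n,
                          sum(p[1] for p in pts) / n,
                          sum(p[2] for p in pts) / n))
    return centroids

cloud = [(0.01, 0.02, 0.0), (0.03, 0.01, 0.0), (0.9, 0.9, 0.9)]
reduced = voxel_downsample(cloud, voxel_size=0.1)
print(len(reduced))  # → 2: the first two points merge into one voxel
```

The trade-off between voxel size and visual quality mirrors the paper's stated goal: larger voxels mean fewer points to transmit and render, at the cost of fine surface detail.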
Keywords
Point Cloud, Visual Quality, World Trade Center, Point Cloud Data, Robot Team