Visual-Inertial Teach And Repeat Powered By Google Tango

2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2018

Abstract
Many industrial facilities require periodic visual inspections. Often the points of interest are out of reach or in potentially hazardous environments. Multi-copters are ideal platforms for automating this expensive and tedious task. This video presents a system that enables a human operator to teach a visual inspection task to an autonomous aerial vehicle simply by demonstrating the task with a tablet. The system employs the Google Tango visual-inertial mapping framework as the only source of pose estimates, enabling operation in GPS-denied environments. In a first step, the operator records the desired inspection path using the tablet. Inspection points are inserted automatically whenever the operator pauses while holding a viewpoint. The mapping framework then computes a feature-based localization map, which is shared with the robot. After take-off, the robot estimates its pose against this map and plans a smooth trajectory through the waypoints defined by the operator. Furthermore, the system can track the global pose of other robots or of the operator, localized in the same map, and follow them in real time while avoiding collisions. This is demonstrated in the second part of the video, where the robot follows the operator in real time through a hedge maze.
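The teach phase hinges on detecting when the operator holds a viewpoint long enough to count as an inspection point. The paper does not spell out this logic, so the sketch below is only a plausible reconstruction: the pose-trace format, the function name extract_inspection_points, and the thresholds pause_duration and pos_tol are all assumptions, not the authors' implementation.

import numpy as np

def extract_inspection_points(poses, pause_duration=2.0, pos_tol=0.05):
    """Return (position, orientation) pairs where the operator held a viewpoint.

    poses          -- list of (timestamp_s, position_xyz ndarray, quaternion)
    pause_duration -- assumed dwell time in seconds that counts as a pause
    pos_tol        -- assumed motion tolerance in metres within one pause
    """
    inspection_points = []
    window_start = 0   # index where the current dwell window began
    recorded = False   # whether this dwell window already produced a point
    for i in range(1, len(poses)):
        t0, p0, q0 = poses[window_start]
        t, p, _ = poses[i]
        if np.linalg.norm(p - p0) > pos_tol:
            # Operator moved on: start a new dwell window at the current sample.
            window_start, recorded = i, False
        elif not recorded and t - t0 >= pause_duration:
            # Viewpoint held long enough: keep the pose where the pause began.
            inspection_points.append((p0, q0))
            recorded = True
    return inspection_points

Given a pose trace recorded on the tablet at, say, 30 Hz, extract_inspection_points(trace) would yield the held viewpoints that the repeat-phase planner later visits; both thresholds are guesses and would need tuning against the actual Tango pose stream.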
Keywords
human operator,visual inspection task,autonomous aerial vehicle,Google Tango visual-inertial mapping framework,pose estimates,GPS-denied environments,inspection points,feature-based localization map,industrial facilities,multicopters,visual-inertial teach,hedge maze