Where Should I Walk? Predicting Terrain Properties From Images Via Self-Supervised Learning

IEEE ROBOTICS AND AUTOMATION LETTERS (2019)

Cited 156 | Viewed 172
Abstract
Legged robots have the potential to traverse diverse and rugged terrain. To find a safe and efficient navigation path and to carefully select individual footholds, it is useful to be able to predict properties of the terrain ahead of the robot. In this letter, we propose a method to collect data from robot-terrain interaction and associate it to images. Using sparse data acquired in teleoperation experiments with a quadrupedal robot, we train a neural network to generate a dense prediction of the terrain properties in front of the robot. To generate training data, we project the foothold positions from the robot trajectory into on-board camera images. We then attach labels to these footholds by identifying the dominant features of the force-torque signal measured with sensorized feet. We show that data collected in this fashion can be used to train a convolutional network for terrain property prediction as well as weakly supervised semantic segmentation. Finally, we show that the predicted terrain properties can be used for autonomous navigation of the ANYmal quadruped robot.
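As a rough illustration of the labeling step described in the abstract (projecting foothold positions from the robot trajectory into on-board camera images), the sketch below uses a standard pinhole projection. The function name, the intrinsic matrix `K`, and the world-to-camera transform `T_cam_world` are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

def project_footholds(footholds_world, T_cam_world, K):
    """Project 3-D foothold positions (world frame) into pixel coordinates.

    footholds_world: (N, 3) foothold positions in the world frame.
    T_cam_world:     (4, 4) homogeneous transform from world to camera frame.
    K:               (3, 3) camera intrinsic matrix.
    Returns (N, 2) pixel coordinates and a boolean mask of points with
    positive depth (in front of the camera).
    """
    # Homogeneous coordinates, then transform into the camera frame.
    n = footholds_world.shape[0]
    pts_h = np.hstack([footholds_world, np.ones((n, 1))])  # (N, 4)
    pts_cam = (T_cam_world @ pts_h.T).T[:, :3]             # (N, 3)

    # Only points in front of the camera can receive a label.
    in_front = pts_cam[:, 2] > 0.0

    # Pinhole projection: pixel = K * (X/Z, Y/Z, 1).
    uv_h = (K @ (pts_cam.T / pts_cam[:, 2])).T              # (N, 3)
    return uv_h[:, :2], in_front
```

In a pipeline like the one described, the resulting pixel locations would then be tagged with the terrain-property label extracted from the force-torque signal of the corresponding footstep, yielding sparse training targets in the image.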
Keywords
Semantic scene understanding, visual-based navigation, visual learning