Multi-Robot Collision Avoidance with Map-based Deep Reinforcement Learning

2020 IEEE 32nd International Conference on Tools with Artificial Intelligence (ICTAI)

Abstract
Multi-robot collision avoidance in a communication-free environment is one of the key issues for mobile robotics and autonomous driving. In this paper, we propose a map-based deep reinforcement learning (DRL) approach for collision avoidance of multiple robots, where robots do not communicate with each other and only sense other robots' positions and the obstacles around them. We use the egocentric grid map of a robot to represent the environmental information around it, which can be easily generated using multiple sensors or sensor fusion. The policy learned by the DRL model directly maps 3 frames of egocentric grid maps and the robot's relative local goal position into low-level robot control commands. We first train a convolutional neural network for the navigation policy in a simulator of multiple mobile robots using proximal policy optimization (PPO). We then deploy the trained model to real robots to perform collision avoidance during navigation. We evaluate the approach in various scenarios both in the simulator and on three differential-drive mobile robots in the real world. Both qualitative and quantitative experiments show that our approach is efficient with a high success rate. The demonstration video can be found at https://youtu.be/jcLKlEXuFuk.
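
The sketch below illustrates the kind of policy network the abstract describes: a CNN that consumes 3 stacked egocentric grid maps plus the robot's relative local goal and outputs low-level velocity commands, suitable for training with PPO. The map size, layer widths, goal encoding, and Gaussian action head are assumptions for illustration, not details taken from the paper.

```python
# Hypothetical sketch of a map-based DRL policy (assumptions: 60x60 maps,
# 2-D relative goal, Gaussian action head over linear/angular velocity).
import torch
import torch.nn as nn

class MapBasedPolicy(nn.Module):
    def __init__(self, map_size: int = 60, goal_dim: int = 2, action_dim: int = 2):
        super().__init__()
        # Convolutional encoder for the 3-frame egocentric grid map stack.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        with torch.no_grad():
            feat_dim = self.encoder(torch.zeros(1, 3, map_size, map_size)).shape[1]
        # Fuse map features with the relative local goal position.
        self.head = nn.Sequential(
            nn.Linear(feat_dim + goal_dim, 256), nn.ReLU(),
            nn.Linear(256, action_dim),  # mean of (linear, angular) velocity
        )
        # Learned log-std for the Gaussian policy used by PPO.
        self.log_std = nn.Parameter(torch.zeros(action_dim))

    def forward(self, grid_maps: torch.Tensor, goal: torch.Tensor):
        feat = self.encoder(grid_maps)
        mean = self.head(torch.cat([feat, goal], dim=-1))
        return mean, self.log_std.exp()

# Example rollout step: sample a velocity command for one robot.
policy = MapBasedPolicy()
maps = torch.zeros(1, 3, 60, 60)   # 3 most recent egocentric grid maps
goal = torch.tensor([[1.5, 0.3]])  # relative local goal in the robot frame
mean, std = policy(maps, goal)
action = torch.normal(mean, std)   # (linear velocity, angular velocity)
```

In a PPO setup, the sampled action would be executed in the multi-robot simulator and the Gaussian log-probability reused for the clipped surrogate objective; the same network, with the grid maps built from real sensor data, can then drive the physical robots.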
Keywords
multi-robot collision avoidance, reinforcement learning, egocentric grid map