Adaptive Leader-Follower Formation Control And Obstacle Avoidance Via Deep Reinforcement Learning

2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)

Citations: 16
Abstract
We propose a deep reinforcement learning (DRL) methodology for the tracking, obstacle avoidance, and formation control of nonholonomic robots. By separating vision-based control into a perception module and a controller module, we can train a DRL agent without sophisticated physics or 3D modeling. In addition, the modular framework avoids the daunting retraining of an end-to-end image-to-action neural network and provides flexibility in transferring the controller to different robots. First, we train a convolutional neural network (CNN) to localize the robot accurately in indoor settings with dynamic foregrounds and backgrounds. Then, we design a new DRL algorithm for continuous control tasks, named Momentum Policy Gradient (MPG), and prove its convergence. We also show that MPG robustly tracks varying leader movements and extends naturally to formation control. Through reward shaping, features such as collision and obstacle avoidance are easily integrated into the DRL controller.
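To make the reward-shaping claim concrete, the following is a minimal sketch of how tracking and obstacle avoidance could be combined into a single shaped reward. The function name, weights, and safety radius (shaped_reward, w_track, w_obs, d_safe, r_collision) are illustrative assumptions, not values or code from the paper:

```python
import numpy as np

def shaped_reward(follower_pos, target_pos, obstacle_dists,
                  w_track=1.0, w_obs=0.5, d_safe=0.3, r_collision=-10.0):
    """Combine formation-tracking error with obstacle-avoidance penalties.

    follower_pos, target_pos: 2D positions, np.ndarray of shape (2,).
    obstacle_dists: distances from the follower to each sensed obstacle.
    """
    # Tracking term: penalize distance to the desired formation slot.
    tracking_error = np.linalg.norm(follower_pos - target_pos)
    reward = -w_track * tracking_error

    # Obstacle term: penalize proximity inside a safety radius.
    for d in obstacle_dists:
        if d <= 0.0:
            return r_collision        # contact counts as a collision
        if d < d_safe:
            reward -= w_obs * (d_safe - d) / d_safe
    return reward
```

Under this sketch, a follower trained with MPG (or any continuous-control DRL algorithm) maximizes the shaped return, trading off formation tracking against clearance from obstacles.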
Keywords
convolutional neural network,DRL algorithm,continuous control tasks,tracking varying leader movements,obstacle avoidance,DRL controller,adaptive leader-follower formation control,deep reinforcement learning methodology,nonholonomic robots,vision-based control,perception module,controller module,DRL agent,image-to-action end-to-end neural network