Meta Proximal Policy Optimization for Cooperative Multi-Agent Continuous Control

IEEE International Joint Conference on Neural Networks (IJCNN), 2022

Abstract
In this paper, we propose Multi-Agent Proxy Proximal Policy Optimization (MA3PO), a novel multi-agent deep reinforcement learning algorithm that tackles the challenge of cooperative multi-agent continuous control. Our method is driven by the observation that most existing multi-agent reinforcement learning algorithms focus on discrete state/action spaces and are thus computationally infeasible when extended to environments with continuous state/action spaces. To address this computational complexity and to better model inter-agent collaboration, we build on the recently successful Proximal Policy Optimization algorithm, which effectively explores continuous action spaces, and incorporate intrinsic motivation via meta-gradient methods to shape the behavior of individual agents in cooperative multi-agent settings. To these ends, we design proxy rewards that quantify the effect of individual agent-level intrinsic motivation on the team-level reward, and apply meta-gradient methods to adapt this addition so that our algorithm effectively maximizes the team-level cumulative reward. Experiments on several multi-agent reinforcement learning benchmark environments with continuous action spaces demonstrate that our algorithm is not only competitive with existing state-of-the-art methods but also significantly reduces training time.
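The abstract describes the core mechanism: a proxy reward that mixes the team-level (extrinsic) reward with a per-agent intrinsic bonus, whose weight is adapted by meta-gradients through a PPO update. Below is a minimal illustrative sketch of that idea, not the authors' MA3PO implementation; the toy rollout data, the unit-variance Gaussian policy, the intrinsic weight `eta`, and the learning rates are all assumptions made for this example.

```python
import torch

# --- Toy data standing in for one agent's rollout (illustrative only) ---
torch.manual_seed(0)
obs = torch.randn(64, 4)          # observations
actions = torch.randn(64, 1)      # continuous actions taken
log_prob_old = torch.randn(64)    # behaviour-policy log-probs (fixed)
team_adv = torch.randn(64)        # advantages from the team-level reward
intr_adv = torch.randn(64)        # advantages from the intrinsic bonus

theta = torch.zeros(4, 1, requires_grad=True)  # policy mean weights
eta = torch.tensor(0.1, requires_grad=True)    # meta-learned intrinsic weight

def log_prob(theta, obs, actions):
    # Gaussian policy with unit variance: log N(a | obs @ theta, 1), up to a constant.
    mean = obs @ theta
    return -0.5 * ((actions - mean) ** 2).sum(-1)

def ppo_clip(lp_new, lp_old, adv, eps=0.2):
    # Standard PPO clipped surrogate loss (negated for minimization).
    ratio = torch.exp(lp_new - lp_old)
    return -torch.min(ratio * adv,
                      torch.clamp(ratio, 1 - eps, 1 + eps) * adv).mean()

# Inner PPO step on the proxy advantage A_team + eta * A_intrinsic.
proxy_adv = team_adv + eta * intr_adv
inner_loss = ppo_clip(log_prob(theta, obs, actions), log_prob_old, proxy_adv)
grad_theta, = torch.autograd.grad(inner_loss, theta, create_graph=True)
theta_new = theta - 0.01 * grad_theta  # differentiable policy update

# Outer (meta) objective: purely extrinsic team performance of the updated policy.
outer_loss = ppo_clip(log_prob(theta_new, obs, actions), log_prob_old, team_adv)
eta_grad, = torch.autograd.grad(outer_loss, eta)
with torch.no_grad():
    eta -= 0.1 * eta_grad  # meta-gradient step on the intrinsic weight
```

The design point this sketch illustrates is the one-step differentiable inner update: because `theta_new` is an explicit function of `eta`, the outer, purely extrinsic objective can be differentiated back through the PPO step to adjust the intrinsic-reward weight toward team-level performance.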
Keywords
Multi-agent Reinforcement Learning, Intrinsic Motivation, Continuous Control