Efficient Cooperation Strategy Generation in Multi-Agent Video Games via Hypergraph Neural Network

ArXiv(2022)

Abstract
The performance of deep reinforcement learning (DRL) in single-agent video games is impressive because of its strength in sequential decision-making. However, researchers face additional difficulties when working with video games in multi-agent environments. One of the most pressing open issues is how to achieve sufficient collaboration among agents in scenarios with many agents. To address this issue, we propose a novel algorithm based on the actor-critic method, which models the agents with a hypergraph structure and employs hypergraph convolution to extract and represent information features across agents, enabling efficient collaboration. Based on different ways of generating the hypergraph structure, we present the HGAC and ATT-HGAC algorithms. We demonstrate the advantages of our approach over existing methods. Ablation and visualization studies further confirm the contribution of each component of the algorithm.
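The abstract does not spell out the layer equations, so the following is a minimal sketch (not the authors' code) of a standard hypergraph convolution of the form X' = σ(D_v^{-1/2} H W D_e^{-1} H^T D_v^{-1/2} X Θ), applied to per-agent feature vectors. The class name, feature dimensions, and the way hyperedges group agents are illustrative assumptions.

```python
import torch
import torch.nn as nn
from typing import Optional


class HypergraphConv(nn.Module):
    """One layer of hypergraph convolution over per-agent features."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.theta = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, x: torch.Tensor, incidence: torch.Tensor,
                edge_weight: Optional[torch.Tensor] = None) -> torch.Tensor:
        # x:         (n_agents, in_dim) per-agent feature vectors
        # incidence: (n_agents, n_hyperedges) binary matrix H, where
        #            H[i, e] = 1 if agent i belongs to hyperedge e
        n_agents, n_edges = incidence.shape
        if edge_weight is None:
            edge_weight = torch.ones(n_edges, device=x.device)
        W = torch.diag(edge_weight)

        # Vertex and hyperedge degrees, clamped to avoid division by zero
        d_v = (incidence @ edge_weight).clamp(min=1.0)
        d_e = incidence.sum(dim=0).clamp(min=1.0)
        Dv_inv_sqrt = torch.diag(d_v.pow(-0.5))
        De_inv = torch.diag(d_e.reciprocal())

        # X' = Dv^{-1/2} H W De^{-1} H^T Dv^{-1/2} X Theta
        agg = Dv_inv_sqrt @ incidence @ W @ De_inv @ incidence.T @ Dv_inv_sqrt @ x
        return torch.relu(self.theta(agg))


# Toy usage: 5 agents grouped by 3 hyperedges (e.g. agents sharing a sub-task).
if __name__ == "__main__":
    H = torch.tensor([[1., 0., 0.],
                      [1., 1., 0.],
                      [0., 1., 0.],
                      [0., 1., 1.],
                      [0., 0., 1.]])
    agent_feats = torch.randn(5, 16)          # per-agent observation embeddings
    conv = HypergraphConv(in_dim=16, out_dim=32)
    print(conv(agent_feats, H).shape)         # torch.Size([5, 32])
```

In the paper's variants, the difference lies in how the incidence matrix is produced (e.g., a fixed construction for HGAC versus an attention-based one for ATT-HGAC); the convolution step itself is the shared building block illustrated here.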