Integrating Self-Organizing Neural Network And Motivated Learning For Coordinated Multi-Agent Reinforcement Learning In Multi-Stage Stochastic Game

Proceedings of the 2014 International Joint Conference on Neural Networks (IJCNN), 2014

Citations 26 | Views 31
Abstract
Most non-trivial problems require the coordinated performance of multiple goal-oriented and time-critical tasks. Coordinating the performance of the tasks is required due to the dependencies among the tasks and the sharing of resources. In this work, an agent learns to perform a task using reinforcement learning with a self-organizing neural network as the function approximator. We propose a novel coordination strategy integrating Motivated Learning (ML) and a self-organizing neural network for multi-agent reinforcement learning (MARL). Specifically, we adapt the ML idea of using a pain signal to overcome the resource competition issue. Dependencies among the agents are resolved using domain knowledge of their interdependence. To avoid domineering agents, the task goals are staggered over multiple stages; a stage is completed by attaining a particular combination of task goals. Results from our experiments, conducted using a popular PC-based game known as Starcraft Broodwar, show that the goals of multiple tasks can be attained efficiently using our proposed coordination strategy.
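The staged-goal idea from the abstract can be sketched as follows. This is an illustrative assumption only, not the paper's actual formulation: the class and function names (`StagedCoordinator`, `pain_signal`), the set-based goal representation, and the shortfall-based pain rule are all hypothetical stand-ins for the method described in the text.

```python
# Illustrative sketch only: all names and the pain update rule below are
# assumptions, not the formulation used in the paper.

class StagedCoordinator:
    """Tracks multi-stage task goals; a stage completes only when a
    particular combination (here: a set) of goals is attained."""

    def __init__(self, stages):
        # stages: ordered list of goal-name sets, completed one stage at a time
        self.stages = stages
        self.current = 0

    def update(self, attained_goals):
        """Advance to the next stage once every goal of the current stage holds."""
        if self.current < len(self.stages) and self.stages[self.current] <= attained_goals:
            self.current += 1
        return self.current


def pain_signal(demand, available, scale=1.0):
    """A pain-like scalar that grows as a shared resource becomes scarce
    (hypothetical rule standing in for the ML pain signal)."""
    shortfall = max(0.0, demand - available)
    return scale * shortfall


# Staggering goals over stages prevents one agent from dominating: the
# second stage is unreachable until the whole first combination is met.
coord = StagedCoordinator([{"gather_minerals", "build_barracks"}, {"train_marines"}])
coord.update({"gather_minerals"})                             # stage 0 still incomplete
stage = coord.update({"gather_minerals", "build_barracks"})   # stage 0 done, now at 1
```

In a full system, each agent would add `pain_signal` to its reward shaping so that competition for a scarce shared resource (e.g., minerals) is penalized rather than left to escalate.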
Keywords
function approximation, learning (artificial intelligence), multi-agent systems, self-organising feature maps, stochastic games, MARL, ML, PC-based game, Starcraft Broodwar, coordinated multiagent reinforcement learning, coordination strategy, domain knowledge, function approximator, motivated learning, multistage stochastic game, pain signal, resource competition, self-organizing neural network