Convergence Rates of Average-Reward Multi-agent Reinforcement Learning via Randomized Linear Programming

arXiv (2021)

Abstract
In tabular multi-agent reinforcement learning with the average-cost criterion, a team of agents sequentially interacts with the environment and observes local incentives. We focus on the case in which the global reward is a sum of local rewards, the joint policy factorizes into agents' marginals, and the state is fully observable. To date, few global optimality guarantees exist even for this simple setting, as most results yield convergence to stationarity for parameterized policies in large or possibly continuous spaces. To solidify the foundations of MARL, we build upon linear programming (LP) reformulations, for which stochastic primal-dual methods yield a model-free approach achieving optimal sample complexity in the centralized case. We develop multi-agent extensions whereby agents solve their local saddle-point problems and then perform local weighted averaging. We establish that the sample complexity required to obtain near-globally optimal solutions matches tight dependencies on the cardinality of the state and action spaces, and exhibits classical scalings with respect to the network, in accordance with multi-agent optimization. Experiments corroborate these results in practice.
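The centralized building block the abstract refers to, stochastic primal-dual ascent/descent on the LP (occupancy-measure) formulation of an average-reward MDP, can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's algorithm: the toy MDP, step sizes, and the exponentiated-gradient primal update are all assumptions made here for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy MDP (sizes and dynamics are illustrative assumptions).
nS, nA = 3, 2
P = rng.dirichlet(np.ones(nS), size=(nS, nA))   # P[s, a] = next-state distribution
r = rng.uniform(size=(nS, nA))                  # rewards in [0, 1]

# Primal variable: occupancy measure mu over (s, a), a point on the simplex.
# Dual variable: differential value (bias) function v over states.
mu = np.full((nS, nA), 1.0 / (nS * nA))
v = np.zeros(nS)
eta_mu, eta_v = 0.5, 0.1

for t in range(5000):
    # Sample one transition, model-free style, from the current occupancy measure.
    idx = rng.choice(nS * nA, p=mu.ravel())
    s, a = divmod(idx, nA)
    s_next = rng.choice(nS, p=P[s, a])

    # Stochastic gradient of the Lagrangian w.r.t. mu at (s, a):
    # an advantage-like term r(s, a) + v(s') - v(s).
    adv = r[s, a] + v[s_next] - v[s]

    # Exponentiated-gradient (mirror) ascent keeps mu on the simplex.
    mu[s, a] *= np.exp(eta_mu * adv / np.sqrt(t + 1))
    mu /= mu.sum()

    # Dual descent on v enforces flow conservation: penalize mass leaving s
    # for s_next whenever the stationarity constraint is violated.
    g = np.zeros(nS)
    g[s] -= 1.0
    g[s_next] += 1.0
    v -= eta_v * g / np.sqrt(t + 1)

# Estimated average reward under the learned occupancy measure.
avg_reward = (mu * r).sum()
```

In the multi-agent extension described in the abstract, each agent would run such a saddle-point update on its local problem and then mix its iterates with neighbors via weighted averaging; that consensus step is omitted here.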
Keywords
action spaces, average-cost criterion, average-reward multiagent reinforcement learning, centralized case, convergence rates, full state observability, global optimality guarantees, global reward, joint policy, linear programming, local incentives, local rewards, local saddle point problems, local weighted averaging, model-free approach, multiagent extensions, multiagent optimization, near-globally optimal solutions, optimal sample complexity, parameterized policies, randomized linear programming, state observability, stochastic primal-dual methods, tabular multiagent reinforcement learning