Towards Understanding Linear Value Decomposition in Cooperative Multi-Agent Q-Learning

arXiv (2021)

Cited by 21
Abstract
Linear value decomposition is a widely used structure for scaling up multi-agent Q-learning algorithms in cooperative settings. To develop a deeper understanding of this popular technique, this paper provides the first theoretical analysis characterizing its internal mechanism. Our main results reveal two novel insights: (1) the linear value decomposition structure implicitly implements a classical credit assignment mechanism known as difference rewards; (2) this implicit credit assignment requires an on-policy data distribution to achieve numerical stability. Our experiments further demonstrate that most deep multi-agent Q-learning algorithms that use a linear value decomposition structure cannot efficiently utilize off-policy samples.
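To make the structure concrete, below is a minimal PyTorch sketch of the linear value decomposition the abstract refers to (a VDN-style additive mixer), where the joint action-value is the sum of per-agent utilities. This is an illustrative sketch, not code from the paper; the class and variable names are invented here.

```python
import torch
import torch.nn as nn

class LinearValueDecomposition(nn.Module):
    """Additive (VDN-style) mixer: Q_tot(s, a) = sum_i Q_i(tau_i, a_i).

    The paper's first insight is that training each per-agent utility Q_i
    through this summed joint value implicitly assigns agent i a
    difference-rewards-style credit, roughly r(s, a) - r(s, (a_{-i}, c_i))
    for some default action c_i of agent i.
    """

    def forward(self, agent_qs: torch.Tensor) -> torch.Tensor:
        # agent_qs: (batch, n_agents) utilities of each agent's chosen action
        return agent_qs.sum(dim=1, keepdim=True)  # (batch, 1) joint Q_tot


# Example: 3 agents, batch of 2
mixer = LinearValueDecomposition()
qs = torch.tensor([[1.0, 0.5, -0.2],
                   [0.0, 2.0, 1.0]])
print(mixer(qs))  # tensor([[1.3000], [3.0000]])
```

Because the mixer is a fixed sum, the joint TD error distributes additively across agents, and the resulting per-agent credit depends on which joint actions the data covers; this is the mechanism the abstract connects to the on-policy requirement for numerical stability.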