Convergence of heterogeneous distributed learning in stochastic routing games

2015 53RD ANNUAL ALLERTON CONFERENCE ON COMMUNICATION, CONTROL, AND COMPUTING (ALLERTON)(2015)

Abstract
We study convergence properties of distributed learning dynamics in repeated stochastic routing games. The game is stochastic in that each player observes a stochastic vector, the conditional expectation of which is equal to the true loss (almost surely). In particular, we propose a model in which every player m follows a stochastic mirror descent dynamics with Bregman divergence D_{ψ^m} and learning rates η_t^m = θ_m t^{-α_m}. We prove that if all players use the same sequence of learning rates, then their joint strategy converges almost surely to the equilibrium set. If the learning dynamics are heterogeneous, that is, different players use different learning rates, then the joint strategy converges to equilibrium in expectation, and we give upper bounds on the convergence rate. This result holds for general routing games (no smoothness or strong convexity assumptions are required). These results provide a distributed learning model that is robust to measurement noise and other stochastic perturbations, and allows flexibility in the choice of learning algorithm of each player. The results also provide estimates of convergence rates, which are confirmed in simulation.
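The per-player update described in the abstract can be illustrated with a minimal single-player sketch: stochastic mirror descent on the route simplex with the negative-entropy Bregman divergence (which yields exponentiated-gradient updates) and the decaying learning-rate schedule η_t = θ t^{-α}. The toy affine latency function and the values of θ and α below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def stochastic_mirror_descent(loss_fn, n_routes, theta=1.0, alpha=0.6,
                              n_steps=2000, seed=0):
    """Stochastic mirror descent on the simplex with entropy Bregman
    divergence, i.e. exponentiated-gradient updates, using the
    learning-rate schedule eta_t = theta * t**(-alpha).
    `loss_fn`, `theta`, and `alpha` here are illustrative choices."""
    rng = np.random.default_rng(seed)
    x = np.full(n_routes, 1.0 / n_routes)  # start from the uniform strategy
    for t in range(1, n_steps + 1):
        # Noisy loss observation: true loss plus zero-mean perturbation,
        # matching the "unbiased stochastic vector" assumption.
        g_hat = loss_fn(x) + rng.normal(0.0, 0.1, n_routes)
        eta = theta * t ** (-alpha)
        x = x * np.exp(-eta * g_hat)   # entropy mirror step
        x /= x.sum()                   # normalize back onto the simplex
    return x

# Toy congestion setting (hypothetical): per-route latency grows
# linearly with the flow assigned to that route.
true_latency = lambda x: np.array([1.0, 2.0, 3.0]) + x
x_star = stochastic_mirror_descent(true_latency, n_routes=3)
```

In this toy instance the equilibrium puts all flow on the cheapest route, and the iterate concentrates there as the step sizes decay; with the entropy divergence the update stays on the simplex without an explicit projection step.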
Keywords
heterogeneous distributed learning convergence, convergence properties, distributed learning dynamics, stochastic routing games, stochastic vector, stochastic mirror descent dynamics, Bregman divergence, convexity assumptions, distributed learning model, measurement noise, stochastic perturbations, learning algorithm