Stochastic Proximal Gradient Consensus Over Time-Varying Networks

2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2016

Cited by 4 | Viewed 52 times
Abstract
We consider solving a convex, nonsmooth, stochastic optimization problem over a multi-agent network. Each agent has access to a local objective function and can communicate only with its immediate neighbors. We develop a dynamic stochastic proximal-gradient consensus (DySPGC) algorithm with the following features: i) it works for both static and randomly time-varying networks; ii) it can handle either exact or stochastic gradient information; iii) it has a provable convergence rate. Interestingly, the developed algorithm includes as special cases many existing (and seemingly unrelated) first-order algorithms for distributed optimization over static networks, such as EXTRA (Shi et al., 2014), PG-EXTRA (Shi et al., 2015), IC/IDC-ADMM (Chang et al., 2014), and DLM (Ling et al., 2015). It is also closely related to the classical distributed gradient method.
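The abstract does not spell out the DySPGC updates, but the ingredients it names — a proximal step for the nonsmooth term, stochastic (noisy) gradients, and neighbor-only communication through a mixing matrix — can be illustrated with a minimal sketch. The code below is a plain stochastic proximal-gradient consensus loop, not the paper's exact algorithm; the mixing matrix `W`, the l1 regularizer, the noise model, and all parameter values are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 (our assumed nonsmooth term)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def stochastic_prox_grad_consensus(grads, W, x0, alpha=0.1, lam=0.01,
                                   noise_std=0.05, iters=200, rng=None):
    """Sketch of a stochastic proximal-gradient consensus loop.

    Each agent i minimizes f_i(x) + lam * ||x||_1, where f_i is smooth and
    only noisy gradient samples of f_i are available.

    grads : list of callables, grads[i](x) = exact gradient of f_i at x
    W     : doubly stochastic mixing matrix matching the network topology
    """
    rng = np.random.default_rng(rng)
    n = len(grads)
    X = np.tile(x0, (n, 1))  # one local copy of the decision variable per agent
    for _ in range(iters):
        # Consensus step: each agent averages with its neighbors via W.
        # For a randomly time-varying network, W would be redrawn here
        # at every iteration instead of being fixed.
        mixed = W @ X
        # Stochastic gradient step: each agent uses a noisy gradient sample.
        G = np.stack([g(X[i]) + noise_std * rng.standard_normal(X[i].shape)
                      for i, g in enumerate(grads)])
        # Proximal step handles the nonsmooth l1 term.
        X = soft_threshold(mixed - alpha * G, alpha * lam)
    return X.mean(axis=0)

# Example (all values hypothetical): three agents with quadratic losses
# f_i(x) = 0.5 * ||x - b_i||^2 on a fully connected three-node network.
b = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])]
grads = [lambda x, bi=bi: x - bi for bi in b]
W = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])  # doubly stochastic mixing matrix
x_hat = stochastic_prox_grad_consensus(grads, W, x0=np.zeros(2))
```

With exact gradients (`noise_std=0`) and this choice of proximal term, the loop reduces to a distributed proximal-gradient scheme in the spirit of the static-network special cases the abstract mentions; the paper's contribution is a unified algorithm covering these cases with a convergence-rate guarantee.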
Keywords
Consensus optimization, alternating direction method of multipliers, stochastic optimization