Distributed Non-Convex First-Order Optimization and Information Processing: Lower Complexity Bounds and Rate Optimal Algorithms.

2018 52nd Asilomar Conference on Signals, Systems, and Computers (2019)

Citations: 70 | Views: 121
Abstract
Consider a distributed non-convex optimization problem in which a number of agents connected by a network $\mathcal{G}$ collectively optimize a sum of smooth, non-convex local objective functions. We address the following important question: for a class of unconstrained problems in which only local gradient information is available, what is the fastest rate that distributed algorithms can achieve, and how can those rates be attained? We perform a lower bound analysis for a class of first-order distributed methods that use only local gradient information. We show that in the worst case it takes at least $\mathcal{O}(1/\sqrt{\xi(\mathcal{G})}\times L/\epsilon)$ iterations to achieve a certain $\epsilon$-solution, where $\xi(\mathcal{G})$ denotes the spectral gap of the graph Laplacian matrix and $L$ is a Lipschitz constant. Further, for a general problem class, we propose rate-optimal methods whose rates match the lower bounds (up to a polylogarithmic factor). To the best of our knowledge, this is the first time that lower rate bounds and optimal methods have been developed for distributed non-convex optimization problems.
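
To make the bound concrete, below is a minimal numerical sketch (not the authors' code) that computes one common normalization of the spectral gap, $\xi(\mathcal{G}) = \lambda_2 / \lambda_{\max}$ of the graph Laplacian, for a path graph and plugs it into the $\mathcal{O}(L/(\sqrt{\xi(\mathcal{G})}\,\epsilon))$ expression. The choice of graph, the Lipschitz constant, the accuracy target, and the exact normalization of $\xi(\mathcal{G})$ are illustrative assumptions; the paper may define these quantities differently.

```python
# A minimal sketch, assuming xi(G) = lambda_2 / lambda_max of the graph Laplacian
# (one common normalization; not necessarily the paper's exact definition).
import numpy as np

def laplacian_spectral_gap(adj: np.ndarray) -> float:
    """Return lambda_2 / lambda_max of the (unnormalized) graph Laplacian."""
    deg = np.diag(adj.sum(axis=1))
    lap = deg - adj
    eigvals = np.sort(np.linalg.eigvalsh(lap))
    lambda_2 = eigvals[1]      # second-smallest eigenvalue (algebraic connectivity)
    lambda_max = eigvals[-1]   # largest eigenvalue
    return lambda_2 / lambda_max

# Illustrative example: a path graph on n agents. Its spectral gap shrinks roughly
# like 1/n^2, so the worst-case iteration count grows with the network size.
n = 10
adj = np.zeros((n, n))
for i in range(n - 1):
    adj[i, i + 1] = adj[i + 1, i] = 1.0

xi = laplacian_spectral_gap(adj)
L_lip, eps = 1.0, 1e-3         # illustrative Lipschitz constant and target accuracy
lower_bound_iters = L_lip / (np.sqrt(xi) * eps)
print(f"xi(G) = {xi:.4f}, worst-case iterations ~ O({lower_bound_iters:.1e})")
```

As the sketch suggests, poorly connected topologies (small $\xi(\mathcal{G})$) inflate the iteration lower bound, which is the dependence the paper's rate-optimal methods are designed to match.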
Keywords
Signal processing algorithms,Optimization,Convergence,Approximation algorithms,Prediction algorithms,Distributed algorithms,Symmetric matrices