A Computation-Efficient Decentralized Algorithm for Composite Constrained Optimization

IEEE Transactions on Signal and Information Processing over Networks (2020)

Abstract
This paper focuses on the composite constrained convex optimization problem of minimizing a sum of smooth convex functions plus a non-smooth regularization term (the ℓ1-norm) subject to locally general constraints. Motivated by modern large-scale information processing problems in machine learning, where the samples of a training dataset are randomly decentralized across multiple computing nodes, each smooth objective function is further treated as the average of several constituent functions. To solve the problem in a decentralized fashion, we propose a novel computation-efficient decentralized stochastic gradient algorithm that combines the variance reduction technique with the decentralized stochastic gradient projection method under a constant step-size. Theoretical analysis shows that if the constant step-size is below an explicitly estimated upper bound, the proposed algorithm finds the exact optimal solution in expectation whenever each (smooth) constituent function is strongly convex. Compared with existing decentralized schemes, the proposed algorithm is not only suitable for general constrained optimization problems but also incurs a low computation cost in terms of the total number of local gradient evaluations. Furthermore, equipped with a differential privacy strategy, the proposed algorithm can effectively protect the privacy of each constituent function, which is valuable in applications involving sensitive messages, such as military or medical settings. Finally, numerical evidence demonstrates the appealing performance of the proposed algorithm.
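To make the setting concrete: the problem has the form min_x sum_i f_i(x) + lam * ||x||_1 with x constrained to a local set X_i at each node i, where f_i(x) = (1/m) sum_s f_{i,s}(x) averages the constituent functions held by node i. The NumPy sketch below illustrates one plausible realization of the ingredients named in the abstract (SVRG-style variance reduction, consensus mixing, an ℓ1 proximal step, and projection onto a local constraint set) on a decentralized least-squares instance with box constraints. This is a minimal sketch under stated assumptions, not the paper's exact update: the mixing matrix W, step-size alpha, regularization weight lam, and the optional Laplace perturbation used to hint at the differential-privacy masking are all illustrative choices.

import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1; handles the non-smooth l1 term.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def project_box(v, lo, hi):
    # Euclidean projection onto the box [lo, hi]^d, standing in for a
    # general local constraint set X_i.
    return np.clip(v, lo, hi)

def decentralized_svrg_projection(A, b, W, alpha, lam, lo, hi,
                                  n_epochs=50, inner_steps=None,
                                  dp_noise=0.0, seed=0):
    # Hypothetical sketch: n nodes, node i holds m samples (A[i], b[i]) and
    # minimizes f_i(x) = (1/m) * sum_s 0.5 * (A[i,s] @ x - b[i,s])**2, so the
    # global objective is sum_i f_i(x) + lam * ||x||_1 over the box [lo, hi]^d.
    rng = np.random.default_rng(seed)
    n, m, d = A.shape
    inner_steps = inner_steps or m
    x = np.zeros((n, d))
    for _ in range(n_epochs):
        # Snapshot step: one full local gradient evaluation per node per epoch,
        # the source of the variance-reduction savings in gradient evaluations.
        x_snap = x.copy()
        resid = np.einsum('nmd,nd->nm', A, x_snap) - b
        full_grad = np.einsum('nmd,nm->nd', A, resid) / m
        for _ in range(inner_steps):
            s = rng.integers(m, size=n)  # each node samples one constituent function
            G = np.empty_like(x)
            for i in range(n):
                a = A[i, s[i]]
                # SVRG-style variance-reduced gradient: stochastic gradient at x,
                # minus the same stochastic gradient at the snapshot, plus the
                # full snapshot gradient.
                G[i] = ((a @ x[i]) - (a @ x_snap[i])) * a + full_grad[i]
            # Optional Laplace perturbation of the exchanged states, a crude
            # illustration of the differential-privacy masking idea (assumption,
            # not the paper's mechanism).
            x_shared = x + rng.laplace(scale=dp_noise, size=x.shape) if dp_noise > 0 else x
            # Consensus mixing, constant-step gradient descent, l1 prox, and
            # projection onto the local constraint set.
            x = project_box(soft_threshold(W @ x_shared - alpha * G, alpha * lam),
                            lo, hi)
    return x

# Toy usage: 5 nodes on a complete graph with uniform mixing weights.
n, m, d = 5, 20, 10
rng = np.random.default_rng(1)
A, b = rng.normal(size=(n, m, d)), rng.normal(size=(n, m))
W = np.full((n, n), 1.0 / n)  # doubly stochastic mixing matrix
x = decentralized_svrg_projection(A, b, W, alpha=0.01, lam=0.1, lo=-1.0, hi=1.0)

Note that each node evaluates its full local gradient only once per epoch and a single constituent gradient per inner step, which is the sense in which such variance-reduced schemes keep the total number of local gradient evaluations low.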
Keywords
Composite constrained optimization, computation-efficient, differential privacy, variance reduction, decentralized stochastic algorithm