Prox-DBRO-VR: A Unified Analysis on Decentralized Byzantine-Resilient Composite Stochastic Optimization with Variance Reduction and Non-Asymptotic Convergence Rates
arXiv (2023)
Abstract
Decentralized stochastic gradient algorithms efficiently solve large-scale
finite-sum optimization problems when all agents over the network are reliable.
However, most of these algorithms are not resilient to adverse conditions such
as malfunctioning agents, software bugs, and cyber attacks. This paper addresses
a class of general composite finite-sum optimization problems over
multi-agent cyber-physical systems (CPSs) in the presence of an unknown number
of Byzantine agents. Based on the proximal mapping method, variance-reduced
(VR) techniques, and a norm-penalized approximation strategy, we propose a
decentralized Byzantine-resilient proximal-gradient algorithmic framework,
dubbed Prox-DBRO-VR, which achieves the optimization and control goal using only
local computations and communications. To asymptotically reduce the variance
generated by evaluating the local noisy stochastic gradients, we incorporate
two localized VR techniques (SAGA and LSVRG) into Prox-DBRO-VR, yielding
Prox-DBRO-SAGA and Prox-DBRO-LSVRG. By analyzing the contraction relationships
among the gradient-learning error, the robust consensus condition, and the
optimality gap in a unified theoretical framework, we show that both
Prox-DBRO-SAGA and Prox-DBRO-LSVRG, with a well-designed constant (resp.,
decaying) step-size, converge linearly (resp., sublinearly) to an error
ball around the optimal solution of the original problem under standard
assumptions. The trade-off between convergence accuracy and the number of
Byzantine agents is also characterized in both the linear and sublinear cases. In
simulations, the effectiveness and practicality of the proposed algorithms are
demonstrated by solving a decentralized sparse machine-learning problem over
multi-agent CPSs under various Byzantine attacks.
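To make the main ingredients named in the abstract concrete, the sketch below shows, for a single regular agent, a proximal-gradient update that combines a SAGA-style variance-reduced gradient estimator with a sign-based (l1-norm) penalty on disagreements with neighbor states. This is a rough illustration under assumed details, not the paper's Prox-DBRO-VR pseudocode: the names (SagaAgent, soft_threshold), the least-squares local loss, the specific form of the norm penalty, and all parameter values are illustrative assumptions.

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of tau * ||x||_1 (soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

class SagaAgent:
    """One regular agent with a SAGA-style variance-reduced gradient estimator."""

    def __init__(self, features, labels, dim, reg_l1, step_size, penalty):
        self.A = features            # local data matrix (m x dim)
        self.b = labels              # local labels (m,)
        self.x = np.zeros(dim)       # local decision variable
        self.reg_l1 = reg_l1         # weight of the non-smooth l1 term
        self.lr = step_size          # step-size
        self.penalty = penalty       # weight of the norm-penalty consensus term
        m = len(labels)
        # SAGA table: one stored gradient per local sample, plus its running average.
        self.grad_table = np.zeros((m, dim))
        self.grad_avg = np.zeros(dim)

    def _sample_grad(self, i, x):
        # Least-squares loss on sample i: 0.5 * (a_i^T x - b_i)^2 (assumed loss).
        a, b = self.A[i], self.b[i]
        return (a @ x - b) * a

    def saga_gradient(self):
        """Variance-reduced gradient estimate built from one random local sample."""
        i = np.random.randint(len(self.b))
        g_new = self._sample_grad(i, self.x)
        g_est = g_new - self.grad_table[i] + self.grad_avg
        # Refresh the stored gradient and its running average.
        self.grad_avg += (g_new - self.grad_table[i]) / len(self.b)
        self.grad_table[i] = g_new
        return g_est

    def step(self, neighbor_states):
        """One proximal-gradient update with a sign-based penalty on
        disagreements with neighbors (Byzantine neighbors included)."""
        g = self.saga_gradient()
        # The subgradient of penalty * sum_j ||x_i - x_j||_1 is a sum of sign
        # vectors, so it stays bounded even if a Byzantine neighbor reports an
        # arbitrarily large state.
        consensus = sum(np.sign(self.x - xj) for xj in neighbor_states)
        z = self.x - self.lr * (g + self.penalty * consensus)
        self.x = soft_threshold(z, self.lr * self.reg_l1)
        return self.x
```

A minimal usage pattern (again, purely illustrative) would let each agent take its step against a snapshot of the other agents' current states, e.g. `ag.step([states[j] for j in range(n) if j != k])` inside a synchronous round loop. The bounded sign-based penalty is what caps the influence of corrupted neighbor states, while an LSVRG-style variant would replace the per-sample gradient table with an occasionally refreshed full local gradient snapshot.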