A Flexible Stochastic Multi-Agent ADMM Method for Large-Scale Distributed Optimization

IEEE Access (2022)

Abstract
While stochastic alternating direction method of multipliers (ADMM) methods have shown enormous potential in distributed applications, improving their algorithmic flexibility can bring substantial benefits. In this paper, we propose a novel stochastic optimization method based on the distributed ADMM framework, called Flex-SADMM. Specifically, we incorporate variance-reduced first-order information and approximate second-order information to solve the ADMM subproblem, targeting stable convergence and a more accurate search direction. Moreover, unlike most ADMM-based methods, which require every computation node to perform an update in each iteration, we only require each node to update within a bounded iteration interval, which significantly improves flexibility. We further provide theoretical results that guarantee the convergence of Flex-SADMM for nonconvex optimization problems. These results show that our proposed method successfully overcomes the above challenges while keeping the computational complexity low. In our empirical study, we have verified the effectiveness and the improved flexibility of the proposed method.
Keywords
Distributed optimization, ADMM, variance reduction, Hessian approximation, flexibility
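To make the abstract's two main ideas concrete, the sketch below implements a plain consensus ADMM loop in which each node's subproblem is solved inexactly with SVRG-style variance-reduced gradient steps, and nodes may skip rounds as long as they update within a bounded interval `tau`. This is only an illustrative sketch under simplifying assumptions (a least-squares objective, a hypothetical skip rule, no Hessian approximation); it is not the paper's Flex-SADMM algorithm, whose precise update rules and second-order correction are given in the full text.

```python
import numpy as np

# Consensus problem: min_x sum_i f_i(x), with f_i(x) = (1/2n)||A_i x - b_i||^2.
# Each node i keeps a local copy x_i constrained to a global variable z.
rng = np.random.default_rng(0)
n_nodes, n_samples, dim = 3, 20, 5
A = [rng.normal(size=(n_samples, dim)) for _ in range(n_nodes)]
b = [rng.normal(size=n_samples) for _ in range(n_nodes)]

rho, step, T, inner, tau = 1.0, 0.05, 300, 10, 5
x = [np.zeros(dim) for _ in range(n_nodes)]  # local primal variables
y = [np.zeros(dim) for _ in range(n_nodes)]  # dual variables
z = np.zeros(dim)                            # global consensus variable
stale = [0] * n_nodes                        # rounds since each node's last update

def full_grad(i, v):
    """Full gradient of f_i at v (used as the SVRG snapshot gradient)."""
    return A[i].T @ (A[i] @ v - b[i]) / n_samples

for t in range(T):
    for i in range(n_nodes):
        stale[i] += 1
        # Flexible participation: a node may skip a round, but must update
        # at least once within any window of tau iterations.
        if rng.random() < 0.5 and stale[i] < tau:
            continue
        stale[i] = 0
        snap, g_snap = x[i].copy(), full_grad(i, x[i])  # SVRG snapshot
        for _ in range(inner):
            j = rng.integers(n_samples)
            gj = A[i][j] * (A[i][j] @ x[i] - b[i][j])
            gj_snap = A[i][j] * (A[i][j] @ snap - b[i][j])
            g = gj - gj_snap + g_snap          # variance-reduced gradient of f_i
            g += y[i] + rho * (x[i] - z)       # augmented-Lagrangian terms
            x[i] -= step * g                   # inexact x-subproblem step
    # Standard consensus ADMM z- and y-updates.
    z = np.mean([x[i] + y[i] / rho for i in range(n_nodes)], axis=0)
    for i in range(n_nodes):
        y[i] += rho * (x[i] - z)

# Compare against the closed-form solution of the pooled least-squares problem.
A_all, b_all = np.vstack(A), np.concatenate(b)
x_star = np.linalg.lstsq(A_all, b_all, rcond=None)[0]
err = np.linalg.norm(z - x_star)
```

Even with roughly half of the per-node updates skipped, the bounded staleness keeps every node's information from drifting too far, so the consensus iterate `z` still approaches the pooled least-squares solution.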