SHED: A Newton-type algorithm for federated learning based on incremental Hessian eigenvector sharing

Automatica (2024)

Abstract
There is a growing interest in the distributed optimization framework that goes under the name of Federated Learning (FL). In particular, much attention is being devoted to FL scenarios where the network is strongly heterogeneous in terms of communication resources (e.g., bandwidth) and data distribution. In these cases, communication between local machines (agents) and the central server (Master) is a major concern. In this work, we present SHED, an original communication-constrained Newton-type (NT) algorithm designed to accelerate FL in such scenarios. SHED is by design robust to non independent and identically distributed (non i.i.d.) data distributions, handles heterogeneity of agents' communication resources (CRs), requires only sporadic Hessian computations, and achieves global asymptotic super-linear convergence. This is made possible by an incremental strategy, based on the eigendecomposition of the local Hessian matrices, which exploits (possibly) outdated second-order information. SHED is thoroughly validated on real datasets by assessing (i) the number of communication rounds required for convergence, (ii) the overall amount of data transmitted, and (iii) the number of local Hessian computations. For all these metrics, SHED shows superior performance against state-of-the-art techniques like BFGS, GIANT and FedNL. (c) 2023 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
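
The abstract describes the mechanism only at a high level. The Python sketch below illustrates the general idea of eigendecomposition-based Hessian sharing with a low-rank-plus-scaled-identity reconstruction at the Master; the function names (agent_payload, master_newton_step), the fixed per-agent budget k, and the choice of the scalar rho are illustrative assumptions, not the paper's exact SHED specification.

import numpy as np

def agent_payload(H_local, grad_local, k):
    # Agent side: eigendecompose the local Hessian and share only the k
    # dominant eigenpairs plus the local gradient (a cheap low-rank summary).
    eigvals, eigvecs = np.linalg.eigh(H_local)          # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1]                   # indices sorted largest first
    top, rest = order[:k], order[k:]
    # Illustrative (assumed) choice: summarize the discarded spectrum by its
    # largest remaining eigenvalue.
    rho = eigvals[rest].max() if rest.size else 0.0
    return eigvals[top], eigvecs[:, top], rho, grad_local

def master_newton_step(payloads, d):
    # Master side: rebuild an approximate global Hessian from the shared
    # eigenpairs (discarded spectrum replaced by rho * I) and take a
    # Newton-type step with the aggregated gradient.
    H_approx, g = np.zeros((d, d)), np.zeros(d)
    for eigvals, eigvecs, rho, grad in payloads:
        H_approx += eigvecs @ np.diag(eigvals - rho) @ eigvecs.T + rho * np.eye(d)
        g += grad
    n = len(payloads)
    return -np.linalg.solve(H_approx / n, g / n)        # Newton-type update direction

# Toy usage: three simulated agents with local ridge-regression-like Hessians.
d, k = 5, 2
rng = np.random.default_rng(0)
payloads = []
for _ in range(3):
    A = rng.standard_normal((20, d))
    H = A.T @ A / 20 + 0.1 * np.eye(d)                  # local Hessian
    g = rng.standard_normal(d)                          # local gradient at the current model
    payloads.append(agent_payload(H, g, k))
step = master_newton_step(payloads, d)                  # approximate Newton direction

In the paper's incremental scheme, agents add eigenvectors over successive rounds according to their communication budgets and may reuse outdated eigenpairs; the sketch keeps k fixed and recomputes everything each round purely for brevity.
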
Keywords
Newton method, Distributed optimization, Federated learning, Super-linear convergence, Heterogeneous networks, Non i.i.d. data