Federated Learning in the Presence of Adversarial Client Unavailability
CoRR (2023)
Abstract
Federated learning is a decentralized machine learning framework that enables
collaborative model training without revealing raw data. Due to diverse
hardware and software limitations, a client may not always be available to serve
computation requests from the parameter server. An emerging line of research is
devoted to tackling arbitrary client unavailability. However, existing work
still imposes structural assumptions on the unavailability patterns, impeding
their applicability in challenging scenarios wherein the unavailability
patterns are beyond the control of the parameter server. Moreover, in harsh
environments like battlefields, adversaries can selectively and adaptively
silence specific clients. In this paper, we relax the structural assumptions
and consider adversarial client unavailability. To quantify the degrees of
client unavailability, we use the notion of ϵ-adversary dropout
fraction. We show that simple variants of FedAvg or FedProx, albeit completely
agnostic to ϵ, converge to an estimation error on the order of
ϵ (G^2 + σ^2) for non-convex global objectives and ϵ(G^2
+ σ^2)/μ^2 for μ strongly convex global objectives, where G is a
heterogeneity parameter and σ^2 is the noise level. Conversely, we prove
that any algorithm has to suffer an estimation error of at least ϵ (G^2
+ σ^2)/8 and ϵ(G^2 + σ^2)/(8μ^2) for non-convex global
objectives and μ-strongly convex global objectives. Furthermore, the
convergence speeds of the FedAvg or FedProx variants are O(1/√(T)) for
non-convex objectives and O(1/T) for strongly-convex objectives, both of
which are the best possible for any first-order method that only has access to
noisy gradients.
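To make the setting concrete, the following is a minimal toy sketch of FedAvg-style aggregation under client unavailability. Everything here is illustrative and not the paper's actual algorithm: the scalar quadratic client losses, the random (rather than adaptive) adversary, and the function `fedavg_with_dropout` are all hypothetical choices for the sake of a runnable example.

```python
import random

# Toy sketch: each client i holds a scalar quadratic loss
# f_i(w) = 0.5 * (w - c_i)^2, whose gradient is (w - c_i).
# An adversary may silence up to an eps fraction of clients per round;
# the server simply averages the one-step updates of clients that respond.
def fedavg_with_dropout(centers, eps, rounds=200, lr=0.1, seed=0):
    rng = random.Random(seed)
    n = len(centers)
    max_silenced = int(eps * n)  # eps-adversary dropout fraction (illustrative)
    w = 0.0
    for _ in range(rounds):
        # For illustration the adversary silences clients at random;
        # the paper considers selective and adaptive silencing.
        silenced = set(rng.sample(range(n), max_silenced))
        available = [i for i in range(n) if i not in silenced]
        # Each available client takes one local gradient step and reports it.
        updates = [w - lr * (w - centers[i]) for i in available]
        w = sum(updates) / len(updates)
    return w

centers = [0.0, 1.0, 2.0, 3.0]
w_full = fedavg_with_dropout(centers, eps=0.0)   # all clients respond
w_drop = fedavg_with_dropout(centers, eps=0.25)  # one client may be silenced
print(round(w_full, 3))  # prints 1.5, the mean of the client optima
```

With `eps = 0` the iterate converges to the mean of the client optima; with a positive dropout fraction the averaged update is biased toward the available clients, which is the estimation-error effect (scaling with the heterogeneity parameter G) that the paper's bounds quantify.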
Keywords
federated learning, adversarial client unavailability