FedSPU: Personalized Federated Learning for Resource-constrained Devices with Stochastic Parameter Update
arXiv (2024)
Abstract
Personalized Federated Learning (PFL) is widely employed in IoT applications
to handle high-volume, non-iid client data while ensuring data privacy.
However, heterogeneous edge devices owned by clients may impose varying degrees
of resource constraints, causing computation and communication bottlenecks for
PFL. Federated Dropout has emerged as a popular strategy to address this
challenge, wherein only a subset of the global model, i.e. a
sub-model, is trained on a client's device, thereby reducing
computation and communication overheads. Nevertheless, the dropout-based
model-pruning strategy may introduce bias, particularly towards non-iid local
data. When biased sub-models absorb highly divergent parameters from other
clients, performance degradation becomes inevitable. In response, we propose
federated learning with stochastic parameter update (FedSPU). Unlike dropout
that tailors the global model to small-size local sub-models, FedSPU maintains
the full model architecture on each device but randomly freezes a certain
percentage of neurons in the local model during training while updating the
remaining neurons. This approach ensures that a portion of the local model
remains personalized, thereby enhancing the model's robustness against biased
parameters from other clients. Experimental results demonstrate that FedSPU
outperforms federated dropout by 7.57% on average in terms of accuracy.
Furthermore, an introduced early stopping scheme leads to a significant
reduction of the training time by 24.8% to 70.4% while maintaining high
accuracy.
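The core idea described above can be sketched in a few lines: keep the full local model, randomly freeze a fraction of its neurons for the round, and apply the gradient step only to the active ones. This is an illustrative sketch, not the paper's implementation; the function name, the row-per-neuron weight layout, and the learning rate are assumptions made for the example.

```python
import numpy as np

def spu_update(weights, grads, freeze_ratio, rng, lr=0.1):
    """One stochastic-parameter-update step (illustrative sketch).

    Each row of `weights` stands for one neuron's parameters. A randomly
    chosen fraction (`freeze_ratio`) of neurons is frozen for this round
    and keeps its current (personalized) values; the remaining neurons
    take a plain gradient step. The learning rate is a placeholder.
    """
    n_neurons = weights.shape[0]
    n_frozen = int(freeze_ratio * n_neurons)
    frozen = rng.choice(n_neurons, size=n_frozen, replace=False)
    active = np.ones(n_neurons, dtype=bool)
    active[frozen] = False  # False = frozen this round
    new_w = weights.copy()
    new_w[active] -= lr * grads[active]  # only active neurons move
    return new_w, active

rng = np.random.default_rng(0)
w = np.zeros((10, 4))          # 10 neurons, 4 weights each
g = np.ones((10, 4))           # dummy gradients
w2, active = spu_update(w, g, freeze_ratio=0.3, rng=rng)
```

Because frozen neurons skip both the local update and (in the paper's setting) the exchange with the server, they retain their personalized values and are not overwritten by divergent parameters from other clients.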