Enhancing One-Shot Federated Learning Through Data and Ensemble Co-Boosting
ICLR 2024 (2024)
Abstract
One-shot Federated Learning (OFL) has become a promising learning paradigm,
enabling the training of a global server model via a single communication
round. In OFL, the server model is aggregated by distilling knowledge from all
client models (the ensemble), which are also responsible for synthesizing
samples for distillation. Recent work shows that the performance of the server
model is intrinsically tied to the quality of both the synthesized data and the
ensemble. To promote OFL, we introduce a novel
framework, Co-Boosting, in which synthesized data and the ensemble model
mutually enhance each other progressively. Specifically, Co-Boosting leverages
the current ensemble model to synthesize higher-quality samples in an
adversarial manner. These hard samples are then used to improve the ensemble by
adjusting the ensembling weight of each client model. Consequently, Co-Boosting
progressively obtains high-quality synthesized data and
ensemble models. Extensive experiments demonstrate that Co-Boosting can
substantially outperform existing baselines under various settings. Moreover,
Co-Boosting eliminates the need for adjustments to the client's local training,
requires no additional data or model transmission, and allows client models to
have heterogeneous architectures.
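The alternating procedure described above can be sketched in a toy form. The following is a conceptual illustration only, not the paper's implementation: clients are plain linear classifiers, "hard" samples are produced by gradient ascent on the ensemble's cross-entropy with respect to the inputs, and ensembling weights are re-derived from per-client losses on those hard samples. All function names and modeling choices here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: K client models, each a linear classifier W_k of shape (D, C).
K, D, C, N = 3, 5, 4, 32
clients = [rng.normal(size=(D, C)) for _ in range(K)]

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(logits, y):
    p = softmax(logits)
    return -np.log(p[np.arange(len(y)), y] + 1e-12).mean()

def synthesize_hard(x, y, weights, steps=10, lr=0.5):
    # Gradient *ascent* on the weighted-ensemble loss w.r.t. the inputs,
    # so the samples become harder for the current ensemble.
    W_ens = sum(w * Wk for w, Wk in zip(weights, clients))
    onehot = np.eye(C)[y]
    for _ in range(steps):
        p = softmax(x @ W_ens)
        grad_x = (p - onehot) @ W_ens.T / len(y)  # d(CE)/dx for a linear model
        x = x + lr * grad_x                       # ascend to increase loss
    return x

def reweight(x, y, tau=1.0):
    # Clients that handle the hard samples well receive larger ensemble weights.
    losses = np.array([cross_entropy(x @ Wk, y) for Wk in clients])
    w = np.exp(-losses / tau)
    return w / w.sum()

weights = np.full(K, 1.0 / K)
x = rng.normal(size=(N, D))
y = rng.integers(0, C, size=N)

for _ in range(3):  # a few co-boosting rounds
    x = synthesize_hard(x, y, weights)
    weights = reweight(x, y)
```

In the actual framework the hardened samples would also drive knowledge distillation into the server model; that step is omitted here to keep the sketch focused on the mutual data/ensemble boosting loop.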
Keywords
federated learning