Navigating Heterogeneity and Privacy in One-Shot Federated Learning with Diffusion Models
arXiv (2024)

Abstract
Federated learning (FL) enables multiple clients to train models collectively
while preserving data privacy. However, FL faces challenges in terms of
communication cost and data heterogeneity. One-shot federated learning has
emerged as a solution by reducing communication rounds, improving efficiency,
and providing better security against eavesdropping attacks. Nevertheless, data
heterogeneity remains a significant challenge, impacting performance. This work
explores the effectiveness of diffusion models in one-shot FL, demonstrating
their applicability in addressing data heterogeneity and improving FL
performance. Additionally, we investigate the utility of our diffusion model
approach, FedDiff, compared to other one-shot FL methods under differential
privacy (DP). Furthermore, to improve generated sample quality under DP
settings, we propose a pragmatic Fourier Magnitude Filtering (FMF) method,
enhancing the effectiveness of generated data for global model training.
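The abstract does not detail how Fourier Magnitude Filtering works, but the idea of scoring generated samples by their Fourier magnitude spectrum can be sketched as follows. This is a hypothetical illustration, not the paper's actual method: the low-frequency radius, the scoring criterion (DP noise inflating high-frequency energy), and the threshold are all assumptions.

```python
import numpy as np

def fourier_magnitude_score(image: np.ndarray) -> float:
    """Score an image by the share of its Fourier magnitude in low frequencies.

    Hypothetical criterion: DP noise tends to inflate high-frequency energy,
    so heavily corrupted samples should get a lower low-frequency share.
    """
    spectrum = np.fft.fftshift(np.fft.fft2(image))  # center the zero frequency
    magnitude = np.abs(spectrum)
    h, w = magnitude.shape
    cy, cx = h // 2, w // 2
    r = min(h, w) // 4  # low-frequency radius (assumed hyperparameter)
    y, x = np.ogrid[:h, :w]
    low_mask = (y - cy) ** 2 + (x - cx) ** 2 <= r ** 2
    total = magnitude.sum()
    return float(magnitude[low_mask].sum() / total) if total > 0 else 0.0

def filter_samples(samples: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Keep generated samples whose low-frequency magnitude share exceeds the threshold."""
    scores = np.array([fourier_magnitude_score(s) for s in samples])
    return samples[scores > threshold]
```

Under this sketch, a noisy sample spreads its spectral energy across all frequencies and is filtered out, while a clean sample concentrates energy near DC and is kept for global model training.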