FedVAE: Communication-Efficient Federated Learning With Non-IID Private Data

IEEE SYSTEMS JOURNAL (2023)

Abstract
Federated learning (FL), which collaboratively trains a shared global model without exchanging or centralizing local data, provides a promising solution for privacy preservation. However, it faces two main challenges: high communication cost, and low model quality caused by imbalanced or non-independent and identically distributed (non-IID) data. In this article, we propose FedVAE, an FL framework based on the variational autoencoder (VAE) for remote patient monitoring. FedVAE contains two lightweight VAEs: one projects data onto a lower-dimensional space with a similar distribution, alleviating the excessive communication overhead and slow convergence caused by non-IID data; the other avoids training bias due to imbalanced data distribution by generating minority-class samples. Overall, the proposed FedVAE improves the performance of FL models while consuming only a small amount of communication bandwidth. Experimental results show that FedVAE achieves an area under the curve (AUC) of 0.9937, which even exceeds that of a traditional centralized model (0.9931). In addition, fine-tuning the global model for personalization raises the average AUC to 0.9947. Moreover, compared with vanilla FL, FedVAE improves AUC by 0.87% while reducing communication traffic by at least 95%.
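The two-VAE design described above can be illustrated with a minimal sketch. The block below is a hypothetical PyTorch implementation of a lightweight VAE; the layer sizes, latent dimension, and loss weighting are illustrative assumptions, as the abstract does not specify the paper's actual architecture. The encoder's low-dimensional latent codes are what would reduce communication in the first VAE, while sampling the prior and decoding in the second VAE would yield synthetic minority-class samples.

```python
# Minimal sketch of a lightweight VAE, assuming a PyTorch implementation.
# All dimensions below are illustrative assumptions, not the paper's values.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LightweightVAE(nn.Module):
    def __init__(self, input_dim=128, hidden_dim=64, latent_dim=16):
        super().__init__()
        # Encoder: maps inputs to the parameters of a diagonal Gaussian
        # over a low-dimensional latent space (the compressed representation).
        self.enc = nn.Linear(input_dim, hidden_dim)
        self.mu = nn.Linear(hidden_dim, latent_dim)
        self.logvar = nn.Linear(hidden_dim, latent_dim)
        # Decoder: reconstructs inputs from latent codes. In an
        # augmentation VAE, decoding samples from the prior produces
        # synthetic minority-class examples.
        self.dec = nn.Sequential(
            nn.Linear(latent_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, input_dim),
        )

    def encode(self, x):
        h = F.relu(self.enc(x))
        return self.mu(h), self.logvar(h)

    def reparameterize(self, mu, logvar):
        # Standard reparameterization trick: z = mu + sigma * eps.
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = self.reparameterize(mu, logvar)
        return self.dec(z), mu, logvar

def vae_loss(x, recon, mu, logvar):
    # Reconstruction term plus KL divergence to the standard normal prior.
    recon_loss = F.mse_loss(recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_loss + kl
```

Under this sketch, a client would transmit latent codes (or a model trained on them) rather than raw high-dimensional data, which is one plausible way the reported reduction in communication traffic could be realized.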
Keywords
Anomaly detection,federated learning (FL),healthcare,variational autoencoder (VAE)