Structured Bayesian Federated Learning for Green AI: A Decentralized Model Compression Using Turbo-VBI-Based Approach

IEEE Internet of Things Journal (2024)

Abstract
Although deep neural networks (DNNs) have been remarkably successful in numerous areas, their performance is compromised in federated learning (FL) scenarios because of the large model size. A large model induces heavy communication overhead during federated training, and also imposes an infeasible storage and computation burden on the clients during inference. To address these issues, we investigate structured model compression in FL to construct sparse models with a regular structure, so that they require significantly less communication, storage, and computation resources. We do this by proposing a three-layer hierarchical prior, which promotes a common regular sparse structure in the local models. We design a decentralized Turbo variational Bayesian inference (D-Turbo-VBI) algorithm to solve the resulting federated training problem. With the common regular sparse structure, both upstream and downstream communication overhead can be reduced, and the final model also has a regular sparse structure, which requires significantly less local storage and computation. Simulation results demonstrate that our proposed algorithm efficiently reduces the communication overhead during federated training, and that the resulting model achieves a significantly lower sparsity rate and inference time than the baselines while maintaining competitive accuracy.
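To see why the *regular* structure matters beyond raw sparsity, here is a minimal numerical sketch (not the authors' code, and independent of the Turbo-VBI prior) contrasting unstructured pruning with row-structured pruning at the same 50% sparsity level. With a row-structured mask, the surviving rows can be stored compactly and multiplied with a dense kernel, which is what saves storage and inference time in practice:

```python
import numpy as np

# Hypothetical illustration: structured vs. unstructured sparsity at 50%.
rng = np.random.default_rng(0)
W = rng.standard_normal((8, 4))  # toy weight matrix of a linear layer
x = rng.standard_normal(4)

# Unstructured pruning: zero the 50% smallest-magnitude entries individually.
# The zeros are scattered, so the full 8x4 matrix must still be stored/indexed.
thresh = np.quantile(np.abs(W), 0.5)
W_unstructured = np.where(np.abs(W) >= thresh, W, 0.0)

# Structured (row) pruning: keep the 4 rows with the largest L2 norm.
# Only the surviving rows need to be stored or communicated.
row_norms = np.linalg.norm(W, axis=1)
keep = np.sort(np.argsort(row_norms)[-4:])  # indices of kept rows
W_rows = W[keep]                            # compact 4x4 storage

# Inference touches only the kept rows: half the FLOPs of the dense W @ x,
# using an ordinary dense matrix-vector product (hardware-friendly).
y_struct = W_rows @ x
```

Both masks remove the same number of parameters, but only the structured one shrinks the actual tensor shapes, which is why a common regular sparse structure across clients reduces both communication and on-device computation.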
Keywords
Communication efficiency,deep learning,federated learning (FL),model compression,structured sparsity