Bayesian Federated Model Compression for Communication and Computation Efficiency
arXiv (2024)
Abstract
In this paper, we investigate Bayesian model compression in federated
learning (FL) to construct sparse models that can achieve both communication
and computation efficiency. We propose a decentralized Turbo variational
Bayesian inference (D-Turbo-VBI) FL framework, in which we first design a
hierarchical sparse prior to promote a clustered sparse structure in the weight
matrix. Then, by carefully integrating message passing and VBI with a
decentralized turbo framework, we propose the D-Turbo-VBI algorithm which can
(i) reduce both upstream and downstream communication overhead during federated
training, and (ii) reduce the computational complexity during local inference.
Additionally, we establish the convergence property of the proposed
D-Turbo-VBI algorithm. Simulation results show the significant gain of our
proposed algorithm over the baselines in reducing both the communication
overhead during federated training and the computational complexity of the
final model.
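To illustrate the idea of a clustered sparse structure, the following is a minimal sketch of a two-level spike-and-slab prior in NumPy. This is an assumption for illustration only, not the paper's exact hierarchical prior: a shared Bernoulli support variable per cluster ties the weights in that cluster together, so zeros appear in contiguous groups rather than scattered individually.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes and hyperparameters (not from the paper).
n_clusters, cluster_size = 8, 4   # weight matrix split into 8 clusters of 4
p_active = 0.3                    # prior probability that a cluster is active
slab_var, spike_var = 1.0, 1e-6   # variances for active / inactive clusters

# Cluster-level support: one Bernoulli variable shared by a whole cluster.
support = rng.random(n_clusters) < p_active

# Each weight's variance is inherited from its cluster's support variable.
var = np.where(support, slab_var, spike_var)
weights = rng.normal(0.0, np.sqrt(np.repeat(var, cluster_size)))

# Inactive clusters yield (near-)zero weights: structured, clustered sparsity.
print(support)
print(np.round(weights.reshape(n_clusters, cluster_size), 3))
```

Clustered sparsity of this kind is what enables the communication and computation savings the abstract describes: entire inactive clusters can be skipped during both transmission and local inference.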