Lossy Gradient Compression: How Much Accuracy Can One Bit Buy?

arXiv (2022)

Abstract
In federated learning (FL), a global model is trained at a Parameter Server (PS) by aggregating model updates obtained from multiple remote learners. Critically, the communication from the remote learners to the PS is limited by the available transmission power, while the transmission from the PS to the remote learners can be considered unbounded. This gives rise to a distributed learning scenario in which the updates from the remote learners have to be compressed so as to meet communication rate constraints on the uplink transmission toward the PS. For this problem, one would like to compress the model updates so as to minimize the resulting loss in accuracy. In this paper, we take a rate-distortion approach to answer this question for the distributed training of a deep neural network (DNN). In particular, we define a measure of compression performance, the \emph{per-bit accuracy}, which captures the ultimate model accuracy that each bit of communication brings to the centralized model. To maximize the per-bit accuracy, we model the gradient updates at the remote learners as following a generalized normal distribution. Under this assumption on the model update distribution, we propose a class of distortion measures for the design of quantizers for compressing the model updates. We argue that this family of distortion measures, which we refer to as the "$M$-magnitude weighted $L_2$" norm, captures practitioners' intuition in the choice of a gradient compressor. Numerical simulations are provided to validate the proposed approach.
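A minimal sketch of the pipeline the abstract describes: fit a generalized normal distribution to the gradient entries, then pick a scalar quantizer by minimizing a magnitude-weighted squared error. The abstract only names the "$M$-magnitude weighted $L_2$" family; the specific weighting w(g) = |g|^M, the uniform quantizer, and the grid search over the clipping range below are illustrative assumptions, not the authors' implementation.

```python
# Sketch (not the paper's code): generalized-normal gradient model + a
# quantizer chosen under an assumed magnitude-weighted L2 distortion.
import numpy as np
from scipy.stats import gennorm


def fit_gennorm(grad):
    """Fit a zero-mean generalized normal (shape beta, scale alpha) to gradient entries."""
    beta, _, alpha = gennorm.fit(grad, floc=0.0)
    return beta, alpha


def weighted_l2_distortion(grad, quantized, M=1.0):
    """Assumed distortion: squared error weighted by |g|^M, so large-magnitude
    entries (the ones practitioners tend to preserve) are penalized more."""
    w = np.abs(grad) ** M
    return np.mean(w * (grad - quantized) ** 2)


def uniform_quantize(grad, n_bits, clip):
    """Uniform mid-rise quantizer with 2**n_bits levels on [-clip, clip]."""
    levels = 2 ** n_bits
    step = 2 * clip / levels
    q = np.clip(grad, -clip, clip - 1e-12)
    return (np.floor(q / step) + 0.5) * step


def best_clip(grad, n_bits, M=1.0, candidates=None):
    """Grid-search the clipping range that minimizes the assumed weighted distortion."""
    if candidates is None:
        candidates = np.linspace(0.5, 4.0, 32) * grad.std()
    dists = [weighted_l2_distortion(grad, uniform_quantize(grad, n_bits, c), M)
             for c in candidates]
    return candidates[int(np.argmin(dists))]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic "gradient": shape beta < 2 gives the heavy tails often observed in practice.
    grad = gennorm.rvs(beta=1.0, scale=0.05, size=100_000, random_state=rng)
    beta, alpha = fit_gennorm(grad)
    clip = best_clip(grad, n_bits=2, M=1.0)
    q = uniform_quantize(grad, n_bits=2, clip=clip)
    print(f"fitted shape={beta:.2f}, scale={alpha:.4f}, chosen clip={clip:.4f}")
    print(f"weighted distortion={weighted_l2_distortion(grad, q, M=1.0):.3e}")
```

Setting M = 0 in this sketch recovers the plain mean-squared error, while larger M concentrates the distortion budget on large-magnitude gradient entries, which is one way to read the intuition the abstract attributes to practitioners.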
Keywords
gradient compression, per-bit accuracy