FedComp: A Federated Learning Compression Framework for Resource-Constrained Edge Computing Devices

IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS (2024)

Abstract
Top-K sparsification-based compression techniques are popular and powerful for reducing communication costs in federated learning (FL). However, existing Top-K sparsification-based compression methods suffer from two critical issues that severely hinder their implementation, particularly in FL settings that involve a vast number of resource-constrained devices: 1) the low compressibility of the Top-K parameters' indexes significantly limits the overall compression ratio (CR) and 2) the residual accumulation techniques used to maintain model quality consume substantial memory. To address these issues, we propose a novel FL compression framework, named FedComp, for deep neural networks (DNNs). FedComp achieves a higher communication CR while maintaining comparable model quality at low memory cost. Specifically, FedComp incorporates three key components: 1) a tensor-wise index-sharing mechanism that greatly reduces the index proportion by sharing one index among multiple elements of a tensor; 2) a fine-grained parameter-packing strategy that reduces the transmission of duplicate values and indexes by exploiting their properties, further reducing the overall communication cost; and 3) a residual compressor that significantly reduces memory cost by enhancing the compressibility of floating-point residuals and achieving a high CR with a lossless encoding scheme. Experiments on mainstream machine learning (ML) tasks with different DNN structures and datasets demonstrate that FedComp outperforms state-of-the-art FL compression algorithms, achieving a higher communication CR of up to 28.5x while reducing the memory cost of the local residual model by 21.04x–50.59x, without degrading FL training performance.
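For context, the baseline technique the abstract discusses can be sketched as follows: Top-K sparsification transmits only the k largest-magnitude gradient entries (values plus indexes), and residual accumulation carries the untransmitted mass into the next round. This is a minimal illustrative sketch, not FedComp itself; the function name, NumPy representation, and signature are assumptions for illustration.

```python
import numpy as np

def topk_sparsify(grad, k, residual):
    """Top-K sparsification with residual accumulation (error feedback).

    Illustrative sketch: select the k largest-magnitude entries of the
    residual-corrected gradient for transmission; the dropped entries
    become the next round's residual. Note that both the index vector
    and the dense float residual contribute the overheads the paper
    targets.
    """
    corrected = grad + residual
    flat = corrected.ravel()
    # Indexes of the k largest-magnitude entries (unordered).
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    values = flat[idx]
    # New residual: everything that was not transmitted this round.
    new_residual = corrected.copy()
    new_residual.ravel()[idx] = 0.0
    return idx, values, new_residual
```

In this formulation, each transmitted value drags an index along with it, and the residual is a dense tensor the size of the model kept on every device, which is why the abstract emphasizes index compressibility and residual memory cost.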
Keywords
Indexes, Costs, Training, Servers, Quantization (signal), Encoding, Integrated circuit modeling, Communication compression, deep neural network (DNN), federated learning (FL), memory overhead, top-K sparsification