Communication-Efficient and Attack-Resistant Federated Edge Learning With Dataset Distillation

IEEE Transactions on Cloud Computing (2023)

Abstract
Federated Edge Learning considers a large number of distributed edge nodes that collectively train a global gradient-based model for edge computing in the Artificial Intelligence of Things (AIoT), which significantly promotes the development of cloud computing. However, current federated learning algorithms require tens of communication rounds to transmit unwieldy model weights under ideal circumstances, and hundreds when data is poorly distributed. This drawback directly results in expensive communication overhead for edge devices. Inspired by recent work on dataset distillation and distributed one-shot learning, we propose Distilled One-Shot Federated Learning (DOSFL), which significantly reduces the communication cost while achieving comparable performance. In just one round, each client distills its private dataset and sends the synthetic data to the server, which then trains a global model on the collected synthetic data. The distilled data look like noise and are only useful to the specific model weights they were distilled for, i.e., they become useless once the model is updated. With this weight-less and gradient-less design, the total communication cost of DOSFL is up to three orders of magnitude less than that of FedAvg, while preserving up to 99% of the performance of centralized training on both vision and language tasks with different models, including CNNs, LSTMs, and Transformers. We demonstrate that an eavesdropping attacker cannot train a good model from the leaked distilled data without knowing the initial model weights. DOSFL thus serves as an inexpensive method to quickly converge to a performant pre-trained model with less than 0.1% of the communication cost of traditional methods.
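The one-shot flow described in the abstract (broadcast a shared initialization, distill locally, upload only synthetic data, train the global model once) can be sketched as below. This is a minimal illustration under assumptions not stated in the abstract: a toy linear classifier, a single unrolled inner gradient step with a fixed inner learning rate, fixed synthetic labels, and hypothetical names such as distill_client_data. The paper's actual distillation procedure (e.g., learned learning rates, soft labels, multi-step unrolling) is more elaborate.

```python
# Hedged sketch of a DOSFL-style one-shot round; all helper names and
# hyperparameters here are illustrative assumptions, not the paper's code.
import torch
import torch.nn.functional as F

def forward(w, b, x):
    # Toy linear classifier used purely for illustration.
    return x @ w + b

def distill_client_data(theta0, real_x, real_y, n_syn=10, steps=300, inner_lr=0.1):
    """Learn a tiny synthetic set such that one gradient step on it,
    starting from the shared initialization theta0, fits the client's real data."""
    w0 = theta0[0].clone().requires_grad_(True)   # local differentiable copies
    b0 = theta0[1].clone().requires_grad_(True)
    d, c = w0.shape
    syn_x = torch.randn(n_syn, d, requires_grad=True)
    syn_y = torch.arange(n_syn) % c               # fixed, balanced synthetic labels
    opt = torch.optim.Adam([syn_x], lr=0.01)
    for _ in range(steps):
        # Inner step: update the model on the synthetic data, keeping the graph
        # so the update can be backpropagated into syn_x.
        inner_loss = F.cross_entropy(forward(w0, b0, syn_x), syn_y)
        gw, gb = torch.autograd.grad(inner_loss, (w0, b0), create_graph=True)
        w1, b1 = w0 - inner_lr * gw, b0 - inner_lr * gb
        # Outer step: the updated model should fit the client's real data.
        outer_loss = F.cross_entropy(forward(w1, b1, real_x), real_y)
        opt.zero_grad()
        outer_loss.backward()
        opt.step()
    return syn_x.detach(), syn_y

# --- one-shot round ---------------------------------------------------------
torch.manual_seed(0)
d, c = 20, 2
# Shared initialization broadcast by the server; the distilled data are tied to it.
w0, b0 = torch.randn(d, c) * 0.01, torch.zeros(c)

# Each client distills its private data and uploads only the synthetic set.
uploads = []
for _ in range(3):                                # 3 simulated clients with toy data
    real_x = torch.randn(100, d)
    real_y = (real_x[:, 0] > 0).long()
    uploads.append(distill_client_data((w0, b0), real_x, real_y))

# Server trains the global model from the same initialization on the union
# of the received synthetic data.
w, b = w0.clone().requires_grad_(True), b0.clone().requires_grad_(True)
opt = torch.optim.SGD([w, b], lr=0.1)
for _ in range(50):
    for syn_x, syn_y in uploads:
        loss = F.cross_entropy(forward(w, b, syn_x), syn_y)
        opt.zero_grad()
        loss.backward()
        opt.step()
```

Because the synthetic examples are optimized against a particular initialization, they are only meaningful when combined with that initialization, which is the basis of the abstract's claim that an eavesdropper who intercepts the distilled data but not the initial weights cannot train a good model from it.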
Keywords
dataset distillation, edge learning, communication-efficient, attack-resistant