Communication-Efficient Federated Learning for Wireless Edge Intelligence in IoT

IEEE Internet of Things Journal (2020)

Abstract
The rapidly expanding number of Internet of Things (IoT) devices is generating huge quantities of data, but public concern over data privacy means users are apprehensive about sending their data to a central server for machine learning (ML) purposes. The easily reconfigured behavior of edge infrastructure that software-defined networking (SDN) provides makes it possible to collate IoT data at edge servers and gateways, where federated learning (FL) can be performed: building a central model without uploading data to the server. FedAvg is a well-studied FL algorithm; however, it requires a large number of rounds to converge on non-independent identically distributed (non-IID) client data sets and incurs high communication costs per round. We propose adapting FedAvg to use a distributed form of Adam optimization, greatly reducing the number of rounds to convergence, along with novel compression techniques, to produce Communication-Efficient FedAvg (CE-FedAvg). We perform extensive experiments with the MNIST/CIFAR-10 data sets, IID/non-IID client data, and varying numbers of clients, client participation rates, and compression rates. These show that CE-FedAvg can converge to a target accuracy in up to $\mathbf {6\times }$ fewer rounds than similarly compressed FedAvg, while uploading up to $\mathbf {3\times }$ less data, and is more robust to aggressive compression. Experiments on an edge-computing-like testbed using Raspberry Pi clients also show that CE-FedAvg reaches a target accuracy in up to $\mathbf {1.7\times }$ less real time than FedAvg.
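The server-side aggregation step of FedAvg that the abstract builds on can be sketched as a weighted average of client model weights, where each client's contribution is proportional to its local data set size. This is a minimal illustration, not the paper's CE-FedAvg implementation (which adds distributed Adam and compression); the function and variable names here are hypothetical.

```python
import numpy as np

def fedavg_aggregate(client_weights, client_sizes):
    """One FedAvg server aggregation step (sketch, names hypothetical).

    client_weights: list of per-client models, each a list of np.ndarray layers.
    client_sizes: number of local training samples held by each client.
    Returns the new global model: the data-size-weighted average per layer.
    """
    total = sum(client_sizes)
    num_layers = len(client_weights[0])
    aggregated = []
    for layer in range(num_layers):
        # Weight each client's layer by its share of the total data.
        layer_avg = sum(
            (n / total) * weights[layer]
            for weights, n in zip(client_weights, client_sizes)
        )
        aggregated.append(layer_avg)
    return aggregated

# Toy example: two clients, a single one-layer "model" each.
clients = [[np.array([1.0, 2.0])], [np.array([3.0, 4.0])]]
sizes = [1, 3]  # client 2 holds 3x as much data, so it dominates
global_w = fedavg_aggregate(clients, sizes)
# global_w[0] is (1*[1,2] + 3*[3,4]) / 4 = [2.5, 3.5]
```

In the non-IID setting the abstract describes, this plain averaging converges slowly, which is what motivates replacing the clients' plain SGD with a distributed Adam variant in CE-FedAvg.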
Keywords
Servers,Computational modeling,Internet of Things,Training,Data models,Convergence,Logic gates