Communication-efficient hierarchical federated learning for IoT heterogeneous systems with imbalanced data

Future Generation Computer Systems (2022)

Cited by 31 | Viewed 40
Abstract
Federated Learning (FL) is a distributed learning methodology that allows multiple nodes to cooperatively train a deep learning model without sharing their local data. It is a promising solution for telemonitoring systems that demand intensive data collection from different locations, for the detection, classification, and prediction of future events, while maintaining strict privacy constraints. Due to privacy concerns and critical communication bottlenecks, it can become impractical to send the FL-updated models to a centralized server. Thus, this paper studies the potential of hierarchical FL in Internet of Things (IoT) heterogeneous systems. In particular, we propose an optimized solution for user assignment and resource allocation over a hierarchical FL architecture for IoT heterogeneous systems. This work focuses on a generic class of machine learning models trained with gradient-descent-based schemes, while accounting for the practical constraint of non-uniformly distributed data across users. We evaluate the proposed system on two real-world datasets and show that it outperforms state-of-the-art FL solutions. Specifically, our numerical results highlight the effectiveness of our approach and its ability to provide a 4–6% increase in classification accuracy relative to hierarchical FL schemes that use distance-based user assignment. Furthermore, the proposed approach significantly accelerates FL training and reduces communication overhead, yielding a 75–85% reduction in communication rounds between edge nodes and the centralized server for the same model accuracy.
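The hierarchical aggregation the abstract describes (clients report to edge nodes, edge nodes report to a central cloud, with sample-count weighting to handle imbalanced data) can be sketched as follows. This is a minimal illustration, not the paper's method: the function names, the flat-vector model representation, and the fixed `assignment` list are all assumptions; the paper's optimized user-assignment and resource-allocation scheme is not reproduced here.

```python
import numpy as np

def weighted_average(models, sample_counts):
    """FedAvg-style aggregation: average parameter vectors weighted by
    the number of local samples each contributor trained on."""
    w = np.asarray(sample_counts, dtype=float)
    return np.average(np.stack(models), axis=0, weights=w / w.sum())

def hierarchical_round(client_models, client_sizes, assignment, num_edges):
    """One communication round of two-level hierarchical aggregation.

    assignment[i] gives the edge node that client i reports to
    (in the paper this mapping is optimized; here it is just an input).
    Each edge node aggregates its own clients, then the cloud aggregates
    the edge models, weighted by the total samples behind each edge.
    """
    edge_models, edge_sizes = [], []
    for e in range(num_edges):
        members = [i for i, a in enumerate(assignment) if a == e]
        if not members:
            continue  # edge node with no assigned clients contributes nothing
        edge_models.append(weighted_average(
            [client_models[i] for i in members],
            [client_sizes[i] for i in members]))
        edge_sizes.append(sum(client_sizes[i] for i in members))
    # cloud-level aggregation over edge models
    return weighted_average(edge_models, edge_sizes)
```

A useful sanity check on this weighting: with sample-count weights at both levels, the hierarchical result equals the flat (single-server) weighted average of all client models, so the hierarchy changes the communication pattern but not the aggregated model.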
Keywords
Distributed deep learning, Edge computing, Non-IID data, Internet of Things (IoT), Intelligent health systems