Towards addressing aggregation deviation for model training in resource-scarce edge environment

JOURNAL OF KING SAUD UNIVERSITY-COMPUTER AND INFORMATION SCIENCES(2024)

Abstract
Federated Learning (FL) can train models in an edge environment without transmitting raw data, but its performance is still constrained by data heterogeneity. To address data heterogeneity and resource scarcity on edge devices, we propose Federated Learning via Dynamic Aggregation (FedDA), which mitigates the influence of data heterogeneity and improves model accuracy. FedDA updates the impact of each local model on the global model in real time at different training stages, and it adjusts the number of local epochs in each round so that devices do not drop out while still producing more accurate local models. Its core module is the model impact factor (MIF), which determines the aggregation weights and thereby addresses the problem that fixed weights cause the aggregated model to extract local information improperly. We conducted several experiments to evaluate convergence speed across algorithms on MNIST. FedDA consistently outperforms six state-of-the-art algorithms on the MNIST, CIFAR-10, and CIFAR-100 datasets. Under significant data heterogeneity, FedDA improves accuracy over the compared algorithms by up to 6% and by at least about 3%, especially in resource-scarce environments. To reach a specified accuracy, FedDA is 3 times faster than SCAFFOLD and at least 50% faster than the other algorithms.
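The abstract does not give the MIF formula, but the mechanism it describes, replacing FedAvg's fixed aggregation weights with per-round, per-client weights, can be sketched as below. The `impact_factor` function is a hypothetical stand-in (loss-based weighting with a round-dependent decay), not the paper's actual definition; `aggregate`, the client losses, and the decay parameter are all illustrative assumptions.

```python
import numpy as np

def aggregate(local_models, weights):
    """FedAvg-style aggregation: weighted average of flattened
    local parameter vectors. Weights are normalized to sum to 1."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return sum(wi * m for wi, m in zip(w, local_models))

def impact_factor(local_losses, round_idx, decay=0.9):
    """Hypothetical dynamic weight (stand-in for the paper's MIF):
    clients with lower local loss get more influence on the global
    model, and the sharpening effect decays as rounds progress."""
    scores = np.exp(-np.asarray(local_losses, dtype=float) * decay**round_idx)
    return scores / scores.sum()

# One aggregation round: the lower-loss client (loss 0.2) pulls the
# global model toward its parameters, unlike a fixed equal-weight mean.
models = [np.array([1.0, 1.0]), np.array([3.0, 3.0])]
w = impact_factor([0.2, 1.0], round_idx=0)
global_model = aggregate(models, w)
```

Under fixed equal weights the average of the two models above would be `[2.0, 2.0]`; the dynamic weights shift the result toward the lower-loss client, which is the qualitative behavior the abstract attributes to the MIF.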
Keywords
Edge computing, Federated learning, Aggregation algorithm, Efficient communication, Resource constraint