Federated Learning

Federated learning is an emerging foundational technology for artificial intelligence. It was first proposed by Google in 2016, originally to let Android users update models locally on their phones. Its design goal is to carry out efficient machine learning among multiple participants or computing nodes while ensuring information security during big-data exchange, protecting the privacy of device and personal data, and complying with laws and regulations. The machine learning algorithms usable in federated learning are not limited to neural networks; they also include other important algorithms such as random forests. Federated learning is expected to become the basis of the next generation of collaborative AI algorithms and networks.
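The workflow sketched above, in which clients train on private data and only model updates leave the device, is most commonly illustrated by the FedAvg algorithm. Below is a minimal NumPy sketch under illustrative assumptions (synthetic linear-regression data, equal client weights); it is a conceptual example, not the implementation from any paper listed on this page.

```python
import numpy as np

# Minimal FedAvg sketch: a linear model trained across clients that never
# share raw data, only model parameters. All data here is synthetic.

rng = np.random.default_rng(0)
NUM_CLIENTS, DIM, LOCAL_STEPS, ROUNDS, LR = 5, 10, 5, 20, 0.1

# Synthetic per-client datasets (IID here for brevity; real FL is non-IID).
true_w = rng.normal(size=DIM)
clients = []
for _ in range(NUM_CLIENTS):
    X = rng.normal(size=(50, DIM))
    y = X @ true_w + 0.1 * rng.normal(size=50)
    clients.append((X, y))

def local_update(w, X, y):
    """Run a few gradient steps on one client's private data."""
    w = w.copy()
    for _ in range(LOCAL_STEPS):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= LR * grad
    return w

w_global = np.zeros(DIM)
for _ in range(ROUNDS):
    # Each client starts from the current global model.
    local_models = [local_update(w_global, X, y) for X, y in clients]
    # The server aggregates by (here, unweighted) averaging.
    w_global = np.mean(local_models, axis=0)

print("distance to true model:", np.linalg.norm(w_global - true_w))
```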
Peter Kairouz, H. Brendan McMahan, Brendan Avent, Aurélien Bellet, Mehdi Bennis, Arjun Nitin Bhagoji, Keith Bonawitz, Zachary Charles, Graham Cormode, Rachel Cummings, Rafael G. L. D'Oliveira, Salim El Rouayheb, et al.
Foundations and Trends in Machine Learning, no. 1-2 (2021): 1-210
The breadth of papers surveyed in this work suggests that federated learning is gaining traction across a wide range of interdisciplinary fields, from machine learning and optimization to information theory, statistics, cryptography, fairness, and privacy.
Cited by 513
IEEE Transactions on Wireless Communications, no. 3 (2021): 1935-1949
We have derived the time and energy consumption models for federated learning based on the convergence rate.
Cited by 77
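The snippet above mentions time and energy models without reproducing them. A form commonly assumed in this line of work (stated here as an illustrative assumption, not quoted from the paper) charges each device k, per communication round with I local iterations, a latency and an energy of

```latex
T_k = I\,\frac{c_k d_k}{f_k} + \tau_k,
\qquad
E_k = I\,\kappa\, c_k d_k f_k^{2} + p_k \tau_k
```

where c_k is the CPU cycles needed per data sample, d_k the local dataset size, f_k the CPU frequency, κ the effective capacitance coefficient of the chip, p_k the transmit power, and τ_k the uplink transmission time; the convergence rate then determines how I trades off against the number of communication rounds.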
Mingzhe Chen, H. Vincent Poor, Walid Saad, Shuguang Cui
IEEE Transactions on Wireless Communications, no. 4 (2021): 2457-2471
We have proposed a probabilistic user selection scheme that allows users whose local federated learning models have large effects on the global FL model to associate with the base station with high probability.
Cited by 38
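A minimal sketch of the selection idea described above: sample users with probability proportional to how much their local update would move the global model. Using the update norm as that proxy is an illustrative assumption, not the paper's exact metric.

```python
import numpy as np

rng = np.random.default_rng(1)

def select_users(local_updates, k):
    """Pick k users with probability proportional to their update norms.

    local_updates: list of model-update vectors, one per user. Users whose
    updates would move the global model more are scheduled more often.
    """
    norms = np.array([np.linalg.norm(u) for u in local_updates])
    probs = norms / norms.sum()
    return rng.choice(len(local_updates), size=k, replace=False, p=probs)

# Example: 6 users with updates of different magnitudes, schedule 3.
updates = [rng.normal(scale=s, size=10) for s in (0.1, 0.5, 1.0, 0.2, 2.0, 0.3)]
print(select_users(updates, k=3))
```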
IEEE Journal on Selected Areas in Communications, no. 7 (2021): 2168-2181
In this framework, training is coordinated by a central server that maintains a global model, which is updated by the mobile users through an iterative process.
Cited by 1
IEEE/ACM Transactions on Networking, no. 1 (2021): 398-409
We provide the convergence rate characterizing the trade-off between the number of local computation rounds each user equipment performs to update its local model and the number of global communication rounds used to update the federated learning global model.
Cited by 0
Howard H. Yang, Zuozhu Liu, Tony Q. S. Quek, H. Vincent Poor
IEEE Transactions on Communications, no. 1 (2020): 317-333
Our analysis has shown that running federated learning with proportional fair scheduling achieves much smaller iteration time than random scheduling and round robin when the network operates under a high signal-to-interference-plus-noise ratio (SINR) threshold, while round robin is more preferable when the SINR threshold is low.
Cited by 111
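The three policies compared above (random scheduling, round robin, and proportional fair) can be sketched as simple per-round selection rules. The channel-rate model below is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(2)
N, K = 20, 4  # N users in the cell, K scheduled per round

def random_scheduling(t):
    # Pick K users uniformly at random, independent of the round index t.
    return rng.choice(N, size=K, replace=False)

def round_robin(t):
    # Deterministically cycle through groups of K users.
    start = (t * K) % N
    return np.arange(start, start + K) % N

def proportional_fair(t, rates, avg_rates):
    # Schedule users with the highest instantaneous-to-average rate ratio.
    return np.argsort(rates / avg_rates)[-K:]

rates = rng.rayleigh(size=N)          # illustrative fading channel rates
avg_rates = np.full(N, rates.mean())  # running averages (kept simple here)
print("random:     ", random_scheduling(0))
print("round robin:", round_robin(0))
print("prop. fair: ", proportional_fair(0, rates, avg_rates))
```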
IEEE Communications Letters, no. 6 (2020): 1279-1283
We numerically evaluate the proposed BlockFL's average learning completion latency.
Cited by 108
Felix Sattler, Simon Wiedemann, Klaus-Robert Müller, Wojciech Samek
IEEE Transactions on Neural Networks and Learning Systems, no. 9 (2020): 1-14
There lies huge potential in harnessing the rich data provided by Internet of Things devices for training and improving deep learning models.
Cited by 58
Jiawen Kang, Zehui Xiong, Dusit Niyato, Yuze Zou, Yang Zhang, Mohsen Guizani
IEEE Wireless Communications, no. 2 (2020): 72-80
We addressed worker selection issues to ensure reliable federated learning in mobile networks.
Cited by 51
Yi Liu, James J. Q. Yu, Jiawen Kang, Dusit Niyato, Shuyu Zhang
IEEE Internet of Things Journal, no. 8 (2020): 7751-7763
We demonstrate through empirical studies that the proposed joint-announcement protocol is efficient, reducing the communication overhead of FedGRU by 64.10% compared with centralized models.
Cited by 35
Lingjuan Lyu, Han Yu, Qiang Yang
Based on the distribution of data features and data samples among participants, federated learning can be generally classified as horizontal federated learning, vertical federated learning, and federated transfer learning. For example, two regional banks typically share the same feature space but serve different customers (the horizontal case), while a bank and an e-commerce company in the same city share many customers but hold different features about them (the vertical case).
Cited by 35
Shashi Raj Pandey, Nguyen H. Tran, Mehdi Bennis, Yan Kyaw Tun, Aunas Manzoor, Choong Seon Hong
IEEE Transactions on Wireless Communications, no. 5 (2020): 3241-3256
An incentive mechanism has been established to enable the participation of several devices in federated learning.
Cited by 33
We proposed LG-FedAvg, which combines local representation learning with federated training of global models.
Cited by 30
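A minimal sketch of the split described above: each client keeps its representation layers private, and only the global part of the model is federated. Parameter names and shapes are illustrative assumptions.

```python
import numpy as np

# Each client keeps a private feature extractor ("local" parameters)
# and shares only the classifier head ("global" parameters).

def fedavg_heads(clients):
    """Average only the global head; local encoders never leave the device."""
    head = np.mean([c["head"] for c in clients], axis=0)
    for c in clients:
        c["head"] = head.copy()  # broadcast the averaged head back
    return clients

rng = np.random.default_rng(3)
clients = [{"encoder": rng.normal(size=(10, 4)),  # stays on-device
            "head": rng.normal(size=(4, 2))}      # federated part
           for _ in range(5)]
clients = fedavg_heads(clients)
print(np.allclose(clients[0]["head"], clients[1]["head"]))  # True
```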
IEEE Transactions on Wireless Communications, no. 5 (2020): 3546-3557
We have proposed the CA-distributed stochastic gradient descent scheme, where each device employs gradient sparsification with error accumulation, followed by linear projection to reduce the typically very large parameter vector dimension to the limited channel bandwidth.
Cited by 27
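A minimal sketch of the compression pipeline named above: top-k gradient sparsification with error accumulation, followed by a linear projection down to the channel bandwidth. The Gaussian projection matrix and all dimensions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
DIM, K, BANDWIDTH = 1000, 50, 100

A = rng.normal(size=(BANDWIDTH, DIM)) / np.sqrt(BANDWIDTH)  # projection
error = np.zeros(DIM)  # per-device error accumulator across rounds

def compress(grad):
    """Top-k sparsification with error feedback, then linear projection."""
    global error
    corrected = grad + error           # re-inject previously dropped mass
    sparse = np.zeros(DIM)
    idx = np.argsort(np.abs(corrected))[-K:]
    sparse[idx] = corrected[idx]       # keep only the K largest entries
    error = corrected - sparse         # accumulate what was dropped
    return A @ sparse                  # fit into the channel bandwidth

grad = rng.normal(size=DIM)
symbols = compress(grad)               # length-BANDWIDTH transmission
print(symbols.shape)
```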
Hangyu Zhu, Yaochu Jin
IEEE Transactions on Neural Networks and Learning Systems, no. 4 (2020): 1310-1322
Our experimental results indicate that the proposed algorithm can significantly reduce the complexity of the neural network models at the expense of minor performance degradation of the global model, thereby reducing server-client communication.
Cited by 24
ICLR (2020)
We presented Federated Matched Averaging, a layer-wise federated learning algorithm designed for modern convolutional neural network and LSTM architectures that accounts for permutation invariance of the neurons and permits global model size adaptation.
Cited by 23
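Matched averaging rests on the observation that neurons of two locally trained layers may be permuted relative to each other, so they should be aligned before averaging. The toy sketch below aligns neurons with a Hungarian assignment on squared weight distance; it is a crude stand-in for the paper's matching objective, not FedMA itself.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def matched_average(W1, W2):
    """Align rows (neurons) of W2 to W1 before averaging.

    Cost = squared distance between neuron weight vectors; the Hungarian
    algorithm finds the permutation minimising the total cost.
    """
    cost = ((W1[:, None, :] - W2[None, :, :]) ** 2).sum(-1)
    rows, cols = linear_sum_assignment(cost)
    return (W1 + W2[cols]) / 2  # average permutation-aligned neurons

rng = np.random.default_rng(5)
W1 = rng.normal(size=(4, 8))
perm = rng.permutation(4)
W2 = W1[perm] + 0.01 * rng.normal(size=(4, 8))  # permuted near-copy of W1
print(np.allclose(matched_average(W1, W2), W1, atol=0.1))  # True
```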
We studied a personalized variant of the classic Federated Learning formulation in which our goal is to find a proper initialization model for the users in the network that can be quickly adapted to the local data of each user after the training phase.
Cited by 23
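The personalization idea above is meta-learning flavoured: train a shared initialization whose one-step local adaptation works well for every user. A minimal sketch with quadratic per-user losses (an illustrative assumption that makes the meta-gradient exact):

```python
import numpy as np

rng = np.random.default_rng(6)
USERS, DIM, ALPHA, BETA = 8, 5, 0.1, 0.05
optima = [rng.normal(size=DIM) for _ in range(USERS)]  # each user's ideal model

def grad(w, opt):
    # Gradient of the quadratic loss 0.5 * ||w - opt||^2.
    return w - opt

w = np.zeros(DIM)  # shared initialization being meta-trained
for _ in range(500):
    u = optima[rng.integers(USERS)]
    w_adapted = w - ALPHA * grad(w, u)  # one local adaptation step
    # Meta-gradient of the post-adaptation loss w.r.t. w:
    # d/dw 0.5 * ||w_adapted - u||^2 = (1 - ALPHA) * (w_adapted - u)
    w -= BETA * (1 - ALPHA) * grad(w_adapted, u)

# After meta-training, each user adapts quickly from w:
u = optima[0]
print(np.linalg.norm((w - ALPHA * grad(w, u)) - u))
```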
ICLR (2020)
We propose q-Fair Federated Learning, a novel optimization objective inspired by fair resource allocation in wireless networks that encourages fairer accuracy distributions across devices in federated learning.
Cited by 18
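For per-device objectives F_k and weights p_k, the q-FFL objective takes the form below (sketched here for illustration; q ≥ 0 is the fairness knob, q = 0 recovers the usual weighted average, and larger q up-weights devices with high loss, flattening the accuracy distribution):

```latex
\min_{w}\; f_q(w) \;=\; \sum_{k=1}^{m} \frac{p_k}{q+1}\, F_k^{\,q+1}(w)
```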
We provided learning-theoretic guarantees and efficient algorithms.
Cited by 17
Daniel Rothchild, Ashwinee Panda, Enayat Ullah, Nikita Ivkin, Vladimir Braverman, Joseph Gonzalez, Ion Stoica, Raman Arora
ICML (2020)
Federated learning has recently seen a great deal of research interest, particularly in the domain of communication efficiency.
Cited by 14