Improved Convergence Rates for Non-Convex Federated Learning with Compression
Abstract:
Federated learning is a new distributed learning paradigm that enables efficient training of emerging large-scale machine learning models. In this paper, we consider federated learning on non-convex objectives with compressed communication from the clients to the central server. We propose a novel first-order algorithm (\texttt{FedSTEPH}) …
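The abstract above is truncated, so the details of \texttt{FedSTEPH} are not available here. Purely to illustrate the setting it describes (clients performing local updates on non-convex objectives and sending compressed updates to a central server), below is a minimal, hypothetical sketch of a FedAvg-style round with top-k sparsification of the client-to-server messages. It is not the paper's algorithm; all names (`top_k_compress`, `federated_round`, the least-squares local loss) are illustrative assumptions.

```python
# Hypothetical sketch: federated averaging with compressed (top-k) uplink.
# This is NOT FedSTEPH; it only illustrates compressed client-to-server
# communication in a federated learning round.

import numpy as np


def top_k_compress(update: np.ndarray, k: int) -> np.ndarray:
    """Keep only the k largest-magnitude coordinates of the update."""
    compressed = np.zeros_like(update)
    idx = np.argpartition(np.abs(update), -k)[-k:]
    compressed[idx] = update[idx]
    return compressed


def local_grad(w: np.ndarray, data) -> np.ndarray:
    """Stochastic gradient of a least-squares loss on one client's data
    (a simple stand-in for each client's local objective)."""
    X, y = data
    return X.T @ (X @ w - y) / len(y)


def federated_round(w, clients, lr=0.1, local_steps=5, k=10):
    """One communication round: each client runs a few local SGD steps,
    compresses its model delta, and the server averages the deltas."""
    deltas = []
    for data in clients:
        w_local = w.copy()
        for _ in range(local_steps):
            w_local -= lr * local_grad(w_local, data)
        deltas.append(top_k_compress(w_local - w, k))  # compressed uplink
    return w + np.mean(deltas, axis=0)                 # server aggregation


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d, n_clients = 50, 8
    clients = [(rng.normal(size=(20, d)), rng.normal(size=20))
               for _ in range(n_clients)]
    w = np.zeros(d)
    for _ in range(30):
        w = federated_round(w, clients)
```

In this sketch each client transmits only k of the d coordinates of its update, which is the kind of communication saving the abstract refers to; the paper's contribution concerns the convergence rates achievable under such compression, which the sketch does not attempt to reproduce.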