Hybrid-Order Distributed SGD: Balancing Communication Overhead, Computational Complexity, and Convergence Rate for Distributed Learning
Neurocomputing (2024)
Keywords
Distributed learning, Stochastic optimization, Distributed optimization, Convergence rate, Communication overhead, Computational complexity, Non-convex, Generalization