Communication Efficient Distributed Training with Distributed Lion
CoRR (2024)
Abstract
The Lion optimizer has been a promising competitor to AdamW for training
large AI models, with advantages in memory, computation, and sample
efficiency. In this paper, we introduce Distributed Lion, an innovative
adaptation of Lion to distributed training environments. Leveraging the sign
operator in Lion, Distributed Lion only requires communicating binary or
lower-precision vectors between the workers and the central server,
significantly reducing the communication cost. Our theoretical analysis
confirms Distributed Lion's convergence properties. Empirical results
demonstrate its robustness across a range of tasks, worker counts, and batch
sizes, on both vision and language problems. Notably, Distributed Lion attains
performance comparable to standard Lion or AdamW optimizers applied to
aggregated gradients, but with significantly reduced communication bandwidth.
This feature is particularly advantageous for training large models. In
addition, we demonstrate that Distributed Lion offers a more favorable
performance-bandwidth trade-off than existing efficient distributed methods
such as deep gradient compression and ternary gradients.
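
As a rough illustration of the communication pattern described above, the following Python sketch shows workers sending only binary Lion-style sign directions to a server, which aggregates them and applies the update. The function names, the NumPy setup, and the majority-vote aggregation are assumptions made for illustration (averaging is another plausible low-precision aggregation); they are one reading of the abstract, not the paper's exact algorithm.

import numpy as np

def lion_local_direction(grad, momentum, beta1=0.9, beta2=0.99):
    # Worker-side Lion step: compute the binary update direction and
    # refresh the local momentum. Only the sign vector leaves the worker.
    direction = np.sign(beta1 * momentum + (1 - beta1) * grad)  # entries in {-1, 0, +1}
    momentum = beta2 * momentum + (1 - beta2) * grad            # momentum stays local
    return direction, momentum

def server_aggregate(directions, mode="majority"):
    # Server-side aggregation of the workers' binary directions.
    # "majority" takes an element-wise majority vote; "average" keeps a
    # low-precision mean. Both are hypothetical choices for this sketch.
    stacked = np.stack(directions)
    if mode == "majority":
        return np.sign(stacked.sum(axis=0))
    return stacked.mean(axis=0)

def apply_update(params, agg_direction, lr=1e-4, weight_decay=1e-2):
    # Aggregated direction plus decoupled weight decay, as in Lion.
    return params - lr * (agg_direction + weight_decay * params)

# Toy usage: 4 workers optimizing one shared parameter vector.
rng = np.random.default_rng(0)
params = rng.normal(size=10)
momenta = [np.zeros_like(params) for _ in range(4)]

for step in range(3):
    directions = []
    for w in range(4):
        grad = rng.normal(size=10)  # stand-in for a local minibatch gradient
        d, momenta[w] = lion_local_direction(grad, momenta[w])
        directions.append(d)        # roughly 1 bit per coordinate on the wire
    params = apply_update(params, server_aggregate(directions))

The point of the sketch is the bandwidth argument: each worker transmits a sign vector rather than a full-precision gradient, which is where the claimed communication savings over methods that ship dense gradients come from.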