MKOR: Momentum-Enabled Kronecker-Factor-Based Optimizer Using Rank-1 Updates
NeurIPS 2023
Abstract
This work proposes a Momentum-Enabled Kronecker-Factor-Based Optimizer Using
Rank-1 updates, called MKOR, that improves the training time and convergence
properties of deep neural networks (DNNs). Second-order techniques, while
enjoying higher convergence rates than their first-order counterparts, have
cubic complexity with respect to the model size and/or the training batch
size. Hence they exhibit poor scalability and performance on transformer
models, e.g. large language models (LLMs), because the batch sizes in these
models scale with the attention-mechanism sequence length, leading to large
model and batch sizes. MKOR's complexity is quadratic with respect to the
model size, alleviating the computation bottlenecks of second-order methods. Because
of their high computation complexity, state-of-the-art implementations of
second-order methods can only afford to update the second-order information
infrequently, and thus do not fully exploit the promise of better convergence
from these updates. By reducing the computation complexity of the second-order
updates as well as achieving linear communication complexity, MKOR increases
the frequency of second-order updates. We also propose a hybrid
version of MKOR (called MKOR-H) that falls back mid-training to a first-order
optimizer if the second-order updates no longer accelerate convergence. Our
experiments show that MKOR outperforms state-of-the-art first-order methods,
e.g. the LAMB optimizer, and the best implementations of second-order methods,
i.e. KAISA/KFAC, by up to 2.57x and 1.85x, respectively, on BERT-Large-Uncased
on 64 GPUs.
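
The quadratic complexity claim rests on replacing explicit re-inversion of the second-order (Kronecker) factors with rank-1 updates to their inverses. A minimal sketch of the standard mechanism behind such updates, the Sherman-Morrison identity, is given below; the function name and the toy check are illustrative assumptions, not MKOR's actual implementation.

```python
import numpy as np

def sherman_morrison_update(A_inv: np.ndarray, u: np.ndarray, v: np.ndarray) -> np.ndarray:
    """Return (A + u v^T)^{-1} given A^{-1}, in O(n^2) instead of the O(n^3)
    cost of re-inverting from scratch. Sherman-Morrison identity:

        (A + u v^T)^{-1} = A^{-1} - (A^{-1} u)(v^T A^{-1}) / (1 + v^T A^{-1} u)
    """
    Au = A_inv @ u        # O(n^2) matrix-vector product
    vA = v @ A_inv        # O(n^2) vector-matrix product
    denom = 1.0 + v @ Au  # scalar; assumed nonzero so the update stays invertible
    return A_inv - np.outer(Au, vA) / denom

# Toy check against direct inversion (illustrative only).
rng = np.random.default_rng(0)
n = 8
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))
u, v = rng.standard_normal(n), rng.standard_normal(n)
updated = sherman_morrison_update(np.linalg.inv(A), u, v)
assert np.allclose(updated, np.linalg.inv(A + np.outer(u, v)), atol=1e-8)
```

Applying one such update per factor keeps the per-step cost quadratic in the factor dimension, which is what allows the inverse curvature information to be refreshed far more frequently than when full inversions are required.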
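The abstract describes MKOR-H only at a high level: start with second-order updates, then fall back mid-training to a first-order optimizer once those updates stop accelerating convergence. The sketch below shows one way such a switch could be wired up; the class name, the loss-plateau criterion, and the window parameters are assumptions for illustration, not the paper's actual switching rule.

```python
class HybridOptimizer:
    """Illustrative hybrid wrapper in the spirit of MKOR-H: drive training
    with a second-order optimizer, and permanently fall back to a first-order
    optimizer when recent loss improvement drops below a threshold.

    Both wrapped optimizers are assumed to expose step() and zero_grad().
    """

    def __init__(self, second_order_opt, first_order_opt,
                 window=100, min_rel_improvement=1e-3):
        self.second = second_order_opt
        self.first = first_order_opt
        self.window = window
        self.min_rel_improvement = min_rel_improvement
        self.losses = []
        self.using_second_order = True

    @property
    def active(self):
        return self.second if self.using_second_order else self.first

    def zero_grad(self):
        self.active.zero_grad()

    def step(self, loss_value: float):
        self.active.step()
        if not self.using_second_order:
            return
        self.losses.append(loss_value)
        if len(self.losses) >= 2 * self.window:
            prev = sum(self.losses[-2 * self.window:-self.window]) / self.window
            recent = sum(self.losses[-self.window:]) / self.window
            # Fall back when second-order steps no longer buy convergence.
            if (prev - recent) / max(abs(prev), 1e-12) < self.min_rel_improvement:
                self.using_second_order = False
```

One detail worth noting: the fallback here is one-way, which is one natural reading of "falls back mid-training" in the abstract, rather than toggling between the two optimizers.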
Keywords
optimizer, momentum-enabled, kronecker-factor-based