Sophia: A Scalable Stochastic Second-order Optimizer for Language Model Pre-training
ICLR 2024
Abstract
Given the massive cost of language model pre-training, a non-trivial
improvement of the optimization algorithm would lead to a material reduction in
the time and cost of training. Adam and its variants have been state-of-the-art
for years, and more sophisticated second-order (Hessian-based) optimizers often
incur too much per-step overhead. In this paper, we propose Sophia,
Second-order Clipped Stochastic Optimization, a simple scalable second-order
optimizer that uses a light-weight estimate of the diagonal Hessian as the
pre-conditioner. The update is the moving average of the gradients divided by
the moving average of the estimated Hessian, followed by element-wise clipping.
The clipping controls the worst-case update size and tames the negative impact
of non-convexity and the rapid change of the Hessian along the trajectory. Sophia only
estimates the diagonal Hessian every handful of iterations, which has
negligible average per-step time and memory overhead. On language modeling with
GPT models of sizes ranging from 125M to 1.5B, Sophia achieves a 2x speed-up
compared to Adam in the number of steps, total compute, and wall-clock time,
achieving the same perplexity with 50% fewer steps, less total compute, and
reduced wall-clock time. Theoretically, we show that Sophia, in a much
simplified setting, adapts to the heterogeneous curvatures in different
parameter dimensions, and thus has a run-time bound that does not depend on the
condition number of the loss.
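
To make the update rule concrete, below is a minimal sketch in PyTorch of the clipped, Hessian-preconditioned update the abstract describes. This is an illustration, not the authors' implementation: it uses a Hutchinson-style diagonal-Hessian estimator (one of the estimator families the paper considers), a toy objective in place of a language model, and assumed hyperparameter values (lr, beta1, beta2, rho, k are illustrative choices).

```python
import torch

# Illustrative hyperparameters; the names loosely follow the paper's notation,
# but these values are assumptions, not the paper's tuned settings.
lr, beta1, beta2, rho, k, eps = 1e-2, 0.96, 0.99, 0.04, 10, 1e-12

# Toy smooth non-convex objective standing in for a language-model loss.
torch.manual_seed(0)
w = torch.randn(50, requires_grad=True)
A = torch.randn(50, 50)
A = A @ A.T / 50  # positive semi-definite curvature

def loss_fn(w):
    return 0.5 * w @ (A @ w) + torch.sin(w).sum()

m = torch.zeros_like(w)  # moving average (EMA) of gradients
h = torch.zeros_like(w)  # EMA of diagonal-Hessian estimates

for t in range(500):
    loss = loss_fn(w)
    need_hess = (t % k == 0)  # estimate the Hessian only every k steps
    (g,) = torch.autograd.grad(loss, w, create_graph=need_hess)
    m = beta1 * m + (1 - beta1) * g.detach()

    if need_hess:
        # Hutchinson estimator: E[u * (H u)] = diag(H) for Rademacher u.
        u = torch.randint_like(w, 2) * 2 - 1
        (hvp,) = torch.autograd.grad(g @ u, w)  # Hessian-vector product
        h = beta2 * h + (1 - beta2) * (u * hvp).detach()

    with torch.no_grad():
        # Precondition by the EMA'd diagonal Hessian, then clip element-wise
        # so no coordinate moves more than lr per step.
        update = torch.clamp(m / torch.clamp(rho * h, min=eps), -1.0, 1.0)
        w -= lr * update
```

The element-wise clip at 1, combined with the learning-rate multiplier, caps each coordinate's movement at lr per step; this is the "worst-case update size" control the abstract referses to. When the curvature estimate is tiny or negative, the update degrades gracefully to a clipped, sign-like step instead of blowing up.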
Keywords
large language models, pretraining, optimization in deep learning