Is Flash Attention Stable?
arXiv (2024)
Abstract
Training large-scale machine learning models poses distinct system
challenges, given both the size and complexity of today's workloads. Recently,
many organizations training state-of-the-art Generative AI models have reported
cases of instability during training, often taking the form of loss spikes.
Numeric deviation has emerged as a potential cause of this training
instability, although quantifying this is especially challenging given the
costly nature of training runs. In this work, we develop a principled approach
to understanding the effects of numeric deviation, and construct proxies to put
observations into context when downstream effects are difficult to quantify. As
a case study, we apply this framework to analyze the widely-adopted Flash
Attention optimization. We find that Flash Attention sees roughly an order of
magnitude more numeric deviation as compared to Baseline Attention at BF16 when
measured during an isolated forward pass. We then use a data-driven analysis
based on the Wasserstein Distance to provide upper bounds on how this numeric
deviation impacts model weights during training, finding that the numerical
deviation present in Flash Attention is 2-5 times less significant than
low-precision training.
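The measurement approach described above — comparing a low-precision attention output against a high-precision baseline, then summarizing the deviation distribution with a Wasserstein distance — can be sketched as follows. This is an illustrative reconstruction, not the paper's actual code: the `attention` helper, the use of float16 as a stand-in for BF16 (NumPy has no native bfloat16), and the random test tensors are all assumptions for the sketch.

```python
import numpy as np

def attention(q, k, v, dtype=np.float64):
    """Scaled dot-product attention computed in the given dtype
    (illustrative stand-in for Baseline vs. low-precision attention)."""
    q, k, v = (x.astype(dtype) for x in (q, k, v))
    scores = q @ k.T / np.sqrt(q.shape[-1])
    scores = scores - scores.max(axis=-1, keepdims=True)  # numerically stable softmax
    weights = np.exp(scores)
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return (weights @ v).astype(np.float64)

def max_deviation(out, ref):
    """Maximum elementwise absolute deviation from the reference output."""
    return np.abs(out - ref).max()

def wasserstein_1d(a, b):
    """1-D Wasserstein distance between two equal-size samples:
    the mean absolute difference of their sorted values."""
    return np.abs(np.sort(a.ravel()) - np.sort(b.ravel())).mean()

rng = np.random.default_rng(0)
q, k, v = (rng.standard_normal((64, 32)) for _ in range(3))

ref = attention(q, k, v, np.float64)   # high-precision baseline
low = attention(q, k, v, np.float16)   # low-precision run (float16 as a BF16 stand-in)

print("max deviation:", max_deviation(low, ref))
print("wasserstein:  ", wasserstein_1d(low, ref))
```

Comparing these two metrics across implementations (e.g., a fused Flash-style kernel vs. a baseline kernel, each against the same high-precision reference) is the kind of apples-to-apples proxy the abstract describes for contextualizing numeric deviation.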