HiFT: A Hierarchical Full Parameter Fine-Tuning Strategy
CoRR (2024)
Abstract
Full-parameter fine-tuning has become the go-to choice for adapting language
models (LMs) to downstream tasks due to its excellent performance. As LMs grow
in size, fine-tuning the full parameters of LMs requires a prohibitively large
amount of GPU memory. Existing approaches utilize zeroth-order optimizers to
conserve GPU memory, which can compromise the performance of LMs, since
non-zeroth-order optimizers tend to converge more readily on most downstream
tasks. In this paper, we propose a novel optimizer-independent end-to-end
hierarchical fine-tuning strategy, HiFT, which only updates a subset of
parameters at each training step. HiFT significantly reduces the number of
gradient and optimizer-state parameters that reside in GPU memory at any one
time, thereby reducing GPU memory usage. Our results demonstrate that: (1) HiFT
achieves performance comparable to parameter-efficient fine-tuning and standard
full-parameter fine-tuning. (2) HiFT supports various optimizers, including
AdamW, AdaGrad, and SGD. (3) HiFT saves more than 60% of GPU memory compared
with standard full-parameter fine-tuning of a 7B model. (4) HiFT enables
full-parameter fine-tuning of a 7B model on a single 48GB A6000 in 32-bit
precision with the AdamW optimizer, without using any other memory-saving techniques.
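A minimal PyTorch sketch of the hierarchical idea described above: partition the model's layers into groups and, at each training step, compute gradients and take an optimizer step for only one group. Gradient memory is then bounded by the largest group, and (with the inactive groups' optimizer states kept off the GPU, which is not shown here) the resident optimizer state shrinks likewise. The toy model, group size, and cycling order are illustrative assumptions, not the authors' exact procedure.

```python
import torch
import torch.nn as nn

# Stand-in for a language model: 8 layers, split into 4 groups of 2 (assumed grouping).
model = nn.Sequential(*[nn.Linear(64, 64) for _ in range(8)])
layer_groups = [list(model[i:i + 2].parameters()) for i in range(0, 8, 2)]
optimizers = [torch.optim.AdamW(g, lr=1e-4) for g in layer_groups]  # one optimizer per group

loss_fn = nn.MSELoss()
for step in range(100):
    k = step % len(layer_groups)        # cycle through the layer groups
    for p in model.parameters():
        p.requires_grad_(False)         # freeze everything ...
    for p in layer_groups[k]:
        p.requires_grad_(True)          # ... except the active group

    x, y = torch.randn(16, 64), torch.randn(16, 64)
    loss = loss_fn(model(x), y)
    loss.backward()                     # gradients are created only for the active group
    optimizers[k].step()                # update just that subset of parameters
    optimizers[k].zero_grad(set_to_none=True)
```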