Athena: Efficient Block-Wise Post-Training Quantization for Large Language Models Using Second-Order Matrix Derivative Information
CoRR (2024)
Abstract
Large Language Models (LLMs) have significantly advanced natural language
processing tasks such as machine translation, text generation, and sentiment
analysis. However, their large size, often consisting of billions of
parameters, poses challenges for storage, computation, and deployment,
particularly in resource-constrained environments like mobile devices and edge
computing platforms. Effective compression and quantization techniques are
crucial for addressing these issues, reducing memory footprint and
computational requirements without significantly compromising performance.
Traditional methods that uniformly map parameters to compressed spaces fail to
account for the uneven distribution of parameters, leading to substantial
accuracy loss. In this work, we propose Athena, a novel algorithm for efficient
block-wise post-training quantization of LLMs. Athena leverages Second-Order
Matrix Derivative Information, i.e., the curvature of the loss landscape, to
guide the quantization process. By grouping parameters by columns
or rows and iteratively optimizing the quantization process, Athena updates the
model parameters and Hessian matrix to achieve significant compression while
maintaining high accuracy. This makes Athena a practical solution for deploying
LLMs in various settings.
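The abstract describes a Hessian-guided, column-wise quantization loop. As a rough illustration of this general idea, the following is a minimal NumPy sketch in the spirit of second-order, column-by-column post-training quantization with error compensation via the inverse Hessian. All function names, shapes, the bit width, and the damping term are illustrative assumptions, not the authors' implementation of Athena.

```python
import numpy as np

def quantize_rtn(w, n_bits=4):
    """Round-to-nearest quantization of a vector onto a symmetric integer grid."""
    qmax = 2 ** (n_bits - 1) - 1
    max_abs = np.abs(w).max()
    scale = max_abs / qmax if max_abs > 0 else 1.0
    return np.round(w / scale).clip(-qmax, qmax) * scale

def hessian_guided_quantize(W, X, n_bits=4, damp=0.01):
    """Quantize W one column at a time, using second-order (Hessian) information
    to redistribute quantization error onto the not-yet-quantized columns.

    W: (out_features, in_features) weight matrix of one layer.
    X: (n_samples, in_features) calibration activations for that layer.
    Returns a quantized copy of W.
    """
    W = W.astype(np.float64).copy()
    d = W.shape[1]

    # Second-order information: Hessian of the layer-wise reconstruction loss,
    # with a small damping term added for numerical stability (assumed here).
    H = 2.0 * X.T @ X / X.shape[0]
    H += damp * np.mean(np.diag(H)) * np.eye(d)
    Hinv = np.linalg.inv(H)

    for j in range(d):
        q_col = quantize_rtn(W[:, j], n_bits)
        err = (W[:, j] - q_col) / Hinv[j, j]
        W[:, j] = q_col
        # Propagate the scaled error to the remaining columns so later
        # quantization steps can compensate for it.
        if j + 1 < d:
            W[:, j + 1:] -= np.outer(err, Hinv[j, j + 1:])
    return W
```

A usage sketch: collect a small batch of calibration activations X for a linear layer, call hessian_guided_quantize(layer_weight, X), and replace the layer's weights with the result. How Athena groups rows or columns into blocks and updates the Hessian between iterations is specified in the paper itself, not in this sketch.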