Compressing Large Language Models Using Low Rank and Low Precision Decomposition

NeurIPS 2024

Key words
Large Language Models (LLMs), Model Compression, Post-training Quantization, Low-Rank Decomposition, Low-Precision Formats, Quantization Error Analysis, Rank-Constrained Regression, Randomized Linear Algebra, Sketching