
LLM in a Flash: Efficient Large Language Model Inference with Limited Memory.

Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2024

Abstract
Large language models (LLMs) are central to modern natural language processing, delivering exceptional performance in various tasks. However, their substantial computational and memory requirements present challenges, especially for devices with limited DRAM capacity. This paper tackles the challenge of efficiently running LLMs that exceed the available DRAM capacity by storing the model parameters in flash memory, but bringing them on demand to DRAM. Our method involves constructing an inference cost model that takes into account the characteristics of flash memory, guiding us to optimize in two critical areas: reducing the volume of data transferred from flash and reading data in larger, more contiguous chunks. Within this hardware-informed framework, we introduce two principal techniques. First, "windowing" strategically reduces data transfer by reusing previously activated neurons, and second, "row-column bundling", tailored to the sequential data access strengths of flash memory, increases the size of data chunks read from flash memory. These methods collectively enable running models up to twice the size of the available DRAM, with a 4-5x and 20-25x increase in inference speed compared to naive loading approaches in CPU and GPU, respectively. Our integration of sparsity awareness, context-adaptive loading, and a hardware-oriented design paves the way for effective inference of LLMs on devices with limited memory.
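
As a concrete illustration of the "windowing" idea described in the abstract, the sketch below keeps the FFN neurons activated for the last few tokens resident in DRAM and reads only newly needed neurons from flash; row-column bundling appears only as a comment on the read step. The class name, the flash_read callback, the window size, and the predicted_active argument are hypothetical names introduced here for illustration; this is a minimal sketch of the concept under those assumptions, not the authors' implementation.

    from collections import deque

    class WindowedNeuronCache:
        """Keep FFN neurons activated for the last `window` tokens in DRAM,
        so only newly activated neurons must be read from flash."""

        def __init__(self, window=5):
            self.window = window
            self.history = deque()   # per-token sets of active neuron ids
            self.in_dram = {}        # neuron id -> cached weights

        def step(self, predicted_active, flash_read):
            # Load only neurons not already resident in DRAM.
            for n in predicted_active:
                if n not in self.in_dram:
                    # Row-column bundling (assumed here): fetch the up-projection
                    # row and down-projection column of neuron n in one
                    # contiguous flash read.
                    self.in_dram[n] = flash_read(n)

            # Slide the window: record this token's active set, then evict
            # neurons that no token remaining in the window still uses.
            self.history.append(set(predicted_active))
            if len(self.history) > self.window:
                expired = self.history.popleft()
                still_needed = set().union(*self.history)
                for n in expired - still_needed:
                    del self.in_dram[n]

            return {n: self.in_dram[n] for n in predicted_active}

The point of the sliding window is that consecutive tokens tend to activate overlapping neuron sets, so most weights needed for the current token are already in DRAM and only the difference is fetched from flash.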