
SUBLLM: A Novel Efficient Architecture with Token Sequence Subsampling for LLM

Quandong Wang, Yuxuan Yuan, Xiaoyu Yang, Ruike Zhang, Kang Zhao, Wei Liu, Jian Luan, Daniel Povey, Bin Wang

CoRR (2024)

Abstract
While Large Language Models (LLMs) have achieved remarkable success in various fields, the efficiency of training and inference remains a major challenge. To address this issue, we propose SUBLLM, short for Subsampling-Upsampling-Bypass Large Language Model, an innovative architecture that extends the core decoder-only framework by incorporating subsampling, upsampling, and bypass modules. The subsampling modules shorten the sequence, the upsampling modules restore the sequence length, and the bypass modules enhance convergence. In comparison to LLaMA, the proposed SUBLLM exhibits significant enhancements in both training and inference speeds as well as memory usage, while maintaining competitive few-shot performance. During training, SUBLLM increases speeds by 26% and cuts memory by 10GB per GPU. In inference, it boosts speeds by up to 37% and reduces memory by 1GB per GPU. The training and inference speeds can be further enhanced by 34% and 52% respectively with longer context windows. Our code is available at https://github.com/XiaoMi/subllm.
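To make the abstract's description more concrete, below is a minimal PyTorch sketch of the subsample-process-upsample-bypass pattern it outlines. The module name, the top-k importance-score subsampling rule, the scatter-based upsampling, and the learnable bypass weight are all illustrative assumptions, not the authors' implementation; consult the linked repository for the actual SUBLLM code.

```python
# Illustrative sketch only: all names and design choices here are assumptions,
# not taken from the SUBLLM paper or the XiaoMi/subllm repository.
import torch
import torch.nn as nn


class SubsampleUpsampleBypassBlock(nn.Module):
    """Wraps inner decoder layers: shorten the sequence, process the shorter
    sequence, restore the original length, then blend with the unshortened
    input via a learnable bypass weight."""

    def __init__(self, d_model: int, inner: nn.Module, keep_ratio: float = 0.5):
        super().__init__()
        self.inner = inner                      # e.g. a stack of decoder layers
        self.score = nn.Linear(d_model, 1)      # per-token importance score (assumed mechanism)
        self.keep_ratio = keep_ratio
        self.bypass_weight = nn.Parameter(torch.tensor(0.5))  # assumed learnable mixing scalar

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        b, t, d = x.shape
        k = max(1, int(t * self.keep_ratio))

        # Subsampling: keep the k highest-scoring tokens, preserving their order.
        scores = self.score(x).squeeze(-1)                        # (b, t)
        kept = scores.topk(k, dim=1).indices.sort(dim=1).values   # (b, k)
        gather_idx = kept.unsqueeze(-1).expand(-1, -1, d)
        shortened = x.gather(1, gather_idx)                       # (b, k, d)

        # The expensive core computation runs on the shorter sequence.
        processed = self.inner(shortened)                         # (b, k, d)

        # Upsampling: scatter processed tokens back to their positions;
        # dropped positions simply retain the input representation here.
        upsampled = x.clone()
        upsampled.scatter_(1, gather_idx, processed)              # (b, t, d)

        # Bypass: mix the upsampled result with the original input.
        w = torch.sigmoid(self.bypass_weight)
        return w * upsampled + (1.0 - w) * x


if __name__ == "__main__":
    # A standard encoder layer stands in for the inner decoder stack in this toy example.
    inner = nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True)
    block = SubsampleUpsampleBypassBlock(d_model=64, inner=inner, keep_ratio=0.5)
    out = block(torch.randn(2, 16, 64))
    print(out.shape)  # torch.Size([2, 16, 64])
```

The intended takeaway is only the shape of the idea: attention and feed-forward work scale with the shortened length k rather than the full length t, while the bypass path keeps a full-length residual signal available, which the abstract credits with aiding convergence.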