Tandem Transformers for Inference Efficient LLMs
CoRR (2024)
Abstract
The autoregressive nature of conventional large language models (LLMs)
inherently limits inference speed, as tokens are generated sequentially. While
speculative and parallel decoding techniques attempt to mitigate this, they
face limitations: either relying on less accurate smaller models for generation
or failing to fully leverage the base LLM's representations.
We introduce a novel architecture, Tandem transformers, to address these
issues. This architecture uniquely combines (1) a small autoregressive model
and (2) a large model operating in block mode (processing multiple tokens
simultaneously). The small model's predictive accuracy is substantially
enhanced by granting it attention to the large model's richer representations.
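The core mechanism can be pictured with a minimal, hypothetical sketch (names such as TandemBlockDecoder, d_small, and proj_large are illustrative inventions, not the paper's implementation): one small-model layer that self-attends causally over the current block while cross-attending to the large model's block-mode representations.

import torch.nn as nn

class TandemBlockDecoder(nn.Module):
    """One small-model layer that also reads the large model's states."""

    def __init__(self, d_small: int, d_large: int, n_heads: int):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_small, n_heads, batch_first=True)
        # Cross-attention grants the small model access to the large
        # model's richer representations, as described above.
        self.cross_attn = nn.MultiheadAttention(d_small, n_heads, batch_first=True)
        self.proj_large = nn.Linear(d_large, d_small)  # match hidden sizes
        self.ffn = nn.Sequential(
            nn.Linear(d_small, 4 * d_small), nn.GELU(),
            nn.Linear(4 * d_small, d_small),
        )

    def forward(self, x, large_states, causal_mask):
        # x: (batch, t, d_small) small-model states for the block being generated
        # large_states: (batch, s, d_large) states from the large model's
        #   single parallel (block-mode) pass over the preceding tokens
        h, _ = self.self_attn(x, x, x, attn_mask=causal_mask)
        x = x + h
        mem = self.proj_large(large_states)
        h, _ = self.cross_attn(x, mem, mem)  # attend to large-model states
        x = x + h
        return x + self.ffn(x)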
On the PaLM2 pretraining dataset, a tandem of PaLM2-Bison and PaLM2-Gecko
demonstrates a 3.3% improvement in next-token prediction accuracy over a
standalone PaLM2-Gecko, offering a 1.16x speedup compared to a PaLM2-Otter
model with comparable downstream performance. We further incorporate the tandem
model within the speculative decoding (SPEED) framework where the large model
validates tokens from the small model. This ensures that the Tandem of
PaLM2-Bison and PaLM2-Gecko achieves substantial speedup (around 1.14x faster
than using vanilla PaLM2-Gecko in SPEED) while maintaining identical downstream
task accuracy.
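For intuition, here is a minimal sketch of such a draft-and-verify loop, assuming greedy decoding; small_next_token and large_logits are hypothetical stand-ins for the tandem small model (the drafter) and a single parallel pass of the large model over the drafted block (the verifier), not the paper's SPEED implementation.

import torch

@torch.no_grad()
def speculative_decode(prefix, small_next_token, large_logits,
                       gamma: int = 5, max_new: int = 128):
    tokens = list(prefix)
    while len(tokens) - len(prefix) < max_new:
        # 1) The small (tandem) model drafts gamma tokens autoregressively.
        draft = []
        for _ in range(gamma):
            draft.append(small_next_token(tokens + draft))
        # 2) The large model scores the drafted block in one parallel pass.
        logits = large_logits(tokens + draft)          # (len + gamma, vocab)
        verified = logits[-gamma - 1:].argmax(dim=-1)  # large model's choices
        # 3) Accept drafted tokens up to the first disagreement, then take
        #    the large model's token there, so the output is identical to
        #    decoding with the large model alone.
        n_accept = 0
        for i, t in enumerate(draft):
            if int(verified[i]) != t:
                break
            n_accept += 1
        tokens += draft[:n_accept] + [int(verified[n_accept])]
    return tokens

Because every emitted token is either confirmed or supplied by the large model, downstream task accuracy is preserved; the speedup comes from verifying gamma drafted tokens in one large-model pass instead of gamma sequential ones.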