LongVQ: Long Sequence Modeling with Vector Quantization on Structured Memory
arXiv (2024)
Abstract
Transformer models have been successful in various sequence processing tasks,
but the self-attention mechanism's computational cost limits its practicality
for long sequences. Although there are existing attention variants that improve
computational efficiency, they have a limited ability to abstract global
information effectively based on their hand-crafted mixing strategies. On the
other hand, state-space models (SSMs) are tailored for long sequences but
cannot capture complicated local information. Therefore, the combination of
them as a unified token mixer is a trend in recent long-sequence models.
However, linearized attention degrades performance significantly even when
equipped with SSMs. To address this issue, we propose a new method called
LongVQ. LongVQ uses vector quantization (VQ) to compress the global
abstraction into a fixed-length codebook, enabling linear-time computation of
the attention matrix. This technique effectively maintains dynamic global and
local patterns, which helps mitigate the lack of long-range dependencies. Our
experiments on the Long Range Arena
benchmark, autoregressive language modeling, and image and speech
classification demonstrate the effectiveness of LongVQ. Our model achieves
significant improvements over other sequence models, including Transformer
variants, convolutional models, and recent state-space models.
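To make the core idea concrete, below is a minimal sketch of what attention over a fixed-length VQ codebook could look like. The function name vq_attention, the nearest-neighbour quantization of keys, and the per-code averaging of values are illustrative assumptions based on the abstract, not the paper's actual LongVQ implementation.

```python
import torch
import torch.nn.functional as F

def vq_attention(q, k, v, codebook):
    """Attend over a fixed-length codebook instead of the full sequence.

    q, k, v: (batch, seq_len, dim); codebook: (num_codes, dim).
    Keys are snapped to their nearest code, so the attention matrix has
    shape (seq_len, num_codes) -- linear in sequence length.
    (Hypothetical sketch; not the authors' implementation.)
    """
    num_codes, dim = codebook.shape

    # Quantize each key to its nearest codebook entry (assumed nearest-neighbour VQ).
    dists = torch.cdist(k, codebook.unsqueeze(0).expand(k.size(0), -1, -1))  # (B, L, K)
    codes = dists.argmin(dim=-1)                                             # (B, L)

    # Pool values per code so attention only sees num_codes memory slots.
    one_hot = F.one_hot(codes, num_codes).to(v.dtype)                        # (B, L, K)
    counts = one_hot.sum(dim=1).clamp(min=1.0)                               # (B, K)
    slot_values = torch.einsum('blk,bld->bkd', one_hot, v) / counts.unsqueeze(-1)

    # Standard softmax attention, but over K slots rather than L tokens.
    scores = torch.einsum('bld,kd->blk', q, codebook) / dim ** 0.5
    return torch.softmax(scores, dim=-1) @ slot_values                       # (B, L, D)

# Usage: cost grows with the codebook size K, not the sequence length L.
B, L, D, K = 2, 1024, 64, 32
q, k, v = (torch.randn(B, L, D) for _ in range(3))
out = vq_attention(q, k, v, torch.randn(K, D))  # (2, 1024, 64)
```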