SpAtten: Efficient Sparse Attention Architecture with Cascade Token and Head Pruning
arXiv (2020)
Abstract
The attention mechanism is becoming increasingly popular in Natural Language
Processing (NLP) applications, outperforming convolutional
and recurrent architectures. However, general-purpose platforms such as CPUs
and GPUs are inefficient when performing attention inference due to complicated
data movement and low arithmetic intensity. Moreover, existing NN accelerators
mainly focus on optimizing convolutional or recurrent models, and cannot
efficiently support attention. In this paper, we present SpAtten, an efficient
algorithm-architecture co-design that leverages token sparsity, head sparsity,
and quantization opportunities to reduce the attention computation and memory
access. Inspired by the high redundancy of human languages, we propose the
novel cascade token pruning to prune away unimportant tokens in the sentence.
We also propose cascade head pruning to remove unessential heads. Cascade
pruning is fundamentally different from weight pruning since there is no
trainable weight in the attention mechanism, and the pruned tokens and heads
are selected on the fly. To efficiently support them on hardware, we design a
novel top-k engine to rank token and head importance scores with high
throughput. Furthermore, we propose progressive quantization that first fetches
MSBs only and performs the computation; if the confidence is low, it fetches
LSBs and recomputes the attention outputs, trading computation for memory
reduction.
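
To make the mechanisms above concrete, here is a rough software-level sketch of cascade token pruning with top-k importance ranking and of progressive quantization. The function names, the confidence threshold, and the use of summed attention probabilities as importance scores are illustrative assumptions, not the accelerator's exact datapath; SpAtten implements these steps in dedicated hardware (a high-throughput top-k engine and on-demand LSB fetching), while the NumPy sketch below only mirrors the algorithmic idea.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def cascade_token_prune(Q, K, V, cum_scores, keep_ratio=0.5):
    """Single-head sketch of cascade token pruning (hypothetical helper).

    Q, K, V: (n, d) arrays for the tokens still alive at this layer.
    cum_scores: (n,) importance scores accumulated across earlier layers.
    Returns the attention output, the indices of tokens to keep, and the
    updated cumulative scores; pruned tokens stay pruned in later layers.
    """
    d = Q.shape[-1]
    probs = softmax(Q @ K.T / np.sqrt(d))        # attention probabilities
    out = probs @ V
    # Importance of each token: the attention it receives, summed over all
    # queries and accumulated across layers (the "cascade").
    cum_scores = cum_scores + probs.sum(axis=0)
    # The top-k engine keeps only the k highest-scoring tokens.
    k = max(1, int(keep_ratio * len(cum_scores)))
    keep = np.sort(np.argsort(cum_scores)[-k:])
    return out, keep, cum_scores

def progressive_quant_scores(q_msb, k_msb, q_lsb, k_lsb, conf_thresh=0.3):
    """Sketch of progressive quantization (threshold value is an assumption).

    q_msb/k_msb are the already-dequantized high-order-bit contributions of
    Q and K; q_lsb/k_lsb are the low-order-bit contributions, fetched from
    DRAM only when the MSB-only result looks unreliable.
    """
    d = q_msb.shape[-1]
    probs = softmax(q_msb @ k_msb.T / np.sqrt(d))   # first pass, MSBs only
    # A peaked softmax suggests the MSB-only scores are already decisive.
    if probs.max(axis=-1).min() >= conf_thresh:
        return probs                                 # LSBs never fetched
    # Low confidence: add the LSB contributions and recompute.
    q = q_msb + q_lsb
    k = k_msb + k_lsb
    return softmax(q @ k.T / np.sqrt(d))
```

In the accelerator, the layer-by-layer shrinking of the token set is what removes DRAM traffic: once a token is pruned, its key and value need not be fetched by deeper layers, and analogous per-head importance scores drive cascade head pruning.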
Extensive experiments on 30 benchmarks show that, on average, SpAtten reduces
DRAM access by 10.0x with no accuracy loss, and achieves 1.6x, 3.0x, 162x, and 347x
speedups, and 1.4x, 3.2x, 1193x, and 4059x energy savings over the A3 accelerator,
the MNNFast accelerator, a TITAN Xp GPU, and a Xeon CPU, respectively.