MLP Can Be A Good Transformer Learner
CVPR 2024
Abstract
The self-attention mechanism is the key component of the Transformer but is often criticized for its computational demands. Previous token pruning works motivate their methods from the perspective of computational redundancy but still need to load the full network and incur the same memory costs. This paper introduces a novel strategy that simplifies vision transformers and reduces computational load through the selective removal of non-essential attention layers, guided by entropy considerations. We identify that, for the attention layers in the bottom blocks, their subsequent MLP layers, i.e., two feed-forward layers, can elicit the same entropy quantity. Meanwhile, the accompanying MLPs are under-exploited, since they exhibit smaller feature entropy than the MLPs in the top blocks. Therefore, we propose to integrate the uninformative attention layers into their subsequent counterparts by degenerating them into identity mappings, yielding MLP-only transformer blocks. Experimental results on ImageNet-1k show that the proposed method can remove 40% of the attention layers of DeiT-B, improving throughput and easing the memory bound without compromising performance.
Code is available at https://github.com/sihaoevery/lambda_vit.