CrossGET: Cross-Guided Ensemble of Tokens for Accelerating Vision-Language Transformers
arXiv (2023)
Abstract
Recent vision-language models have achieved tremendous advances. However,
their computational costs are also escalating dramatically, making model
acceleration exceedingly critical. To pursue more efficient vision-language
Transformers, this paper introduces Cross-Guided Ensemble of Tokens (CrossGET),
a general acceleration framework for vision-language Transformers. This
framework adaptively combines tokens in real time during inference,
significantly reducing computational costs while maintaining high performance.
CrossGET features two primary innovations: 1) Cross-Guided Matching and
Ensemble. CrossGET leverages cross-modal guided token matching and ensemble to
effectively utilize cross-modal information, achieving wider applicability
across both modality-independent models, e.g., CLIP, and modality-dependent
ones, e.g., BLIP2. 2) Complete-Graph Soft Matching. CrossGET introduces an
algorithm for the token-matching mechanism, ensuring reliable matching results
while facilitating parallelizability and high efficiency. Extensive experiments
have been conducted on various vision-language tasks, such as image-text
retrieval, visual reasoning, image captioning, and visual question answering.
The performance on both classic multimodal architectures and emerging
multimodal LLMs demonstrates the framework's effectiveness and versatility. The
code is available at https://github.com/sdc17/CrossGET.
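To make the token-ensemble idea concrete, the sketch below shows a generic similarity-based token-merging step: the most similar token pairs are matched by cosine similarity and replaced by their average. This is an illustrative simplification, not CrossGET's cross-guided or complete-graph matching algorithm; the function name and the greedy pairwise strategy are assumptions for exposition.

```python
import numpy as np

def merge_most_similar_tokens(tokens: np.ndarray, n_merge: int) -> np.ndarray:
    """Greedily merge the n_merge most similar token pairs by averaging.

    tokens: (N, D) array of token embeddings.
    Returns an array with N - n_merge tokens.

    Note: this is a generic token-merging sketch for illustration only,
    not the cross-guided / complete-graph soft matching used by CrossGET.
    """
    tokens = tokens.astype(float).copy()
    for _ in range(n_merge):
        n = tokens.shape[0]
        # Cosine similarity between all token pairs.
        norms = np.linalg.norm(tokens, axis=1, keepdims=True)
        unit = tokens / np.clip(norms, 1e-12, None)
        sim = unit @ unit.T
        np.fill_diagonal(sim, -np.inf)  # ignore self-similarity
        # Pick the most similar pair and ensemble it into one token.
        i, j = np.unravel_index(np.argmax(sim), sim.shape)
        merged = (tokens[i] + tokens[j]) / 2.0
        keep = [k for k in range(n) if k not in (i, j)]
        tokens = np.vstack([tokens[keep], merged[None, :]])
    return tokens
```

Reducing a sequence from N to N - n_merge tokens this way shrinks the quadratic attention cost in every subsequent Transformer layer, which is the source of the speedups such methods target.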