Cobra: Extending Mamba to Multi-Modal Large Language Model for Efficient Inference
CoRR (2024)
Abstract
In recent years, the application of multimodal large language models (MLLMs)
in various fields has achieved remarkable success. However, as the foundation
model for many downstream tasks, current MLLMs are built on the well-known
Transformer network, whose attention mechanism has less efficient quadratic
computational complexity. To improve the efficiency of such foundation models,
we propose Cobra, an MLLM with linear computational complexity. Specifically,
Cobra integrates the efficient Mamba language model into the visual modality.
Moreover, we explore and study various modal fusion schemes to create an
effective multi-modal Mamba. Extensive experiments demonstrate that (1) Cobra
achieves highly competitive performance against current computationally
efficient state-of-the-art methods, e.g., LLaVA-Phi, TinyLLaVA, and MobileVLM
v2, and runs faster due to its linear sequential modeling; (2) interestingly,
results on challenging closed-set prediction benchmarks show that Cobra
performs well at overcoming visual illusions and judging spatial relationships;
and (3) notably, Cobra even achieves performance comparable to LLaVA with about
43% of the number of parameters. We will make all of Cobra's code open-source
and hope the proposed method can facilitate future research on complexity
problems in MLLMs. Our project page is available at:
https://sites.google.com/view/cobravlm.
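
The abstract describes the core design only at a high level: visual features are fused into a Mamba language backbone so the whole token sequence is processed in linear time. The PyTorch sketch below is a minimal, hypothetical illustration of that idea, not the authors' implementation: the module names (CobraStyleVLM, LinearTimeBlock), the dimensions, and the simplified gated recurrence standing in for Mamba's selective scan are all assumptions made for illustration.

```python
# Hypothetical sketch of a Cobra-style multi-modal pipeline (not the authors' code).
# A vision encoder's patch features are projected into the language model's
# embedding space, prepended to the text tokens, and processed by a simplified
# linear-time recurrence standing in for a Mamba block.
import torch
import torch.nn as nn

class LinearTimeBlock(nn.Module):
    """Toy gated recurrence with O(L) cost; a stand-in for a Mamba block."""
    def __init__(self, d_model: int):
        super().__init__()
        self.in_proj = nn.Linear(d_model, 2 * d_model)
        self.decay = nn.Parameter(torch.full((d_model,), 0.9))  # per-channel state decay
        self.out_proj = nn.Linear(d_model, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, L, D)
        u, g = self.in_proj(x).chunk(2, dim=-1)
        h = torch.zeros_like(u[:, 0])         # fixed-size recurrent state (B, D)
        outs = []
        for t in range(u.size(1)):            # one pass over the sequence: O(L)
            h = self.decay * h + u[:, t]      # state update; no pairwise attention
            outs.append(h)
        y = torch.stack(outs, dim=1) * torch.sigmoid(g)  # gated output
        return x + self.out_proj(y)           # residual connection

class CobraStyleVLM(nn.Module):
    def __init__(self, d_vision=768, d_model=512, n_layers=4, vocab=32000):
        super().__init__()
        self.projector = nn.Sequential(       # maps vision features into LM space
            nn.Linear(d_vision, d_model), nn.GELU(), nn.Linear(d_model, d_model))
        self.embed = nn.Embedding(vocab, d_model)
        self.blocks = nn.ModuleList(LinearTimeBlock(d_model) for _ in range(n_layers))
        self.head = nn.Linear(d_model, vocab)

    def forward(self, patch_feats, text_ids):
        # patch_feats: (B, N_patches, d_vision) from a (typically frozen) vision encoder
        vis = self.projector(patch_feats)
        txt = self.embed(text_ids)
        x = torch.cat([vis, txt], dim=1)      # prepend visual tokens to text tokens
        for blk in self.blocks:
            x = blk(x)
        return self.head(x)                   # next-token logits

# Smoke test with random tensors standing in for real encoder outputs.
model = CobraStyleVLM()
logits = model(torch.randn(2, 16, 768), torch.randint(0, 32000, (2, 8)))
print(logits.shape)  # torch.Size([2, 24, 32000])
```

The contrast with a Transformer backbone sits in the per-layer loop: each token updates a fixed-size state exactly once, giving O(L) cost per layer, whereas self-attention compares every pair of tokens for O(L^2).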