M^2Chat: Empowering VLM for Multimodal LLM Interleaved Text-Image Generation

arXiv (2023)

Abstract
While current LLM chatbots like GPT-4V bridge the gap between human instructions and visual representations to enable text-image generation, they still lack efficient alignment methods for high-fidelity performance on multiple downstream tasks. In this paper, we propose M^2Chat, a novel unified multimodal LLM framework for generating interleaved text-image conversations across various scenarios. Specifically, we propose an M^3Adapter that efficiently integrates granular low-level visual information and high-level semantic features from multi-modality prompts. On top of the well-aligned fused features, the M^3Adapter employs a learnable gating strategy to adaptively balance model creativity and consistency across various tasks. Moreover, to further enhance the effectiveness of the M^3Adapter while preserving coherent semantic context comprehension, we introduce a two-stage M^3FT fine-tuning strategy, which optimizes disjoint groups of parameters for image-text alignment and visual instruction tuning, respectively. Extensive experiments demonstrate that M^2Chat surpasses state-of-the-art counterparts across diverse benchmarks, showcasing its prowess in interleaved generation, storytelling, and multimodal dialogue systems. The demo and code are available at https://mattie-e.github.io/M2Chat.github.io.
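
As a rough illustration of the gated fusion idea the abstract describes, below is a minimal PyTorch sketch of an adapter that mixes low-level visual features with high-level semantic features through a learnable gate. All names and dimensions (`GatedFusionAdapter`, `visual_dim`, `semantic_dim`, `hidden_dim`) are hypothetical and do not reflect the paper's actual M^3Adapter implementation.

```python
import torch
import torch.nn as nn

class GatedFusionAdapter(nn.Module):
    """Hypothetical sketch of a gated adapter that fuses low-level visual
    features with high-level semantic features via a learnable gate.
    Illustrative only; not the paper's M^3Adapter."""

    def __init__(self, visual_dim: int, semantic_dim: int, hidden_dim: int):
        super().__init__()
        # Project both feature streams into a shared hidden space.
        self.visual_proj = nn.Linear(visual_dim, hidden_dim)
        self.semantic_proj = nn.Linear(semantic_dim, hidden_dim)
        # Learnable gate predicts a per-dimension mixing weight in [0, 1].
        self.gate = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim),
            nn.Sigmoid(),
        )

    def forward(self, visual: torch.Tensor, semantic: torch.Tensor) -> torch.Tensor:
        v = self.visual_proj(visual)      # (batch, seq, hidden)
        s = self.semantic_proj(semantic)  # (batch, seq, hidden)
        g = self.gate(torch.cat([v, s], dim=-1))
        # The gate trades off low-level visual detail against high-level
        # semantic context, analogous to the creativity/consistency
        # balance the abstract attributes to the gating strategy.
        return g * v + (1.0 - g) * s

# Usage: fuse per-token features from two encoders of different widths.
adapter = GatedFusionAdapter(visual_dim=1024, semantic_dim=768, hidden_dim=512)
fused = adapter(torch.randn(2, 16, 1024), torch.randn(2, 16, 768))
print(fused.shape)  # torch.Size([2, 16, 512])
```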