ALLaVA: Harnessing GPT4V-Synthesized Data for Lite Vision-Language Models
arXiv (2024)
Abstract
Large vision-language models (LVLMs) have shown promise in a broad range of
vision-language tasks with their strong reasoning and generalization
capabilities. However, they require considerable computational resources for
training and deployment. This study aims to bridge the performance gap between
traditional-scale LVLMs and resource-friendly lite versions by adopting
high-quality training data. To this end, we propose a comprehensive pipeline
for generating a synthetic dataset. The key idea is to leverage strong
proprietary models to generate (i) fine-grained image annotations for
vision-language alignment and (ii) complex reasoning visual question-answering
pairs for visual instruction fine-tuning, yielding 1.3M samples in total. We
train a series of lite VLMs on the synthetic dataset, and experimental results
demonstrate the effectiveness of the proposed scheme: the resulting models achieve
competitive performance on 17 benchmarks among 4B-scale LVLMs, and even perform on
par with 7B/13B-scale models on various benchmarks. This work highlights the
feasibility of adopting high-quality data in crafting more efficient LVLMs. We
name our dataset ALLaVA and open-source it to the research community for
developing better resource-efficient LVLMs for wider usage.
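
To make the two-stage synthesis concrete, below is a minimal Python sketch of the pipeline described above: each image is sent to a GPT-4V-class model twice, once for a fine-grained caption (vision-language alignment data) and once for a complex-reasoning question-answer pair (visual instruction data). The prompts, model name, and helper functions are illustrative assumptions for this sketch, not the authors' released code or prompts.

```python
# A minimal sketch (not the authors' released pipeline) of GPT-4V-based
# data synthesis: one fine-grained caption plus one complex-reasoning
# QA pair per image. Prompts and the model name are assumptions.
import base64
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical prompts standing in for the paper's actual instructions.
CAPTION_PROMPT = (
    "Describe this image in fine-grained detail, covering objects, "
    "attributes, spatial relations, and any visible text."
)
VQA_PROMPT = (
    "Based on the image, write one complex question that requires "
    "multi-step reasoning, followed by a detailed answer."
)

def encode_image(path: str) -> str:
    """Read a local image and return a base64 data URL for the API."""
    with open(path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode("utf-8")
    return f"data:image/jpeg;base64,{b64}"

def ask_gpt4v(image_path: str, prompt: str) -> str:
    """Send one image plus one instruction to a GPT-4V-class model."""
    response = client.chat.completions.create(
        model="gpt-4-vision-preview",  # any GPT-4V-class endpoint
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {"type": "image_url",
                 "image_url": {"url": encode_image(image_path)}},
            ],
        }],
        max_tokens=1024,
    )
    return response.choices[0].message.content

def synthesize_sample(image_path: str) -> dict:
    """Produce one training sample: (i) a fine-grained caption for
    alignment and (ii) a reasoning QA pair for instruction tuning."""
    return {
        "image": image_path,
        "caption": ask_gpt4v(image_path, CAPTION_PROMPT),
        "qa_pair": ask_gpt4v(image_path, VQA_PROMPT),
    }
```

Running a loop of `synthesize_sample` over a source image corpus would yield alignment and instruction data of the two kinds the abstract describes; the released ALLaVA dataset contains 1.3M such samples.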