Pixel Aligned Language Models
CoRR (2023)
Abstract
Large language models have achieved great success in recent years, and so
have their variants in vision. Existing vision-language models can describe images
in natural languages, answer visual-related questions, or perform complex
reasoning about the image. However, it is yet unclear how localization tasks,
such as word grounding or referring localization, can be performed using large
language models. In this work, we aim to develop a vision-language model that
can take locations, for example, a set of points or boxes, as either inputs or
outputs. When taking locations as inputs, the model performs
location-conditioned captioning, which generates captions for the indicated
object or region. When generating locations as outputs, our model regresses
pixel coordinates for each output word generated by the language model, and
thus performs dense word grounding. Our model is pre-trained on the Localized
Narrative dataset, which contains pixel-word-aligned captioning from human
attention. We show our model can be applied to various location-aware
vision-language tasks, including referring localization, location-conditioned
captioning, and dense object captioning, achieving state-of-the-art performance
on RefCOCO and Visual Genome. Project page: https://jerryxu.net/PixelLLM .
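The abstract's key mechanism is that the language model emits a pixel coordinate for every generated word, yielding dense word grounding. This can be pictured as a small regression head applied to the LM's per-token hidden states. The following is a minimal PyTorch sketch under that assumption; the class name, layer sizes, and sigmoid normalization are illustrative choices, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class PerWordLocalizationHead(nn.Module):
    """Hypothetical sketch: an MLP that regresses a 2D point from each
    language-model hidden state, so every generated word is paired with
    an (x, y) location. Sizes and activations are assumptions."""

    def __init__(self, hidden_dim: int = 768):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim),
            nn.GELU(),
            nn.Linear(hidden_dim, 2),  # one (x, y) pair per token
        )

    def forward(self, token_states: torch.Tensor) -> torch.Tensor:
        # token_states: (batch, seq_len, hidden_dim) from the LM.
        # Sigmoid maps outputs into [0, 1]; multiplying by the image
        # width/height would recover absolute pixel coordinates.
        return torch.sigmoid(self.mlp(token_states))

# Usage: one normalized coordinate per generated word.
states = torch.randn(1, 12, 768)             # e.g. 12 generated tokens
coords = PerWordLocalizationHead()(states)   # -> shape (1, 12, 2)
```

A per-token regression head like this keeps localization differentiable end to end with caption generation, which matches the abstract's framing of training on pixel-word-aligned supervision from Localized Narratives.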