In-context Prompt Learning for Test-time Vision Recognition with Frozen Vision-language Model
CoRR (2024)
Abstract
Existing pre-trained vision-language models, e.g., CLIP, have demonstrated
impressive zero-shot generalization capabilities in various downstream tasks.
However, their performance degrades significantly when test inputs exhibit
distribution shifts. To this end, we explore the concept of
test-time prompt tuning (TTPT), which enables the adaptation of the CLIP model
to novel downstream tasks through a single optimization step on an
unsupervised objective involving the test sample. Motivated by in-context
learning in natural language processing (NLP), we propose
In-Context Prompt Learning (InCPL) for the test-time visual recognition task. InCPL
involves associating a new test sample with very few or even just one labeled
example as its in-context prompt. As a result, it can reliably estimate a label
for the test sample, thereby facilitating the model adaptation process. InCPL
first employs a token net to represent language descriptions as visual prompts
that the vision encoder of a CLIP model can comprehend. Paired with in-context
examples, we further propose a context-aware unsupervised loss to optimize test
sample-aware visual prompts. This optimization allows a pre-trained, frozen
CLIP model to be adapted to a test sample from any task using its learned
adaptive prompt. Our method demonstrates superior performance, achieving
state-of-the-art results across various downstream datasets.
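The core mechanism the abstract describes — adapting a frozen model to a single test sample via one optimization step of a learnable prompt under an unsupervised objective — can be sketched minimally. The code below is an illustrative toy, not InCPL itself: the frozen CLIP encoders are replaced by fixed random projections, and the context-aware loss is replaced by a simple entropy-minimization surrogate (a common TTPT objective); all names and dimensions are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def entropy(p):
    return float(-(p * np.log(p + 1e-12)).sum())

# Stand-ins for the frozen CLIP encoders (hypothetical random projections).
W_img = rng.normal(size=(16, 8))       # "vision encoder" for a 16-dim input
text_feats = rng.normal(size=(3, 8))   # embeddings of 3 class descriptions

x = rng.normal(size=16)                # one unlabeled test sample
prompt = np.zeros(16)                  # learnable visual prompt (input space)

def logits(prompt):
    f = (x + prompt) @ W_img           # prompt injected before the frozen encoder
    return (text_feats @ f) / 10.0     # scaled similarities to class embeddings

# One optimization step on the unsupervised entropy objective,
# using a finite-difference gradient (only the prompt is updated;
# W_img and text_feats stay frozen).
lr, eps = 0.01, 1e-5
e0 = entropy(softmax(logits(prompt)))
grad = np.zeros_like(prompt)
for i in range(prompt.size):
    d = np.zeros_like(prompt)
    d[i] = eps
    grad[i] = (entropy(softmax(logits(prompt + d))) - e0) / eps
prompt -= lr * grad

e1 = entropy(softmax(logits(prompt)))
print(f"prediction entropy before: {e0:.4f}  after: {e1:.4f}")
```

The single gradient step sharpens the model's prediction for this one test sample while all pre-trained weights remain untouched — the same division of labor the paper's prompt tuning relies on, with InCPL additionally conditioning the loss on a few labeled in-context examples.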