Negative Yields Positive: Unified Dual-Path Adapter for Vision-Language Models
CoRR (2024)
Abstract
Recently, large-scale pre-trained Vision-Language Models (VLMs) have
demonstrated great potential in learning open-world visual representations, and
exhibit remarkable performance across a wide range of downstream tasks through
efficient fine-tuning. In this work, we introduce the concept of dual learning
into fine-tuning VLMs, i.e., we learn not only what an image is, but also what
an image isn't. Building on this concept, we propose DualAdapter, a novel
approach that enables dual-path adaptation of VLMs from both positive
and negative perspectives with only limited annotated samples. In the inference
stage, our DualAdapter performs unified predictions by simultaneously
conducting complementary positive selection and negative exclusion across
target classes, thereby enhancing the overall recognition accuracy of VLMs in
downstream tasks. Our extensive experimental results across 15 datasets
validate that the proposed DualAdapter outperforms existing state-of-the-art
methods on both few-shot learning and domain generalization tasks while
achieving competitive computational efficiency. Code is available at
https://github.com/zhangce01/DualAdapter.
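
The abstract describes inference as complementary positive selection and negative exclusion across target classes. The following is a minimal, hypothetical PyTorch sketch of that dual-path idea; the prompt conventions, the fusion rule (subtracting a weighted negative-path score), and the weight `alpha` are illustrative assumptions, not the released DualAdapter implementation.

```python
# Hypothetical sketch of dual-path classification: combine a positive
# "what the image is" score with a negative "what the image isn't" score.
# The templates, features, and fusion rule below are illustrative assumptions.
import torch
import torch.nn.functional as F

def dual_path_logits(image_feat, pos_text_feats, neg_text_feats, alpha=1.0):
    """Combine positive selection and negative exclusion over target classes.

    image_feat:     (d,)   L2-normalized image embedding
    pos_text_feats: (C, d) L2-normalized embeddings of positive class prompts
    neg_text_feats: (C, d) L2-normalized embeddings of negative class prompts
    alpha:          weight of the negative (exclusion) path -- an assumption
    """
    pos_logits = pos_text_feats @ image_feat   # similarity to positive prompts
    neg_logits = neg_text_feats @ image_feat   # similarity to negative prompts
    # High similarity to a class's negative prompt counts as evidence against
    # that class, so it is subtracted from the positive score.
    return pos_logits - alpha * neg_logits

if __name__ == "__main__":
    torch.manual_seed(0)
    d, C = 512, 10                              # embedding dim, number of classes
    image_feat = F.normalize(torch.randn(d), dim=0)
    pos_text_feats = F.normalize(torch.randn(C, d), dim=1)
    neg_text_feats = F.normalize(torch.randn(C, d), dim=1)
    logits = dual_path_logits(image_feat, pos_text_feats, neg_text_feats, alpha=0.5)
    print("Predicted class:", logits.argmax().item())
```

In practice the embeddings would come from a pre-trained VLM encoder (e.g., CLIP) rather than random tensors, and the positive/negative paths would be adapted on the few labeled samples; this sketch only illustrates how the two score paths can be fused into a unified prediction.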