APPLeNet: Visual Attention Parameterized Prompt Learning for Few-Shot Remote Sensing Image Generalization Using CLIP

2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2023

Citations: 5 | Views: 36
Abstract
In recent years, the success of large-scale vision-language models (VLMs) such as CLIP has led to their increased usage in various computer vision tasks. These models enable zero-shot inference through carefully crafted instructional text prompts without task-specific supervision. However, the potential of VLMs for generalization tasks in remote sensing (RS) has not been fully realized. To address this research gap, we propose a novel image-conditioned prompt learning strategy called the Visual Attention Parameterized Prompts Learning Network (APPLeNet). APPLeNet emphasizes the importance of multi-scale feature learning in RS scene classification and disentangles visual style and content primitives for domain generalization tasks. To achieve this, APPLeNet combines visual content features obtained from different layers of the vision encoder and style properties obtained from feature statistics of domain-specific batches. An attention-driven injection module is further introduced to generate visual tokens from this information. We also introduce an anti-correlation regularizer to ensure discrimination among the token embeddings, as this visual information is combined with the textual tokens. To validate APPLeNet, we curated four available RS benchmarks and introduced experimental protocols and datasets for three domain generalization tasks. Our results consistently outperform the relevant literature, and the code is available at https://github.com/mainaksingha01/APPLeNet.
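The anti-correlation regularizer mentioned in the abstract encourages the generated visual tokens to be mutually discriminative. A minimal sketch of one plausible formulation is shown below, assuming a mean-squared penalty on the off-diagonal cosine similarities of the token embeddings; the exact loss used by APPLeNet may differ, and the function name is illustrative.

```python
import numpy as np

def anti_correlation_loss(tokens: np.ndarray) -> float:
    """Hypothetical anti-correlation penalty on token embeddings.

    tokens: (M, d) array of M visual token embeddings.
    Returns the mean squared off-diagonal cosine similarity,
    which is 0 when all tokens are mutually orthogonal.
    """
    # L2-normalize each token so the Gram matrix holds cosine similarities
    normed = tokens / np.linalg.norm(tokens, axis=1, keepdims=True)
    gram = normed @ normed.T                    # (M, M) pairwise cosine similarities
    off_diag = gram - np.eye(tokens.shape[0])   # zero out self-similarity on the diagonal
    return float(np.mean(off_diag ** 2))

# Mutually orthogonal tokens incur zero penalty; identical tokens are penalized.
print(anti_correlation_loss(np.eye(3)))        # 0.0
print(anti_correlation_loss(np.ones((3, 4))))  # > 0, all tokens collinear
```

Minimizing such a term alongside the classification objective pushes the injected visual tokens apart in embedding space before they are combined with the textual tokens.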
Keywords
anti-correlation regularizer,APPLeNet,attention-driven injection module,CLIP,computer vision tasks,content primitives,crafted instructional text,domain generalization tasks,domain-specific batches,feature statistics,few-shot remote sensing image generalization,image-conditioned prompt learning strategy,large-scale vision-language models,multiscale feature learning,RS benchmarks,RS scene classification,style properties,task-specific supervision,vision encoder,visual attention parameterized prompt learning,visual content features,visual information,visual style,visual tokens,VLMs,zero-shot inference