Domain-Adaptive Vision Transformers for Generalizing Across Visual Domains

Yunsung Cho, Jungmin Yun, Junehyoung Kwon, Youngbin Kim

IEEE Access (2023)

Abstract
Deep learning models often struggle to generalize to unseen domains because of the distribution shift between training data and real-world data. Domain generalization aims to train models that acquire general features from data across different domains, thereby improving performance on unseen domains. Inspired by the glance-and-gaze approach, which mimics the way humans perceive the real world, we introduce the domain-adaptive vision transformer (DA-ViT), a model that adopts a human cognitive perspective for domain generalization. We merge glance and gaze blocks so that each block first captures general information and then acquires more detailed, focused information. Unlike previous methods, which predominantly employ convolutional neural networks, we adapt the ViT architecture to learn features that are robust across different visual domains. DA-ViT is pretrained on the ImageNet-1K dataset and designed to adaptively learn features that generalize across various visual domains. We evaluated the adapted model for domain generalization and demonstrated that it outperforms ResNet-50-based non-ensemble algorithms by 0.7 percentage points on the VLCS benchmark dataset. Our proposed model introduces a new approach to domain generalization that leverages the capabilities of vision transformers to adapt effectively to diverse visual domains.
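
To make the glance-and-gaze idea concrete, below is a minimal PyTorch sketch of a transformer block that first attends globally over all patch tokens (glance) and then refines local detail (gaze). This is an illustrative assumption based only on the abstract: the class name GlanceGazeBlock, the depthwise-convolution gaze operator, and all hyperparameters are hypothetical and do not reflect the authors' actual DA-ViT implementation.

import torch
import torch.nn as nn

class GlanceGazeBlock(nn.Module):
    # Hypothetical glance-and-gaze block: global attention followed by
    # local refinement. Not the authors' published DA-ViT code.
    def __init__(self, dim, num_heads=8, window=7):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        # Glance: standard global self-attention over all tokens.
        self.glance = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        # Gaze: depthwise 1-D convolution as a cheap local-detail operator.
        self.gaze = nn.Conv1d(dim, dim, kernel_size=window,
                              padding=window // 2, groups=dim)
        self.norm3 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(),
                                 nn.Linear(4 * dim, dim))

    def forward(self, x):
        # x: (batch, num_tokens, dim)
        h = self.norm1(x)
        x = x + self.glance(h, h, h, need_weights=False)[0]  # global context
        h = self.norm2(x).transpose(1, 2)                    # (B, dim, N) for conv
        x = x + self.gaze(h).transpose(1, 2)                 # local refinement
        return x + self.mlp(self.norm3(x))                   # feed-forward

tokens = torch.randn(2, 196, 384)          # e.g. 14x14 patches, embedding dim 384
print(GlanceGazeBlock(384)(tokens).shape)  # torch.Size([2, 196, 384])

Stacking several such blocks would realize the coarse-to-fine progression the abstract describes, with early glance attention supplying domain-general context and the gaze stage sharpening discriminative detail.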
Keywords
generalizing across visual domains, vision, domain-adaptive