Classifier Decoupled Training for Black-Box Unsupervised Domain Adaptation

Xiangchuang Chen, Yunhang Shen, Xuan Luo, Yan Zhang, Ke Li, Shaohui Lin

Pattern Recognition and Computer Vision, PRCV 2023, Part III (2024)

Abstract
Black-box unsupervised domain adaptation (B2UDA) is a challenging variant of unsupervised domain adaptation in which the source model is treated as a black box and only its outputs are accessible. Previous works have treated the source model as a pseudo-labeling tool and formulated B2UDA as a learning-with-noisy-labels (LNL) problem. However, they have ignored the gap between the "shift noise" caused by domain shift and the noise hypothesized in LNL. To alleviate the negative impact of shift noise on B2UDA, we propose a novel framework called Classifier Decoupling Training (CDT), which introduces two additional classifiers to assist model training together with a new label-confidence sampling scheme. First, we introduce a self-training classifier, discarded at test time, to learn robust feature representations from low-confidence samples, while the final classifier is trained only on a few high-confidence samples. This decouples the training of high-confidence and low-confidence samples, mitigating the impact of noisy labels on the final classifier while avoiding overfitting to the few confident samples. Second, an adversarial classifier pushes the feature distribution of low-confidence samples toward that of high-confidence samples through adversarial training, which greatly reduces intra-class variation. Third, we propose a novel ETP-entropy Sampling (E2S) to collect class-balanced high-confidence samples, leveraging the early-time training phenomenon from LNL. Extensive experiments on several benchmarks show that the proposed CDT achieves 88.2%, 71.6%, and 81.3% accuracy on Office-31, Office-Home, and VisDA-17, respectively, outperforming state-of-the-art methods.
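The sampling idea described above, selecting a class-balanced set of high-confidence samples by prediction entropy, can be sketched as follows. This is a minimal illustration under assumed interfaces (the function name, `per_class_k` budget, and NumPy representation are ours), not the paper's exact E2S procedure, which additionally exploits the early-time training phenomenon.

```python
import numpy as np

def class_balanced_low_entropy_sample(probs, per_class_k):
    """Pick a class-balanced high-confidence subset via prediction entropy.

    probs: (N, C) softmax outputs from the black-box source model.
    per_class_k: number of lowest-entropy samples to keep per predicted class.
    Returns sorted indices of the selected samples.
    """
    # Per-sample Shannon entropy of the predictive distribution.
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)
    preds = probs.argmax(axis=1)

    selected = []
    for c in range(probs.shape[1]):
        idx = np.where(preds == c)[0]
        # Keep the k most confident (lowest-entropy) samples of this class,
        # which enforces class balance in the high-confidence set.
        order = idx[np.argsort(entropy[idx])]
        selected.extend(order[:per_class_k].tolist())
    return sorted(selected)
```

The remaining (unselected) low-confidence samples would then feed the self-training and adversarial classifiers rather than the final classifier.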
Keywords
Domain adaptation, Adversarial learning, Noisy label