Dual contrastive universal adaptation network for multi-source visual recognition

Knowledge-Based Systems (2022)

Abstract
This paper explores the recently proposed Universal Multi-source Domain Adaptation (UniMDA) task. UniMDA differs from existing domain adaptation (DA) tasks such as Multi-source DA, Closed-set DA, and Universal DA: it not only handles multiple source domains but also requires no prior knowledge about the overlap between the target and source label sets. The UniMDA task poses three challenges: (i) domain shift among the multiple source domains; (ii) domain shift between the target and source domains; and (iii) category shift between the target domain and each source domain. To tackle these challenges, we formulate a universal multi-source adaptation network, termed the Multi-Source Dual Contrastive Network (MSDCN), which comprises a transferability rule and a contrastive module. In addition, to handle the multi-source scenario, we propose (1) pairwise similarity maximization over examples from multiple source domains, and (2) an alternating optimization strategy for training the ensemble of multiple source classifiers end-to-end. The proposed method handles the UniMDA scenario in general, where the label set of each source domain may differ from that of the target domain, while keeping the method's complexity unchanged across different DA scenarios. Experiments are conducted on several real-world multi-source benchmarks. The results show that MSDCN works stably and exceeds the state-of-the-art performance of existing domain adaptation algorithms.
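As a rough illustration of the pairwise similarity maximization idea mentioned above, the sketch below implements a generic supervised contrastive objective over embeddings pooled from multiple source domains. This is an assumption-laden approximation, not the paper's exact loss: the function name, the temperature value, and the SupCon-style formulation are illustrative choices.

```python
import torch
import torch.nn.functional as F

def pairwise_similarity_loss(features, labels, temperature=0.1):
    """Illustrative supervised contrastive-style loss (assumption, not the
    paper's exact objective): pulls together examples that share a class
    label, regardless of which source domain they come from.

    features: (N, D) embeddings pooled from all source domains
    labels:   (N,)   class labels for those embeddings
    """
    features = F.normalize(features, dim=1)          # work in cosine-similarity space
    sim = features @ features.t() / temperature      # (N, N) pairwise similarities

    n = features.size(0)
    eye = torch.eye(n, dtype=torch.bool, device=features.device)
    pos_mask = labels.unsqueeze(0).eq(labels.unsqueeze(1)) & ~eye  # same-class pairs

    # Row-wise log-softmax, excluding self-similarity from the denominator
    sim = sim.masked_fill(eye, float('-inf'))
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)

    # Average log-probability of positive pairs (maximizing pairwise similarity)
    pos_counts = pos_mask.sum(dim=1).clamp(min=1)
    loss = -(log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1)) / pos_counts
    return loss.mean()

# Toy usage with embeddings drawn from two hypothetical source domains
if __name__ == "__main__":
    feats = torch.randn(8, 128)                      # 8 examples, 128-d embeddings
    labels = torch.tensor([0, 1, 0, 1, 2, 2, 0, 1])  # class labels across domains
    print(pairwise_similarity_loss(feats, labels).item())
```

In this sketch, maximizing the similarity of same-class pairs across domains serves the same purpose the abstract attributes to pairwise similarity maximization: aligning the multiple source domains in a shared embedding space before adapting to the target.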
Keywords
Image classification, Contrastive learning, Universal multi-source domain adaptation, Deep learning