Dual-Consistency Self-Training For Unsupervised Domain Adaptation.

ICIP (2021)

Abstract
Unsupervised domain adaptation (UDA) is a challenging task in which unlabeled target data exhibit a domain discrepancy from labeled source data. Many methods learn domain-invariant features by aligning marginal distributions, but they ignore the intrinsic structure of the target domain, which can lead to insufficient or incorrect alignment. Class-level alignment methods instead align features of the same class across the source and target domains, but they rely heavily on the accuracy of the pseudo-labels predicted for target data. Here, we develop a novel self-training method that obtains more accurate pseudo-labels via a dual-consistency strategy that models the intrinsic structure of the target domain. The strategy first improves pseudo-label accuracy through voting consistency, and then reduces the negative effect of incorrect predictions through structure consistency, which exploits the relationship between intrinsic structures across domains. Our method achieves performance comparable to the state of the art on three standard UDA benchmarks.
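The voting-consistency idea described above can be illustrated with a minimal sketch: a target sample receives a pseudo-label only when multiple predictions (e.g. from augmented views or classifier heads) agree and the averaged confidence is high. The function name, the unanimity rule, and the confidence threshold are assumptions for illustration; the paper's exact procedure is not specified in this abstract.

```python
import numpy as np

def voting_consistency_pseudo_labels(prob_sets, threshold=0.9):
    """Illustrative voting-consistency filter (hypothetical helper).

    prob_sets: array of shape (n_votes, n_samples, n_classes) holding
    softmax outputs from several stochastic predictions per sample.
    Returns (labels, mask): the averaged prediction per sample, and a
    boolean mask selecting samples whose pseudo-label is kept.
    """
    votes = prob_sets.argmax(axis=2)            # (n_votes, n_samples) hard votes
    mean_prob = prob_sets.mean(axis=0)          # (n_samples, n_classes) averaged probs
    labels = mean_prob.argmax(axis=1)           # consensus pseudo-label per sample
    unanimous = (votes == labels).all(axis=0)   # all voters agree with the consensus
    confident = mean_prob.max(axis=1) >= threshold  # averaged confidence is high
    return labels, unanimous & confident
```

In this sketch, only samples passing both checks would contribute to class-level alignment, which is one way to reduce the influence of noisy pseudo-labels before the structure-consistency step.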
Keywords
Consistency, Self-training, Unsupervised Domain Adaptation