Disentangled Image Generation for Unsupervised Domain Adaptation.

ECCV Workshops (2020)

Abstract
We explore the use of generative modeling in unsupervised domain adaptation (UDA), where annotated real images are available only in the source domain, and pseudo images are generated in a manner that allows independent control of class (content) and nuisance variability (style). The proposed method differs from existing generative UDA models in that we explicitly disentangle the content and nuisance features at different layers of the generator network. We demonstrate the effectiveness of (pseudo-)conditional generation by showing that it improves upon baseline methods. Moreover, we outperform the previous state of the art by significant margins on recently introduced multi-source domain adaptation (MSDA) tasks, achieving error reduction rates of \(50.27 \%\), \(89.54 \%\), \(75.35 \%\), \(27.46 \%\), and \(94.3 \%\) across all 5 tasks.
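The abstract does not spell out the architecture, but a common way to realize this kind of layer-wise disentanglement is to feed a content code into the early layers of the generator while injecting the style (nuisance) code only through per-channel normalization statistics, e.g., adaptive instance normalization (AdaIN). The sketch below is an illustration of that general mechanism, not the authors' exact model; all names and shapes are hypothetical:

```python
import numpy as np

def adain(content_feat, gamma, beta, eps=1e-5):
    """Adaptive instance normalization: restyle per-channel statistics.

    content_feat: (C, H, W) feature map carrying class/content information.
    gamma, beta:  (C,) scale and shift derived from a style (nuisance) code.
    """
    mean = content_feat.mean(axis=(1, 2), keepdims=True)
    std = content_feat.std(axis=(1, 2), keepdims=True) + eps
    normalized = (content_feat - mean) / std          # strip original style
    return gamma[:, None, None] * normalized + beta[:, None, None]

# Hypothetical split: early layers consume only the content code, later
# layers receive the style code only via AdaIN, so class and nuisance
# variability can be controlled independently.
rng = np.random.default_rng(0)
content = rng.standard_normal((8, 16, 16))              # content features
gamma = rng.standard_normal(8)                          # style scale
beta = rng.standard_normal(8)                           # style shift
styled = adain(content, gamma, beta)

# After AdaIN each channel's mean matches the style shift, while the
# spatial structure (content) is preserved.
print(np.allclose(styled.mean(axis=(1, 2)), beta, atol=1e-4))  # True
```

Swapping `gamma`/`beta` while keeping `content` fixed changes only the style of the output, which is the property the paper exploits to generate pseudo images with controlled nuisance variability.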
Keywords
unsupervised domain adaptation, image generation