Unsupervised Pixel-Level Domain Adaptation with Generative Adversarial Networks

2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)

Citations: 1821 | Views: 716
Abstract
Collecting well-annotated image datasets to train modern machine learning algorithms is prohibitively expensive for many tasks. One appealing alternative is rendering synthetic data where ground-truth annotations are generated automatically. Unfortunately, models trained purely on rendered images fail to generalize to real images. To address this shortcoming, prior work introduced unsupervised domain adaptation algorithms that have tried to either map representations between the two domains, or learn to extract features that are domain-invariant. In this work, we approach the problem in a new light by learning in an unsupervised manner a transformation in the pixel space from one domain to the other. Our generative adversarial network (GAN)-based method adapts source-domain images to appear as if drawn from the target domain. Our approach not only produces plausible samples, but also outperforms the state-of-the-art on a number of unsupervised domain adaptation scenarios by large margins. Finally, we demonstrate that the adaptation process generalizes to object classes unseen during training.
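The core idea in the abstract—a generator that re-renders source-domain images so they look like target-domain images, while a task classifier trains on the adapted images with the original source labels—can be illustrated with a short sketch. The following PyTorch code is a minimal illustrative sketch, not the authors' architecture: the layer shapes, noise dimension, and helper names (Generator, Discriminator, Classifier, train_step) are assumptions for exposition.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Re-renders a source image (conditioned on noise) to look target-like."""
    def __init__(self, channels=3, noise_dim=10):
        super().__init__()
        self.noise_dim = noise_dim
        self.net = nn.Sequential(
            nn.Conv2d(channels + noise_dim, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, channels, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x, z):
        # Tile the noise vector into a spatial map and concatenate it with the image.
        z_map = z.view(z.size(0), self.noise_dim, 1, 1).expand(
            -1, -1, x.size(2), x.size(3))
        return self.net(torch.cat([x, z_map], dim=1))

class Discriminator(nn.Module):
    """Scores whether an image plausibly comes from the target domain."""
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(128, 1),
        )

    def forward(self, x):
        return self.net(x)

class Classifier(nn.Module):
    """Task network trained on adapted images with source-domain labels."""
    def __init__(self, channels=3, num_classes=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(), nn.Linear(32 * 16, num_classes),
        )

    def forward(self, x):
        return self.net(x)

def train_step(x_src, y_src, x_tgt, G, D, T, opt_d, opt_gt, bce, ce):
    """One adversarial step: D learns real-vs-adapted; G tries to fool D
    while T is trained on the adapted images using the source labels."""
    z = torch.randn(x_src.size(0), G.noise_dim)
    x_adapted = G(x_src, z)

    # Discriminator update: real target images vs. adapted source images.
    real = torch.ones(x_tgt.size(0), 1)
    fake = torch.zeros(x_src.size(0), 1)
    d_loss = bce(D(x_tgt), real) + bce(D(x_adapted.detach()), fake)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator + classifier update: adapted images should fool D and
    # remain classifiable with their original (source) labels.
    g_loss = (bce(D(x_adapted), torch.ones(x_src.size(0), 1))
              + ce(T(x_adapted), y_src))
    opt_gt.zero_grad(); g_loss.backward(); opt_gt.step()
    return d_loss.item(), g_loss.item()
```

In this sketch the generator and classifier share one optimizer (e.g. opt_gt over list(G.parameters()) + list(T.parameters())), so the adapted images are pushed to be both target-like and label-preserving; the label-preservation term is what lets the classifier transfer to real target images at test time.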
Keywords
unsupervised pixel-level domain adaptation, generative adversarial network, well-annotated image datasets, modern machine learning algorithms, ground-truth annotations, rendered images, unsupervised domain adaptation algorithms, map representations, pixel space, source-domain images, unsupervised domain adaptation scenarios, adaptation process