TuiGAN: Learning Versatile Image-to-Image Translation with Two Unpaired Images
European Conference on Computer Vision (2020)
Abstract
An unsupervised image-to-image translation (UI2I) task deals with learning a mapping between two domains without paired images. While existing UI2I methods usually require numerous unpaired images from different domains for training, there are many scenarios where training data is quite limited. In this paper, we argue that even if each domain contains a single image, UI2I can still be achieved. To this end, we propose TuiGAN, a generative model that is trained on only two unpaired images and amounts to one-shot unsupervised learning. With TuiGAN, an image is translated in a coarse-to-fine manner where the generated image is gradually refined from global structures to local details. We conduct extensive experiments to verify that our versatile method can outperform strong baselines on a wide variety of UI2I tasks. Moreover, TuiGAN is capable of achieving comparable performance with the state-of-the-art UI2I models trained with sufficient data.
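The coarse-to-fine refinement described in the abstract can be illustrated with a minimal sketch. This is not the authors' code: the pyramid depth, the nearest-neighbour resampling, and the `refine` placeholder (standing in for the scale-specific generators) are all assumptions made for illustration. The point is only the control flow: translate at the coarsest scale first, then repeatedly upsample and refine toward full resolution.

```python
# Hypothetical sketch of a coarse-to-fine translation loop (assumed names,
# not TuiGAN's actual implementation). Images are plain 2D lists of floats.

def downsample(img, factor):
    """Nearest-neighbour downsample of a 2D grid by an integer factor."""
    return [row[::factor] for row in img[::factor]]

def upsample(img, factor):
    """Nearest-neighbour upsample of a 2D grid by an integer factor."""
    out = []
    for row in img:
        wide = [v for v in row for _ in range(factor)]
        out.extend([wide[:] for _ in range(factor)])
    return out

def refine(coarse, source):
    """Placeholder for a scale-specific generator: here it just averages the
    upsampled coarse translation with the source at this scale."""
    return [[(c + s) / 2 for c, s in zip(cr, sr)]
            for cr, sr in zip(coarse, source)]

def translate(source, num_scales=3):
    """Coarse-to-fine translation: build an image pyramid of the source,
    start from the coarsest level (global structure), then repeatedly
    upsample and refine to recover local details."""
    pyramid = [downsample(source, 2 ** n) for n in range(num_scales)]
    result = pyramid[-1]                         # coarsest scale
    for level in reversed(range(num_scales - 1)):
        result = upsample(result, 2)             # grow to the next scale
        result = refine(result, pyramid[level])  # add finer details
    return result
```

In the real model each `refine` step would be a trained generator with its own discriminator at that scale; the sketch only mirrors the progression from global structure to local detail.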
Keywords
Image-to-Image Translation, Generative Adversarial Network, One-shot Unsupervised Learning