Dynamic Latent Scale for GAN Inversion.

International Conference on Pattern Recognition Applications and Methods (ICPRAM), 2022

Abstract
When the latent random variable of a GAN is i.i.d., an encoder trained with mean squared error loss to invert the generator does not converge, because the generator loses information about the latent random variable. In this paper, we introduce dynamic latent scale GAN, a method for training a generator that does not lose the information of the latent random variable, together with an encoder that inverts the generator. Dynamic latent scale GAN dynamically scales each element of the latent random variable during GAN training to adjust its entropy. As training progresses, the entropy of the latent random variable decreases until the generator no longer loses the latent information, which allows the encoder trained with mean squared error loss to converge. The scale of the latent random variable is approximated by tracking the element-wise variance of the predicted latent random variable over previous training steps. Since the scale of the latent random variable changes dynamically, the encoder must be trained jointly with the generator during GAN training. The encoder can be integrated with the discriminator, and the encoder loss is added to the generator loss for fast training.
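The abstract outlines a concrete training loop: scale each latent element, invert the generator with a squared-error loss, and track the predicted latent's element-wise variance across steps. Below is a minimal, self-contained PyTorch sketch of one plausible reading. The toy networks, hinge adversarial losses, the EMA decay, and the choice to use the square root of the tracked variance as the per-element scale are all illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

latent_dim, img_dim, decay = 8, 32, 0.99  # hypothetical sizes and EMA decay

# Toy generator: maps the scaled latent to an "image" vector.
G = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, img_dim))

class EncoderDiscriminator(nn.Module):
    """Discriminator with an extra head that predicts the latent, following
    the abstract's suggestion to integrate the encoder with the discriminator."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(img_dim, 64), nn.ReLU())
        self.adv_head = nn.Linear(64, 1)           # real/fake score
        self.enc_head = nn.Linear(64, latent_dim)  # predicted (scaled) latent
    def forward(self, x):
        h = self.body(x)
        return self.adv_head(h), self.enc_head(h)

ED = EncoderDiscriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(ED.parameters(), lr=2e-4)

# Running element-wise variance of the predicted latent; its square root is
# used here as the dynamic per-element scale of z (an assumed update rule).
running_var = torch.ones(latent_dim)

def training_step(real_images):
    global running_var
    z = torch.randn(real_images.size(0), latent_dim)
    zs = z * running_var.sqrt()  # dynamically scaled latent
    fake = G(zs)

    # Discriminator/encoder update: hinge adversarial loss plus the
    # squared-error inversion loss on the scaled latent.
    d_real, _ = ED(real_images)
    d_fake, z_pred = ED(fake.detach())
    enc_loss = ((z_pred - zs) ** 2).mean()
    d_loss = (torch.relu(1 - d_real).mean()
              + torch.relu(1 + d_fake).mean() + enc_loss)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update: adversarial loss plus the encoder loss, pushing G
    # to keep the latent information recoverable from its output.
    d_fake_g, z_pred_g = ED(fake)
    g_loss = -d_fake_g.mean() + ((z_pred_g - zs) ** 2).mean()
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

    # Track the element-wise variance of the predicted latent across steps;
    # elements the generator discards collapse toward zero variance, so
    # their scale shrinks, lowering the effective entropy of the latent.
    with torch.no_grad():
        running_var.mul_(decay).add_((1 - decay) * z_pred.var(dim=0))

training_step(torch.randn(16, img_dim))  # toy usage with random "real" data
```

Folding the encoder into the discriminator, as sketched above, means a single extra head and one shared backward pass per update, which is consistent with the abstract's claim that adding the encoder loss to the generator loss makes training fast.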
Keywords
Generative Adversarial Network, Feature Learning, Representation Learning, GAN Inversion