Mode Regularized Generative Adversarial Networks.

International Conference on Learning Representations (2017)

Abstract
Although Generative Adversarial Networks achieve state-of-the-art results on a variety of generative tasks, they are regarded as highly unstable and prone to miss modes. We argue that these bad behaviors of GANs are due to the very particular functional shape of the trained discriminators in high dimensional spaces, which can easily make training stuck or push probability mass in the wrong direction, towards that of higher concentration than that of the data generating distribution. We introduce several ways of regularizing the objective, which can dramatically stabilize the training of GAN models. We also show that our regularizers can help the fair distribution of probability mass across the modes of the data generating distribution during the early phases of training, thus providing a unified solution to the missing modes problem.
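The regularized objective can be illustrated with a short sketch. Below is a minimal, hypothetical PyTorch rendering of a generator loss augmented with an encoder-based mode regularizer (a reconstruction term plus a discriminator term on reconstructions), in the spirit of the abstract; the encoder E, network sizes, and weights lam1/lam2 are illustrative assumptions, not the paper's exact configuration.

# Hypothetical sketch of a mode-regularized generator objective (not the authors' code).
# Assumed setup: an encoder E maps data back to latent space so the generator is also
# trained to reconstruct real samples, spreading probability mass across data modes.
import torch
import torch.nn as nn
import torch.nn.functional as F

latent_dim, data_dim = 16, 64
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
E = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, latent_dim))
D = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1), nn.Sigmoid())

lam1, lam2 = 0.2, 0.2            # illustrative regularization weights (assumed)
x = torch.randn(32, data_dim)    # stand-in for a batch of real data
z = torch.randn(32, latent_dim)  # latent noise

x_rec = G(E(x))                                       # reconstruct real samples through E then G
adv_term = -torch.log(D(G(z)) + 1e-8).mean()          # standard non-saturating generator loss
recon_term = F.mse_loss(x_rec, x)                     # geometric regularizer: keep G(E(x)) close to x
mode_term = -torch.log(D(x_rec) + 1e-8).mean()        # push reconstructions toward the data manifold

g_loss = adv_term + lam1 * recon_term + lam2 * mode_term
g_loss.backward()                # gradients for G and E; D is trained separately as usual

The reconstruction path gives the generator a gradient signal anchored to real data modes even when the adversarial gradient is uninformative, which is the stabilizing effect the abstract describes.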