Regularizing Label-Augmented Generative Adversarial Networks Under Limited Data

IEEE Access (2023)

Abstract
Training generative adversarial networks (GANs) with limited training data is challenging because the original discriminator is prone to overfitting. The recently proposed label augmentation technique complements categorical data augmentation approaches for the discriminator and improves the data efficiency of GAN training, but it lacks a theoretical basis. In this paper, we propose a novel regularization approach for the label-augmented discriminator that further improves the data efficiency of GAN training and is theoretically grounded. Specifically, the proposed regularization adaptively constrains the label-augmented discriminator's predictions on generated data to be close to the moving averages of its historical predictions on real data, and vice versa. We theoretically establish a connection between the objective function with the proposed regularization and an f-divergence that is more robust than the previously used reversed Kullback-Leibler divergence. Experimental results on various datasets and diverse architectures show that our method is significantly more data-efficient than state-of-the-art approaches for training GANs under limited-data regimes.
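The core mechanism described above can be sketched as follows: keep exponential moving averages (EMAs) of the discriminator's predictions on real and on generated data, then penalize the current predictions on one domain for drifting away from the historical average of the other. This is a minimal illustrative sketch, not the paper's exact formulation; the class name, the decay value, and the squared-error penalty are assumptions made for clarity.

```python
import numpy as np

class EMARegularizer:
    """Illustrative sketch of the cross-domain EMA regularization idea:
    generated-data predictions are pulled toward the moving average of
    historical real-data predictions, and vice versa. Hyperparameters and
    the distance function here are hypothetical choices."""

    def __init__(self, decay=0.99):
        self.decay = decay
        self.ema_real = None  # moving average of predictions on real data
        self.ema_fake = None  # moving average of predictions on generated data

    def _update(self, ema, preds):
        # Fold the current batch mean into the running average.
        batch_mean = preds.mean(axis=0)
        if ema is None:
            return batch_mean
        return self.decay * ema + (1.0 - self.decay) * batch_mean

    def __call__(self, preds_real, preds_fake):
        # Update the historical averages with the current batch.
        self.ema_real = self._update(self.ema_real, preds_real)
        self.ema_fake = self._update(self.ema_fake, preds_fake)
        # Constrain fake predictions toward the real-data EMA, and vice versa
        # (squared error used here as a stand-in for the paper's penalty).
        loss_fake = np.mean((preds_fake - self.ema_real) ** 2)
        loss_real = np.mean((preds_real - self.ema_fake) ** 2)
        return loss_fake + loss_real
```

In a training loop, this term would be added to the discriminator loss each step, so the constraint tightens adaptively as the running averages stabilize.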
Keywords
Generative adversarial networks, limited data, adaptive regularization, label augmentation, data augmentation, self-supervised learning