arXiv Analytics

arXiv:2104.03310 [cs.LG]

Regularizing Generative Adversarial Networks under Limited Data

Hung-Yu Tseng, Lu Jiang, Ce Liu, Ming-Hsuan Yang, Weilong Yang

Published 2021-04-07 (Version 1)

Recent years have witnessed rapid progress in generative adversarial networks (GANs). However, the success of GAN models hinges on a large amount of training data. This work proposes a regularization approach for training robust GAN models on limited data. We theoretically show a connection between the regularized loss and an f-divergence called the LeCam divergence, which we find is more robust under limited training data. Extensive experiments on several benchmark datasets demonstrate that the proposed regularization scheme (1) improves the generalization performance and stabilizes the learning dynamics of GAN models under limited training data, and (2) complements recent data augmentation methods. These properties enable GAN models to achieve state-of-the-art performance when only limited training data from the ImageNet benchmark are available.
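To make the abstract's idea of a regularized discriminator loss concrete, below is a minimal, framework-free sketch of a LeCam-style regularizer. All names (`LeCamRegularizer`, `update`, `penalty`, the decay and weight values) are illustrative assumptions, not taken from the paper's released code: the sketch anchors the discriminator's predictions on real and generated samples to exponential moving averages (EMAs) of the opposite branch, a stabilizing penalty consistent with the regularization described above.

```python
# Hypothetical sketch of a LeCam-style GAN regularizer (illustrative only;
# names and defaults are assumptions, not the authors' implementation).
class LeCamRegularizer:
    def __init__(self, decay=0.99, weight=0.01):
        self.decay = decay      # EMA decay rate
        self.weight = weight    # regularization strength (lambda)
        self.ema_real = 0.0     # EMA of D's mean output on real images
        self.ema_fake = 0.0     # EMA of D's mean output on generated images

    def update(self, d_real_mean, d_fake_mean):
        """Track moving averages of the discriminator's mean predictions."""
        self.ema_real = self.decay * self.ema_real + (1 - self.decay) * d_real_mean
        self.ema_fake = self.decay * self.ema_fake + (1 - self.decay) * d_fake_mean

    def penalty(self, d_real, d_fake):
        """Mean-squared distance of each prediction to the opposite EMA anchor.

        d_real / d_fake: lists of discriminator outputs for one batch.
        The penalty is added to the discriminator loss each training step.
        """
        reg_real = sum((x - self.ema_fake) ** 2 for x in d_real) / len(d_real)
        reg_fake = sum((x - self.ema_real) ** 2 for x in d_fake) / len(d_fake)
        return self.weight * (reg_real + reg_fake)
```

In a training loop, one would call `update` with the batch means of the discriminator outputs, then add `penalty(...)` to the discriminator loss before backpropagation. Keeping the predictions near the EMA anchors limits how far the discriminator can drift, which matches the abstract's claim of stabilized learning dynamics under limited data.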

Comments: CVPR 2021.
Categories: cs.LG, cs.CV
Related articles:
arXiv:1901.09113 [cs.LG] (Published 2019-01-25)
Generative Adversarial Networks for Black-Box API Attacks with Limited Training Data
arXiv:2102.04002 [cs.LG] (Published 2021-02-08)
Meta Discovery: Learning to Discover Novel Classes given Very Limited Data
arXiv:1906.02646 [cs.LG] (Published 2019-06-06)
Energy Predictive Models with Limited Data using Transfer Learning