{ "id": "1706.08500", "version": "v1", "published": "2017-06-26T17:45:23.000Z", "updated": "2017-06-26T17:45:23.000Z", "title": "GANs Trained by a Two Time-Scale Update Rule Converge to a Nash Equilibrium", "authors": [ "Martin Heusel", "Hubert Ramsauer", "Thomas Unterthiner", "Bernhard Nessler", "Günter Klambauer", "Sepp Hochreiter" ], "comment": "15 pages (+ 46 pages appendix)", "categories": [ "cs.LG" ], "abstract": "Generative Adversarial Networks (GANs) excel at creating realistic images with complex models for which maximum likelihood is infeasible. However, the convergence of GAN training has still not been proved. We propose a two time-scale update rule (TTUR) for training GANs with stochastic gradient descent that has an individual learning rate for both the discriminator and the generator. We prove that the TTUR converges under mild assumptions to a stationary Nash equilibrium. The convergence carries over to the popular Adam optimization, for which we prove that it follows the dynamics of a heavy ball with friction and thus prefers flat minima in the objective landscape. For the evaluation of the performance of GANs at image generation, we introduce the \"Fr\\'echet Inception Distance\" (FID) which captures the similarity of generated images to real ones better than the Inception Score. In experiments, TTUR improves learning for DCGANs, improved Wasserstein GANs, and BEGANs, outperforming conventional GAN training on CelebA, Billion Word Benchmark, and LSUN bedrooms.", "revisions": [ { "version": "v1", "updated": "2017-06-26T17:45:23.000Z" } ], "analyses": { "keywords": [ "time-scale update rule converge", "frechet inception distance", "stochastic gradient descent", "prefers flat minima", "popular adam optimization" ], "note": { "typesetting": "TeX", "pages": 15, "language": "en", "license": "arXiv", "status": "editable" } } }