arXiv:1708.02511 [cs.LG]

Adversarial Divergences are Good Task Losses for Generative Modeling

Gabriel Huang, Gauthier Gidel, Hugo Berard, Ahmed Touati, Simon Lacoste-Julien

Published 2017-08-08, Version 1

Generative modeling of high-dimensional data such as images is a notoriously difficult and ill-defined problem. In particular, it is unclear how to evaluate a learned generative model. In this paper, we argue that adversarial learning, pioneered with generative adversarial networks (GANs), provides an interesting framework for implicitly defining more meaningful task losses for unsupervised tasks, such as generating "visually realistic" images. By unifying GANs and structured prediction under the framework of statistical decision theory, we highlight links between recent advances in structured prediction theory and the choice of the divergence in GANs. We argue that insights about losses that are "hard" versus "easy" to learn extend analogously to adversarial divergences. We also discuss the attractive properties of adversarial divergences for generative modeling, and perform experiments to show the importance of choosing a divergence that reflects the final task.
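The adversarial divergences discussed above can be written in an integral-probability-metric form: the supremum, over a class of critic functions, of the expectation gap between the data distribution and the model distribution. The sketch below is a minimal illustration of that idea, not the paper's method; the tiny fixed critic family is a hypothetical stand-in for the discriminator class a GAN would optimize over.

```python
import math
import random
from statistics import mean

def adversarial_divergence(p_samples, q_samples, critics):
    # sup over the critic class of E_P[f] - E_Q[f]: the
    # integral-probability-metric form of an adversarial divergence.
    return max(mean(map(f, p_samples)) - mean(map(f, q_samples)) for f in critics)

# Hypothetical critic class: a few fixed bounded functions (|f| <= 1).
# A GAN would instead take a sup over a trained discriminator network.
critics = [math.tanh, math.sin, lambda x: -math.tanh(x)]

rng = random.Random(0)
p = [rng.gauss(0.0, 1.0) for _ in range(5000)]  # "data" distribution
q = [rng.gauss(1.0, 1.0) for _ in range(5000)]  # "model" distribution

print(adversarial_divergence(p, p, critics))      # identical samples: divergence is 0
print(adversarial_divergence(p, q, critics) > 0)  # shifted distribution: positive gap
```

A richer critic class makes the divergence more discriminative but harder to optimize, which is one way to read the paper's "hard" versus "easy" losses trade-off.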

Comments: 10 pages, workshop paper for PADL ICML 2017 workshop
Categories: cs.LG, stat.ML