arXiv:2003.11512 [cs.CV]

Improved Techniques for Training Single-Image GANs

Tobias Hinz, Matthew Fisher, Oliver Wang, Stefan Wermter

Published: 2020-03-25 (Version 1)

Recently there has been an interest in the potential of learning generative models from a single image, as opposed to from a large dataset. This task is of practical significance, as it means that generative models can be used in domains where collecting a large dataset is not feasible. However, training a model capable of generating realistic images from only a single sample is a difficult problem. In this work, we conduct a number of experiments to understand the challenges of training these methods and propose some best practices that we found allowed us to generate improved results over previous work in this space. One key piece is that unlike prior single image generation methods, we concurrently train several stages in a sequential multi-stage manner, allowing us to learn models with fewer stages of increasing image resolution. Compared to a recent state-of-the-art baseline, our model is up to six times faster to train, has fewer parameters, and can better capture the global structure of images.
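
The central idea, optimizing several consecutive stages at once rather than one stage at a time, can be sketched briefly. Below is a minimal, hypothetical PyTorch illustration, not taken from the released ConSinGAN code: only the last k stages of a coarse-to-fine generator are handed to the optimizer, with smaller learning rates for the earlier stages in that group. The module names, resolutions, learning-rate scaling, and placeholder loss are illustrative assumptions.

```python
# Minimal sketch, not the authors' released code: jointly optimizing the last
# k stages of a coarse-to-fine single-image generator. Module names, sizes,
# learning rates, and the loss are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Stage(nn.Module):
    """One refinement stage: a small conv block applied at its own resolution."""
    def __init__(self, channels=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(channels, channels, 3, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(channels, 3, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)  # residual refinement of the upsampled image


def generate(stages, noise, sizes):
    """Run all stages, upsampling the intermediate image between stages."""
    out = noise
    for stage, size in zip(stages, sizes):
        out = F.interpolate(out, size=size, mode="bilinear", align_corners=False)
        out = stage(out)
    return out


# Resolutions grow from coarse to fine; the last k stages are trained
# concurrently, with smaller learning rates for the earlier stages in the group.
sizes = [(25, 25), (38, 38), (57, 57), (86, 86), (128, 128)]
stages = nn.ModuleList(Stage() for _ in sizes)
k, lr, lr_scale = 3, 5e-4, 0.1  # illustrative values

param_groups = [
    {"params": stage.parameters(), "lr": lr * lr_scale ** (k - 1 - i)}
    for i, stage in enumerate(stages[-k:])
]
optimizer = torch.optim.Adam(param_groups, betas=(0.5, 0.999))

noise = torch.randn(1, 3, *sizes[0])
fake = generate(stages, noise, sizes)  # full-resolution sample
loss = fake.abs().mean()               # placeholder for GAN + reconstruction losses
optimizer.zero_grad()
loss.backward()
optimizer.step()                       # only the last k stages are updated
```

In a full training loop the placeholder loss would be replaced by the adversarial and reconstruction objectives, and a new stage would be appended (and the optimizer rebuilt) each time the output resolution grows.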

Comments: Code available at https://github.com/tohinz/ConSinGAN
Categories: cs.CV