arXiv Analytics

arXiv:2104.07661 [cs.CV]

A Simple Baseline for StyleGAN Inversion

Tianyi Wei, Dongdong Chen, Wenbo Zhou, Jing Liao, Weiming Zhang, Lu Yuan, Gang Hua, Nenghai Yu

Published 2021-04-15 (Version 1)

This paper studies the problem of StyleGAN inversion, which plays an essential role in enabling a pretrained StyleGAN to be used for real facial image editing tasks. The problem places high demands on both quality and efficiency. Existing optimization-based methods can produce high-quality results, but the optimization often takes a long time. In contrast, forward-based methods are usually faster, but the quality of their results is inferior. In this paper, we present a new feed-forward network for StyleGAN inversion with significant improvements in both efficiency and quality. In our inversion network, we introduce: 1) a shallower backbone with multiple efficient heads across scales; 2) multi-layer identity loss and multi-layer face parsing loss in the loss function; and 3) multi-stage refinement. Combining these designs yields a simple and efficient baseline that exploits the benefits of both optimization-based and forward-based methods. Quantitative and qualitative results show that our method outperforms existing forward-based methods and performs comparably to state-of-the-art optimization-based methods, while remaining as efficient as forward-based methods. Moreover, a number of real image editing applications demonstrate the efficacy of our method. Our project page is https://wty-ustc.github.io/inversion.
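The multi-layer identity loss mentioned above can be illustrated with a small sketch. The paper does not specify its exact form here, so the following is only an assumed formulation: a weighted sum of per-layer cosine distances between features that a face-identity network would extract from the real image and from its reconstruction (the function name, weights, and epsilon are hypothetical; plain NumPy arrays stand in for network feature maps).

```python
import numpy as np

def multi_layer_identity_loss(feats_real, feats_fake, weights=None, eps=1e-8):
    """Hypothetical multi-layer identity loss.

    feats_real, feats_fake: lists of per-layer feature arrays from an
    identity network applied to the real image and the inversion result.
    Returns the weighted sum of (1 - cosine similarity) over layers.
    """
    if weights is None:
        weights = [1.0] * len(feats_real)
    total = 0.0
    for w, fr, ff in zip(weights, feats_real, feats_fake):
        fr, ff = fr.ravel(), ff.ravel()
        cos = np.dot(fr, ff) / (np.linalg.norm(fr) * np.linalg.norm(ff) + eps)
        total += w * (1.0 - cos)  # identical features give (near) zero loss
    return total
```

For identical inputs the loss is (numerically) zero, and it grows as the reconstructed face's identity features drift from the original; the multi-layer face parsing loss would follow the same pattern with features from a parsing network.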

Related articles:
arXiv:1904.05876 [cs.CV] (Published 2019-04-11)
A Simple Baseline for Audio-Visual Scene-Aware Dialog
arXiv:2002.10964 [cs.CV] (Published 2020-02-25)
Freeze Discriminator: A Simple Baseline for Fine-tuning GANs
arXiv:2004.01888 [cs.CV] (Published 2020-04-04)
A Simple Baseline for Multi-Object Tracking