arXiv Analytics


arXiv:2106.07873 [cs.CV]

Reverse Engineering of Generative Models: Inferring Model Hyperparameters from Generated Images

Vishal Asnani, Xi Yin, Tal Hassner, Xiaoming Liu

Published 2021-06-15 (Version 1)

State-of-the-art (SOTA) Generative Models (GMs) can synthesize photo-realistic images that are hard for humans to distinguish from genuine photos. We propose to perform reverse engineering of GMs to infer the model hyperparameters from the images generated by these models. We define a novel problem, "model parsing", as estimating GM network architectures and training loss functions by examining their generated images -- a task seemingly impossible for human beings. To tackle this problem, we propose a framework with two components: a Fingerprint Estimation Network (FEN), which estimates a GM fingerprint from a generated image by training with four constraints to encourage the fingerprint to have desired properties, and a Parsing Network (PN), which predicts network architecture and loss functions from the estimated fingerprints. To evaluate our approach, we collect a fake image dataset with $100$K images generated by $100$ GMs. Extensive experiments show encouraging results in parsing the hyperparameters of the unseen models. Finally, our fingerprint estimation can be leveraged for deepfake detection and image attribution, as we show by reporting SOTA results on both the recent Celeb-DF and image attribution benchmarks.
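Below is a minimal PyTorch sketch of the two-stage pipeline the abstract describes: a Fingerprint Estimation Network (FEN) maps a generated image to a fingerprint, and a Parsing Network (PN) predicts architecture and loss-function hyperparameters from that fingerprint. All layer choices, tensor shapes, class names, and hyperparameter encodings here are illustrative assumptions, not the authors' actual design or the four fingerprint constraints from the paper.

```python
import torch
import torch.nn as nn


class FingerprintEstimationNetwork(nn.Module):
    """Estimates an image-sized fingerprint from a generated image (assumed design)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, kernel_size=3, padding=1),  # fingerprint, same spatial size as input
        )

    def forward(self, image):
        return self.net(image)


class ParsingNetwork(nn.Module):
    """Predicts hyperparameters from a fingerprint: a vector of continuous
    architecture parameters and multi-label loss-function logits (illustrative)."""
    def __init__(self, num_arch_params=15, num_loss_types=10):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.arch_head = nn.Linear(32, num_arch_params)  # regression over architecture parameters
        self.loss_head = nn.Linear(32, num_loss_types)   # which loss functions were used (multi-label)

    def forward(self, fingerprint):
        feat = self.encoder(fingerprint)
        return self.arch_head(feat), torch.sigmoid(self.loss_head(feat))


if __name__ == "__main__":
    fen, pn = FingerprintEstimationNetwork(), ParsingNetwork()
    image = torch.randn(1, 3, 128, 128)        # dummy generated image
    fingerprint = fen(image)                   # stage 1: estimate the GM fingerprint
    arch_pred, loss_pred = pn(fingerprint)     # stage 2: parse hyperparameters
    print(arch_pred.shape, loss_pred.shape)    # torch.Size([1, 15]) torch.Size([1, 10])
```

In this sketch the fingerprint is simply the FEN output fed forward to the PN; the paper additionally trains the FEN with four constraints so the fingerprint has desired properties, which are not reproduced here.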

Related articles:
arXiv:2102.10543 [cs.CV] (Published 2021-02-21)
Do Generative Models Know Disentanglement? Contrastive Learning is All You Need
arXiv:2003.01872 [cs.CV] (Published 2020-03-04)
Type I Attack for Generative Models
arXiv:1805.06605 [cs.CV] (Published 2018-05-17)
Defense-GAN: Protecting Classifiers Against Adversarial Attacks Using Generative Models