
arXiv:2302.01316 [cs.CV]

Are Diffusion Models Vulnerable to Membership Inference Attacks?

Jinhao Duan, Fei Kong, Shiqi Wang, Xiaoshuang Shi, Kaidi Xu

Published 2023-02-02 (Version 1)

Diffusion-based generative models have shown great potential for image synthesis, but there is a lack of research on the security and privacy risks they may pose. In this paper, we investigate the vulnerability of diffusion models to Membership Inference Attacks (MIAs), a common privacy concern. Our results indicate that existing MIAs designed for GANs or VAEs are largely ineffective on diffusion models, either due to inapplicable scenarios (e.g., requiring the discriminator of GANs) or inappropriate assumptions (e.g., closer distances between synthetic images and member images). To address this gap, we propose Step-wise Error Comparing Membership Inference (SecMI), a black-box MIA that infers memberships by assessing the matching of forward process posterior estimation at each timestep. SecMI follows the common overfitting assumption in MIA, where member samples normally have smaller estimation errors than hold-out samples. We consider both standard diffusion models, e.g., DDPM, and text-to-image diffusion models, e.g., Stable Diffusion. Experimental results demonstrate that our method precisely infers membership with high confidence in both scenarios across six different datasets.
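
As a rough illustration of the step-wise error idea, the sketch below scores each sample by the model's noise-prediction error at a fixed timestep and thresholds it under the overfitting assumption (members score lower). This is a simplification, not the paper's exact estimator, which compares deterministic DDIM-style reverse and forward steps; eps_model and alphas_cumprod are assumed to come from a trained DDPM.

import torch

@torch.no_grad()
def stepwise_error(eps_model, x0, t, alphas_cumprod, eps=None):
    # Per-sample posterior-estimation error at timestep t.
    # eps_model(x_t, t) is assumed to be a trained noise predictor;
    # alphas_cumprod holds the cumulative products \bar{alpha}_t.
    a_bar = alphas_cumprod[t]
    if eps is None:
        eps = torch.randn_like(x0)  # forward-process noise
    x_t = a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * eps
    t_batch = torch.full((x0.shape[0],), t, device=x0.device, dtype=torch.long)
    err = (eps_model(x_t, t_batch) - eps).pow(2)  # squared estimation error
    return err.flatten(1).mean(dim=1)             # one score per sample

def infer_membership(scores, threshold):
    # Overfitting assumption: member samples have smaller estimation errors
    # than hold-out samples, so low scores are flagged as members.
    return scores < threshold

In practice the threshold would be calibrated on a held-out set, and the paper aggregates evidence across timesteps rather than using a single t.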

Related articles:
arXiv:2006.07084 [cs.CV] (Published 2020-06-12)
A Face Preprocessing Approach for Improved DeepFake Detection
arXiv:1908.02671 [cs.CV] (Published 2019-08-07)
Dual-reference Age Synthesis
arXiv:2105.00490 [cs.CV] (Published 2021-05-02)
Residual Enhanced Multi-Hypergraph Neural Network