Academic Paper

Membership Inference Attacks against Diffusion Models
Document Type
Conference
Source
2023 IEEE Security and Privacy Workshops (SPW) SPW Security and Privacy Workshops (SPW), 2023 IEEE. :77-83 May, 2023
Subject
Computing and Processing
Resistance
Schedules
Privacy
Analytical models
Conferences
Closed box
Machine learning
diffusion model
membership inference attack
GAN
hyperparameter
privacy
Language
English
ISSN
2770-8411
Abstract
Diffusion models have attracted attention in recent years as innovative generative models. In this paper, we investigate whether a diffusion model is resistant to a membership inference attack, which evaluates the privacy leakage of a machine learning model. We primarily discuss the diffusion model from two standpoints: comparison with a generative adversarial network (GAN) as a conventional model, and hyperparameters unique to the diffusion model, i.e., timesteps, sampling steps, and sampling variances. We conduct extensive experiments with DDIM as a diffusion model and DCGAN as a GAN on the CelebA and CIFAR-10 datasets in both white-box and black-box settings, and show that the diffusion model is as resistant to a membership inference attack as the GAN. Next, we demonstrate that the impact of timesteps is significant and that intermediate steps in a noise schedule are the most vulnerable to the attack. We also find two key insights through further analysis. First, we identify that DDIM is vulnerable to the attack for small sample sizes instead of achieving a lower FID. Second, the number of sampling steps is important for resistance to the attack, whereas the impact of sampling variances is quite limited.
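To make the abstract's setting concrete, the following is a minimal, self-contained sketch of the generic loss-threshold style of membership inference attack that such evaluations build on: an attacker predicts that a sample was in the training set when the model's per-example loss on it is below a threshold, and attack success is summarized as the gap between true- and false-positive rates. This is an illustration only, not the paper's attack; the loss values below are synthetic stand-ins (members fit slightly better than non-members), and the threshold choice is a hypothetical placeholder.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for per-example losses (e.g., a diffusion model's
# denoising loss): trained models typically fit training members slightly
# better, so member losses are drawn with a lower mean.
member_losses = rng.normal(loc=0.8, scale=0.2, size=1000)
nonmember_losses = rng.normal(loc=1.0, scale=0.2, size=1000)

def threshold_attack(losses, tau):
    """Predict 'member' whenever the model's loss on a sample is below tau."""
    return losses < tau

tau = 0.9  # hypothetical threshold; in practice calibrated, e.g. via shadow models

tpr = threshold_attack(member_losses, tau).mean()     # true-positive rate
fpr = threshold_attack(nonmember_losses, tau).mean()  # false-positive rate

# Membership advantage: 0 means no leakage, 1 means perfect inference.
advantage = tpr - fpr
print(f"TPR={tpr:.2f}  FPR={fpr:.2f}  advantage={advantage:.2f}")
```

A model is considered more resistant to the attack when this advantage stays near zero; comparing such scores across architectures (DDIM vs. DCGAN) and across hyperparameter settings is the kind of evaluation the abstract describes.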