White-box Membership Inference Attacks against Diffusion Models

Authors: Yan Pang (University of Virginia), Tianhao Wang (University of Virginia), Xuhui Kang (University of Virginia), Mengdi Huai (Iowa State University), Yang Zhang (CISPA Helmholtz Center for Information Security)

Volume: 2025
Issue: 2
Pages: 398–415
DOI: https://doi.org/10.56553/popets-2025-0068


Abstract: Diffusion models have begun to overshadow GANs and other generative models in industrial applications due to their superior image generation performance. The complex architecture of these models furnishes an extensive array of attack features. In light of this, we aim to design membership inference attacks (MIAs) tailored to diffusion models. We first conduct an exhaustive analysis of existing MIAs on diffusion models, taking into account factors such as black-box/white-box access and the selection of attack features. We find that white-box attacks are highly applicable in real-world scenarios, and that the most effective attacks to date are white-box. Departing from earlier research, which employs model loss as the attack feature for white-box MIAs, we instead employ model gradients, leveraging the fact that gradients provide a more fine-grained view of how the model responds to different samples. We subject these models to rigorous testing across a range of parameters, including training steps, timestep sampling frequency, diffusion steps, and data variance. Across all experimental settings, our method consistently demonstrates near-flawless attack performance, with attack success rates approaching 100% and attack AUCROC near 1.0. We also evaluate our attack against common defense mechanisms and observe that it continues to exhibit strong performance.
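The gradient-based attack feature described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' released code: `TinyDenoiser`, the linear beta schedule, and the choice of per-timestep gradient l2 norms as the feature vector are all assumptions made for the sketch, standing in for a real diffusion model's noise predictor and the paper's actual feature construction.

```python
import torch
import torch.nn as nn

class TinyDenoiser(nn.Module):
    """Hypothetical stand-in for a diffusion model's noise-prediction network."""
    def __init__(self, dim=8):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim + 1, 32), nn.ReLU(), nn.Linear(32, dim))

    def forward(self, x_t, t):
        # Condition on the (normalized) timestep by concatenation.
        return self.net(torch.cat([x_t, t[:, None]], dim=1))

def gradient_attack_features(model, x0, timesteps, T=1000):
    """White-box MIA feature sketch: for each sampled timestep, diffuse x0
    forward, compute the DDPM noise-prediction loss, and record the l2 norm
    of the loss gradient over all model parameters."""
    betas = torch.linspace(1e-4, 0.02, T)            # assumed linear schedule
    alphas_bar = torch.cumprod(1.0 - betas, dim=0)
    feats = []
    for t in timesteps:
        model.zero_grad()
        noise = torch.randn_like(x0)
        a = alphas_bar[t]
        x_t = a.sqrt() * x0 + (1 - a).sqrt() * noise  # forward diffusion to step t
        t_batch = torch.full((x0.shape[0],), t / T)
        loss = ((model(x_t, t_batch) - noise) ** 2).mean()
        loss.backward()
        # Flatten every parameter gradient into one vector and take its norm.
        g = torch.cat([p.grad.flatten() for p in model.parameters()])
        feats.append(g.norm().item())
    return feats
```

In an attack pipeline, feature vectors like these (computed for candidate samples) would feed a binary membership classifier; the intuition from the abstract is that the model's gradient response separates training members from non-members more sharply than the scalar loss does.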

Keywords: Membership Inference Attack, Diffusion Model

Copyright in PoPETs articles is held by their authors. This article is published under a Creative Commons Attribution 4.0 license.