Disparate Vulnerability in Link Inference Attacks against Graph Neural Networks

Authors: Da Zhong (Stevens Institute of Technology), Ruotong Yu (University of Utah), Kun Wu (Stevens Institute of Technology), Xiuling Wang (Stevens Institute of Technology), Jun Xu (University of Utah), Wendy Hui Wang (Stevens Institute of Technology)

Volume: 2023
Issue: 4
Pages: 149–169
DOI: https://doi.org/10.56553/popets-2023-0103

Abstract: Graph Neural Networks (GNNs) have been widely used in various graph-based applications. Recent studies have shown that GNNs are vulnerable to link-level membership inference attacks (LMIA), which can infer whether a given link was included in the training graph of a GNN model. While most of these studies focus on the privacy vulnerability of the links in the entire graph, none have inspected the privacy risk of specific subgroups of links (e.g., links between LGBT users). In this paper, we present the first study of disparity in subgroup vulnerability (DSV) of GNNs against LMIA. First, through extensive empirical evaluation, we demonstrate the existence of non-negligible DSV under various settings of GNN models and input graphs. Second, through both statistical and causal analysis, we identify differences in three specific graph structural properties across subgroups as one of the underlying reasons for DSV. Among the three properties, the difference in subgroup density has the largest causal effect on DSV. Third, inspired by the causal analysis, we design a new defense mechanism named FairDefense to mitigate DSV while providing protection against LMIA. At a high level, at each iteration of target model training, FairDefense randomizes the membership of edges in the training graph with a given probability, aiming to reduce the gap between the densities of different subgroups and thereby mitigate DSV. Our empirical results demonstrate that FairDefense outperforms existing defense methods in the trade-off between defense effectiveness and target model accuracy. More importantly, it offers better DSV mitigation.
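The edge-membership randomization the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the flip rule (drop an existing edge or add a non-edge, each with probability p), and the per-subgroup choice of p are all assumptions made for exposition. Applying a larger p to a denser subgroup narrows the density gap between subgroups.

```python
import random


def randomize_membership(edges, non_edges, p, rng=None):
    """Hypothetical sketch of per-iteration edge-membership randomization:
    each existing edge is dropped with probability p, and each candidate
    non-edge is added with probability p."""
    rng = rng or random.Random()
    kept = {e for e in edges if rng.random() >= p}
    added = {e for e in non_edges if rng.random() < p}
    return kept | added


def density(edges, n_nodes):
    """Density of an undirected simple graph: |E| / C(n, 2)."""
    possible = n_nodes * (n_nodes - 1) // 2
    return len(edges) / possible if possible else 0.0
```

In this sketch, one would measure each subgroup's density, then assign a higher randomization probability to the denser subgroup so that, in expectation, the subgroup densities converge over training iterations.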

Keywords: Membership inference attacks, Graph Neural Networks, fair privacy

Copyright in PoPETs articles is held by their authors. This article is published under a Creative Commons Attribution 4.0 license.