Unintended Memorization and Timing Attacks in Named Entity Recognition Models

Authors: Rana Salal Ali (Macquarie University), Benjamin Zi Hao Zhao (Macquarie University), Hassan Jameel Asghar (Macquarie University), Tham Nguyen (Macquarie University), Ian David Wood (Macquarie University), Mohamed Ali Kaafar (Macquarie University)

Volume: 2023
Issue: 2
Pages: 329–346
DOI: https://doi.org/10.56553/popets-2023-0056


Abstract: Named entity recognition (NER) models are widely used for identifying named entities (e.g., individuals, locations, and other information) in text documents. Machine learning-based NER models are increasingly being applied in privacy-sensitive applications that need automatic and scalable identification of sensitive information to redact text for data sharing. In this paper, we study the setting in which NER models are available as a black-box service for identifying sensitive information in user documents, and show that these models are vulnerable to membership inference on their training datasets. With updated pre-trained NER models from spaCy, we demonstrate two distinct membership attacks on these models. Our first attack capitalizes on unintended memorization in the NER's underlying neural network, a phenomenon neural networks are known to be vulnerable to. Our second attack leverages a timing side-channel to target NER models that maintain vocabularies constructed from the training data. We show that words from the training dataset and previously unseen words follow different functional paths through the model, with measurable differences in execution time. Revealing the membership status of training samples has clear privacy implications. For example, in text redaction, sensitive words or phrases that are to be found and removed risk being revealed as present in the training dataset. Our experimental evaluation includes the redaction of both password and health data, presenting both security risks and privacy/regulatory issues. This is exacerbated by results that indicate memorization after only a single phrase. We achieved a 70% AUC in our first attack on a text redaction use-case. We also show overwhelming success in the second timing attack with a 99.23% AUC. Finally, we discuss potential mitigation approaches to enable the safe use of NER models in light of the presented privacy and security implications of membership inference attacks.
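The timing side-channel described above rests on a simple idea: a model whose tokenizer or pipeline keeps a vocabulary built from training data may take a faster code path for in-vocabulary words than for unseen ones. The following toy sketch (not the paper's code; `process`, the simulated fast/slow paths, and all timings are illustrative assumptions) shows how an attacker could detect such a gap by timing repeated queries.

```python
import time
import statistics

def process(word, vocab):
    """Toy stand-in for a pipeline step with a training-derived vocabulary:
    in-vocabulary words hit a fast dictionary lookup, while unseen words
    trigger extra work (simulated busy work standing in for on-the-fly
    feature computation). This asymmetry is the hypothetical side-channel."""
    if word in vocab:
        return vocab[word]
    # Slow path: simulate recomputing features for an out-of-vocabulary word.
    return sum(hash(word[i:]) % 97 for i in range(len(word)))

def median_time(word, vocab, trials=200):
    """Median of repeated timings, to suppress scheduling noise."""
    samples = []
    for _ in range(trials):
        t0 = time.perf_counter()
        process(word, vocab)
        samples.append(time.perf_counter() - t0)
    return statistics.median(samples)

# Hypothetical vocabulary built from (sensitive) training text.
vocab = {w: i for i, w in enumerate(["alice", "paris", "insulin"])}

t_member = median_time("alice", vocab)
t_nonmember = median_time("zzqxv-unseen", vocab)

# A measurable gap lets the attacker threshold on query latency to
# infer whether a word appeared in the training data.
print(t_nonmember > t_member)
```

In the real attack setting, the attacker would time black-box queries to the NER service rather than a local function, and would calibrate the threshold (e.g., via an AUC analysis as in the paper) against known member and non-member words.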

Keywords: Natural Language Processing, Named Entity Recognition, Membership Inference, Timing Attack, Text Redaction

Copyright in PoPETs articles is held by their authors. This article is published under a Creative Commons Attribution 4.0 license.