Face-Off: Adversarial Face Obfuscation

Authors: Varun Chandrasekaran (University of Wisconsin–Madison), Chuhan Gao (Microsoft, work done while at University of Wisconsin–Madison), Brian Tang (University of Wisconsin–Madison), Kassem Fawaz (University of Wisconsin–Madison), Somesh Jha (University of Wisconsin–Madison), Suman Banerjee (University of Wisconsin–Madison)

Volume: 2021
Issue: 2
Pages: 369–390
DOI: https://doi.org/10.2478/popets-2021-0032

Abstract: Advances in deep learning have made face recognition technologies pervasive. While useful to social media platforms and users, this technology carries significant privacy threats. Coupled with the abundant information they have about users, service providers can associate users with social interactions, visited places, activities, and preferences, some of which the user may not want to share. Additionally, facial recognition models used by various agencies are trained on data scraped from social media platforms. Existing approaches to mitigate the associated privacy risks result in an imbalanced trade-off between privacy and utility. In this paper, we address this trade-off by proposing Face-Off, a privacy-preserving framework that introduces strategic perturbations to images of the user’s face to prevent it from being correctly recognized. To realize Face-Off, we overcome a set of challenges related to the black-box nature of commercial face recognition services and the scarcity of literature on adversarial attacks against metric networks. We implement and evaluate Face-Off, finding that it deceives three commercial face recognition services from Microsoft, Amazon, and Face++. Our user study with 423 participants further shows that the perturbations come at an acceptable cost for the users.
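The attack class the abstract refers to targets a metric (embedding) network: instead of flipping a classifier's label, the perturbation pushes a face's embedding away from its clean identity under a small pixel budget, so that matching against enrolled photos fails. The sketch below illustrates this idea under stated assumptions: it is a white-box, PGD-style attack in PyTorch against a stand-in embedding model (a ResNet-18 with its classifier head removed), whereas the paper attacks black-box commercial APIs. The model, loss, and hyperparameters here are illustrative placeholders, not the authors' implementation.

```python
# Minimal sketch of an adversarial perturbation against a face-embedding
# ("metric") network, in the spirit of Face-Off. Assumptions: a white-box
# PyTorch model stands in for the black-box commercial services the paper
# targets; the network, loss, and hyperparameters are illustrative only.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

def perturb_face(model, image, epsilon=0.03, alpha=0.005, steps=40):
    """PGD-style attack: push the image's embedding away from the
    embedding of the clean face, under an L-infinity budget `epsilon`."""
    model.eval()
    with torch.no_grad():
        target = F.normalize(model(image), dim=1)  # clean identity embedding
    adv = image.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        emb = F.normalize(model(adv), dim=1)
        # Minimize cosine similarity to the clean embedding, i.e. maximize
        # the metric distance the recognizer uses for matching.
        loss = F.cosine_similarity(emb, target).mean()
        grad = torch.autograd.grad(loss, adv)[0]
        with torch.no_grad():
            adv = adv - alpha * grad.sign()                       # descend similarity
            adv = image + (adv - image).clamp(-epsilon, epsilon)  # project to budget
            adv = adv.clamp(0, 1)                                 # valid pixel range
    return adv.detach()

if __name__ == "__main__":
    # Stand-in embedding network; a real attack would use a face model
    # (e.g., FaceNet or VGG-Face) as the white-box surrogate.
    model = resnet18(weights=None)
    model.fc = torch.nn.Identity()       # expose the 512-d feature vector
    face = torch.rand(1, 3, 224, 224)    # placeholder image in [0, 1]
    adv_face = perturb_face(model, face)
    print("max perturbation:", (adv_face - face).abs().max().item())
```

In a black-box setting like the one the paper considers, such perturbations are typically crafted on local surrogate models and then transferred to the target service, since the commercial APIs expose no gradients.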

Keywords: face recognition, privacy

Copyright in PoPETs articles is held by their authors. This article is published under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 license.