Defending Against Microphone-Based Attacks with Personalized Noise

Authors: Yuchen Liu (Indiana University Bloomington), Ziyu Xiang (Stanford University; this work was conducted while at Indiana University Bloomington), Eun Ji Seong (Indiana University Bloomington), Apu Kapadia (Indiana University Bloomington), Donald S. Williamson (Indiana University Bloomington)

Volume: 2021
Issue: 2
Pages: 130–150
DOI: https://doi.org/10.2478/popets-2021-0021


Abstract: Voice-activated commands have become a key feature of popular devices such as smartphones, home assistants, and wearables. For convenience, many people configure their devices to be ‘always on,’ listening for voice commands from the user via a trigger phrase such as “Hey Siri,” “Okay Google,” or “Alexa.” However, false positives for these triggers often result in privacy violations, with conversations being inadvertently uploaded to the cloud. In addition, malware that can record one’s conversations remains a significant threat to privacy. Unlike with cameras, which people can physically obscure to be assured of their privacy, people have no way of knowing whether their microphone is indeed off and are left with no tangible defenses against voice-based attacks. We envision a general-purpose physical defense that uses a speaker to inject specialized obfuscating ‘babble noise’ into the microphones of devices to protect against automated and human-based attacks. We present a comprehensive study of how specially crafted, personalized ‘babble’ noise (‘MyBabble’) can be effective at moderate signal-to-noise ratios and can provide a viable defense against microphone-based eavesdropping attacks.
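The abstract's reference to "moderate signal-to-noise ratios" concerns how loudly the obfuscating babble is played relative to the speech being protected. As a rough illustration only (not the paper's actual MyBabble generation method), the sketch below shows the standard way a noise signal can be scaled and mixed into speech at a target SNR; the function name `mix_at_snr` and the use of NumPy arrays are assumptions for this example.

```python
import numpy as np

def mix_at_snr(speech: np.ndarray, babble: np.ndarray, snr_db: float) -> np.ndarray:
    """Mix babble noise into speech at a target SNR in dB.

    Since SNR_dB = 10 * log10(P_speech / P_noise), the babble is scaled by
    sqrt(P_speech / (P_babble * 10**(SNR_dB / 10))) before mixing.
    A lower snr_db means louder babble and stronger obfuscation.
    """
    babble = babble[: len(speech)]          # match the noise length to the speech
    p_speech = np.mean(speech ** 2)         # average power of the speech signal
    p_babble = np.mean(babble ** 2)         # average power of the babble noise
    scale = np.sqrt(p_speech / (p_babble * 10 ** (snr_db / 10)))
    return speech + scale * babble          # noisy mixture at the requested SNR
```

For example, calling `mix_at_snr(speech, babble, 0.0)` produces a mixture in which the babble carries the same average power as the speech, while negative SNR values make the babble dominate.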

Keywords: privacy; audio; microphones; obfuscation; noise

Copyright in PoPETs articles is held by their authors. This article is published under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 license.