Are Neuromorphic Architectures Inherently Privacy-preserving? An Exploratory Study

Authors: Ayana Moshruba (George Mason University), Ihsen Alouani (Centre for Secure Information Technologies (CSIT), Queen's University Belfast), Maryam Parsa (George Mason University)

Volume: 2025
Issue: 2
Pages: 243–257
DOI: https://doi.org/10.56553/popets-2025-0060


Abstract: As machine learning (ML) models become mainstream, including in critical application domains, concerns have been raised about the increasing risk of sensitive data leakage. Various privacy attacks, such as membership inference attacks (MIAs), have been developed to extract data from trained ML models, posing significant risks to data confidentiality. While the predominant work in the ML community considers traditional Artificial Neural Networks (ANNs) as the default neural model, neuromorphic architectures, such as Spiking Neural Networks (SNNs), have recently emerged as an attractive alternative, mainly due to their significantly lower power consumption. These architectures process information through discrete events, i.e., spikes, mimicking the functioning of biological neurons in the brain. Although privacy issues have been extensively investigated in the context of traditional ANNs, they remain largely unexplored in neuromorphic architectures, and little work has been dedicated to their privacy-preserving properties. In this paper, we ask whether SNNs have inherent privacy-preserving advantages. Specifically, we study SNNs' privacy properties through the lens of MIAs across diverse datasets, in comparison with ANNs. We explore the impact of different learning algorithms (surrogate gradient and evolutionary learning), programming frameworks (snnTorch, TENNLab, and LAVA), and various parameters on the resilience of SNNs against MIAs. Our experiments reveal that SNNs demonstrate consistently superior privacy preservation compared to ANNs, with evolutionary algorithms further enhancing their resilience. For example, on the CIFAR-10 dataset, SNNs achieve an AUC as low as 0.59, compared to 0.82 for ANNs, and on CIFAR-100, SNNs maintain a low AUC of 0.58, whereas ANNs reach 0.88. Furthermore, we investigate the privacy-utility trade-off through Differentially Private Stochastic Gradient Descent (DPSGD), observing that SNNs incur a notably lower accuracy drop than ANNs under equivalent privacy constraints.
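As context for the attack AUC figures quoted above, the sketch below shows a minimal loss-threshold membership inference evaluation in Python (PyTorch + scikit-learn). This is an illustration under assumptions, not the paper's exact attack pipeline: the `per_sample_losses` and `mia_auc` helpers are hypothetical names introduced here, and the model and data loaders are placeholders supplied by the caller.

```python
# Minimal sketch of a loss-threshold membership inference evaluation
# (assumed setup, not the paper's exact attack): score each example by
# its loss under the trained model and measure how well that score
# separates training members from non-members, reported as ROC AUC.
import torch
import torch.nn.functional as F
from sklearn.metrics import roc_auc_score


@torch.no_grad()
def per_sample_losses(model, loader, device="cpu"):
    """Cross-entropy loss for every example; members tend to have lower loss."""
    model.eval()
    losses = []
    for x, y in loader:
        logits = model(x.to(device))
        losses.append(F.cross_entropy(logits, y.to(device), reduction="none").cpu())
    return torch.cat(losses)


def mia_auc(model, member_loader, nonmember_loader, device="cpu"):
    """AUC of a loss-threshold attack: score = -loss, label = membership."""
    l_in = per_sample_losses(model, member_loader, device)
    l_out = per_sample_losses(model, nonmember_loader, device)
    scores = torch.cat([-l_in, -l_out]).numpy()   # lower loss => higher member score
    labels = [1] * len(l_in) + [0] * len(l_out)   # 1 = training member, 0 = held out
    return roc_auc_score(labels, scores)          # ~0.5 means little leakage
```

Under this reading, an AUC near 0.5 (as reported for the SNNs) means the attacker can barely distinguish members from non-members, while values approaching 0.9 (as reported for the ANNs) indicate substantial membership leakage.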

Keywords: Neuromorphic Architectures, Spiking Neural Networks, Privacy-preserving Machine Learning, Membership Inference Attacks, Differential Privacy

Copyright in PoPETs articles is held by their authors. This article is published under a Creative Commons Attribution 4.0 license.