PrivDNN: A Secure Multi-Party Computation Framework for Deep Learning using Partial DNN Encryption

Authors: Liangqin Ren (The University of Kansas), Zeyan Liu (The University of Kansas), Fengjun Li (The University of Kansas), Kaitai Liang (Delft University of Technology), Zhu Li (University of Missouri--Kansas City), Bo Luo (The University of Kansas)

Volume: 2024
Issue: 3
Pages: 477–494
DOI: https://doi.org/10.56553/popets-2024-0089

Artifact: Reproduced


Abstract: In the past decade, we have witnessed exponential growth in deep learning models, platforms, and applications. While existing DL applications and Machine Learning as a Service (MLaaS) frameworks assume fully trusted models, the need for privacy-preserving DNN evaluation arises. In a secure multi-party computation scenario, both the model and the data are considered proprietary: the model owner does not want to reveal the highly valuable DL model to the user, and the user does not wish to disclose their private data samples either. Conventional privacy-preserving deep learning solutions ask users to send encrypted samples to the model owner, who must handle the heavy lifting of ciphertext-domain computation with homomorphic encryption. In this paper, we present a novel solution, PrivDNN, which (1) offloads the computation to the user side by sharing an encrypted deep learning model with them, (2) significantly improves the efficiency of DNN evaluation using partial DNN encryption, and (3) ensures model accuracy and model privacy using a core neuron selection and encryption scheme. Experimental results show that PrivDNN reduces privacy-preserving DNN inference time and memory requirements by up to 97% while maintaining model performance and privacy. Code is available at https://github.com/LiangqinRen/PrivDNN
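To make the partial-encryption idea concrete, the following is a minimal, hypothetical sketch, not taken from the paper or its repository, of evaluating one fully connected layer in which only a selected subset of "core" neurons carries homomorphically encrypted weights. It assumes Python with the TenSEAL CKKS bindings; the neuron split, the indices in core_idx, and the toy weights and input are invented for illustration.

```python
# A minimal sketch (not the authors' implementation) of partial DNN
# encryption with CKKS homomorphic encryption via the TenSEAL library.
# The core/plaintext split and all names below (core_idx, weights, x)
# are illustrative assumptions.
import tenseal as ts

# CKKS context; in PrivDNN's setting the model owner holds the secret
# key and ships only the encrypted weights to the user.
ctx = ts.context(
    ts.SCHEME_TYPE.CKKS,
    poly_modulus_degree=8192,
    coeff_mod_bit_sizes=[60, 40, 40, 60],
)
ctx.global_scale = 2**40
ctx.generate_galois_keys()

# Toy fully connected layer: 4 neurons, 3 inputs each.
weights = [
    [0.5, -1.2, 0.3],   # neuron 0
    [1.1,  0.4, -0.7],  # neuron 1
    [-0.2, 0.9, 0.6],   # neuron 2
    [0.8, -0.5, 0.1],   # neuron 3
]
core_idx = {1, 3}  # hypothetical "core" neurons selected for encryption

# Model owner encrypts only the core neurons' weight vectors.
enc_weights = {i: ts.ckks_vector(ctx, w)
               for i, w in enumerate(weights) if i in core_idx}

# User-side inference on a plaintext input sample.
x = [0.2, -0.1, 0.7]
outputs = []
for i, w in enumerate(weights):
    if i in core_idx:
        # Ciphertext-domain dot product: encrypted weights, plain input.
        outputs.append(enc_weights[i].dot(x))
    else:
        # Non-core neurons run in plaintext at native speed.
        outputs.append(sum(wj * xj for wj, xj in zip(w, x)))

# Only the secret-key holder can decrypt the core neurons' outputs;
# we decrypt here solely to check the toy computation.
for i, out in enumerate(outputs):
    val = out.decrypt()[0] if i in core_idx else out
    print(f"neuron {i}: {val:.4f}")
```

Because only the core neurons incur ciphertext-domain cost, the bulk of the layer runs in plaintext on the user side, which is the source of the efficiency gains the abstract describes.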

Keywords: Privacy-preserving Deep Learning, Homomorphic Encryption

Copyright in PoPETs articles is held by their authors. This article is published under a Creative Commons Attribution 4.0 license.