Efficient Privacy-Preserving Machine Learning with Lightweight Trusted Hardware
Authors: Pengzhi Huang (Cornell University), Thang Hoang (Virginia Tech), Yueying Li (Cornell University), Elaine Shi (Carnegie Mellon University), G. Edward Suh (NVIDIA / Cornell University)
Volume: 2024
Issue: 4
Pages: 327–348
DOI: https://doi.org/10.56553/popets-2024-0119
Abstract: In this paper, we propose a new secure machine learning inference platform assisted by a small dedicated security processor, which is easier to protect and deploy than today's TEEs integrated into high-performance processors. Our platform provides three main advantages over the state-of-the-art: (i) We achieve significant performance improvements compared to state-of-the-art distributed Privacy-Preserving Machine Learning (PPML) protocols, using only a small security processor comparable to a discrete security chip such as the Trusted Platform Module (TPM) or to on-chip security subsystems in SoCs similar to the Apple enclave processor. In the semi-honest setting with WAN/GPU, our scheme is 4X-63X faster than Falcon (PoPETs'21) and AriaNN (PoPETs'22) and 3.8X-12X more communication efficient. We achieve even higher performance improvements in the malicious setting. (ii) Our platform guarantees security with abort against malicious adversaries under the honest majority assumption. (iii) Our technique is not limited by the size of secure memory in a TEE and can support high-capacity modern neural networks like ResNet18 and Transformer. While previous work investigated the use of high-performance TEEs in PPML, this work is the first to show that even tiny secure hardware with very limited performance can significantly speed up distributed PPML protocols if the protocol is carefully designed for lightweight trusted hardware.
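The distributed PPML protocols the abstract compares against (Falcon, AriaNN) are built on additive secret sharing over a ring, under which linear layers of a network can be evaluated locally on shares. As a rough illustration of that underlying primitive only (not of this paper's hardware-assisted protocol), the following Python sketch shares values among three parties in the illustrative ring Z_{2^32}; the function names and ring size are assumptions for the example:

```python
import random

MOD = 2**32  # illustrative ring Z_{2^32}; real protocols pick the ring to fit their arithmetic

def share(x, n=3):
    """Split x into n additive shares that sum to x mod 2^32.

    Any n-1 shares are uniformly random and reveal nothing about x.
    """
    shares = [random.randrange(MOD) for _ in range(n - 1)]
    shares.append((x - sum(shares)) % MOD)
    return shares

def reconstruct(shares):
    """Recover the secret by summing all shares mod 2^32."""
    return sum(shares) % MOD

# Linear operations (additions, scalings) are computed locally on shares,
# with no communication between parties:
a, b = 17, 25
sa, sb = share(a), share(b)
sum_shares = [(x + y) % MOD for x, y in zip(sa, sb)]
assert reconstruct(sum_shares) == (a + b) % MOD
```

Non-linear layers (e.g., ReLU comparisons) are where such protocols pay in rounds and bandwidth, which is the portion a small trusted processor can accelerate.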
Keywords: multi-party computation, secure hardware, machine learning
Copyright in PoPETs articles is held by their authors. This article is published under a Creative Commons Attribution 4.0 license.