Hawk: Accurate and Fast Privacy-Preserving Machine Learning Using Secure Lookup Table Computation
Authors: Hamza Saleem (University of Southern California), Amir Ziashahabi (University of Southern California), Muhammad Naveed (University of Southern California), Salman Avestimehr (University of Southern California)
Volume: 2024
Issue: 3
Pages: 42–58
DOI: https://doi.org/10.56553/popets-2024-0066
Abstract: Training machine learning models on data from multiple entities without direct data sharing can unlock applications otherwise hindered by business, legal, or ethical constraints. In this work, we design and implement new privacy-preserving machine learning protocols for logistic regression and neural network models. We adopt a two-server model where data owners secret-share their data between two servers that train and evaluate the model on the joint data. A significant source of inefficiency and inaccuracy in existing methods arises from using Yao’s garbled circuits to compute non-linear activation functions. We propose new methods for computing non-linear functions based on secret-shared lookup tables, offering both computational efficiency and improved accuracy. Beyond introducing leakage-free techniques, we initiate the exploration of relaxed security measures for privacy-preserving machine learning. Instead of claiming that the servers gain no knowledge during the computation, we contend that while some information is revealed about access patterns to lookup tables, it maintains epsilon-dX-privacy. Leveraging this relaxation significantly reduces the computational resources needed for training. We present new cryptographic protocols tailored to this relaxed security paradigm and define and analyze the leakage. Our evaluations show that our logistic regression protocol is up to 9x faster, and the neural network training is up to 688x faster than SecureML. Notably, our neural network achieves an accuracy of 96.6% on MNIST in 15 epochs, outperforming prior benchmarks that capped at 93.4% using the same architecture.
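To make the two-server setting described above concrete, here is a minimal, illustrative Python sketch of additive secret sharing over a ring, together with a cleartext lookup-table approximation of the sigmoid activation. This is not the Hawk protocol: the ring size, fixed-point scale, table granularity, and helper names are assumptions chosen for illustration, and the table lookup below runs in the clear rather than on secret shares.

# Illustrative sketch only (assumptions: 2^32 ring, 2^13 fixed-point scale, 1024-entry table).
# Not the Hawk protocol; here the table index is visible, whereas Hawk keeps the
# table secret-shared and bounds what access patterns reveal.
import secrets
import numpy as np

RING = 2**32          # ring size for additive shares (assumption)
SCALE = 2**13         # fixed-point scaling factor (assumption)

def to_fixed(x):
    """Encode a float as a fixed-point ring element."""
    return int(round(x * SCALE)) % RING

def from_fixed(v):
    """Decode a ring element back to a float (two's-complement style)."""
    if v >= RING // 2:
        v -= RING
    return v / SCALE

def share(value):
    """Split a ring element into two additive shares, one per server."""
    s0 = secrets.randbelow(RING)
    s1 = (value - s0) % RING
    return s0, s1

def reconstruct(s0, s1):
    """Recombine the two servers' shares."""
    return (s0 + s1) % RING

# Cleartext lookup-table approximation of sigmoid on a fixed grid.
xs = np.linspace(-8.0, 8.0, 1024)
sigmoid_table = 1.0 / (1.0 + np.exp(-xs))

def sigmoid_lut(x):
    """Approximate sigmoid(x) by nearest-entry table lookup."""
    idx = int(np.clip(round((x + 8.0) / 16.0 * 1023), 0, 1023))
    return sigmoid_table[idx]

# Example: a data owner shares an input; each server holds one share.
x = 0.75
s0, s1 = share(to_fixed(x))
assert abs(from_fixed(reconstruct(s0, s1)) - x) < 1e-3
print(sigmoid_lut(x))   # approx. 0.68

In the protocols the abstract describes, the lookup table itself is secret-shared and accessed so that either nothing about the queried index is revealed or the access pattern satisfies epsilon-dX-privacy; the cleartext lookup above only conveys the table-based evaluation idea.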
Keywords: secure multi-party computation, privacy-preserving ML
Copyright in PoPETs articles is held by their authors. This article is published under a Creative Commons Attribution 4.0 license.