Adam in Private: Secure and Fast Training of Deep Neural Networks with Adaptive Moment Estimation
Authors: Nuttapong Attrapadung (AIST), Koki Hamada (NTT), Dai Ikarashi (NTT), Ryo Kikuchi (NTT), Takahiro Matsuda (AIST), Ibuki Mishina (NTT), Hiraku Morita (University of St. Gallen), Jacob C. N. Schuldt (AIST)
Volume: 2022
Issue: 4
Pages: 746–767
DOI: https://doi.org/10.56553/popets-2022-0131
Abstract: Machine learning (ML) algorithms, especially deep neural networks (DNNs), have proven themselves to be extremely useful tools for data analysis, and are increasingly being deployed in systems operating on sensitive data, such as recommendation systems, banking fraud detection, and healthcare systems. This underscores the need for privacy-preserving ML (PPML) systems, and has inspired a line of research into how such systems can be constructed efficiently. However, most prior works on PPML achieve efficiency by requiring advanced ML algorithms to be simplified or substituted with approximated variants that are “MPC-friendly” before multi-party computation (MPC) techniques are applied to obtain a PPML system. A drawback of this approach is that it requires careful fine-tuning of the combined ML and MPC algorithms, and might lead to less efficient algorithms or inferior-quality ML (such as lower prediction accuracy). This is an issue for secure training of DNNs in particular, as this involves several arithmetic algorithms that are thought to be “MPC-unfriendly”, namely, integer division, exponentiation, inversion, and square root extraction. In this work, we take a structurally different approach and propose a framework that allows efficient and secure evaluation of full-fledged state-of-the-art ML algorithms via secure multi-party computation. Specifically, we propose secure and efficient protocols for the above seemingly MPC-unfriendly computations (but which are essential to DNN training). Our protocols are three-party protocols in the honest-majority setting, and we propose both passively secure and actively secure with abort variants. A notable feature of our protocols is that they simultaneously provide high accuracy and efficiency. This framework enables us to efficiently and securely compute modern ML algorithms such as Adam (Adaptive Moment Estimation) and the softmax function “as is”, without resorting to approximations.
As a result, we obtain secure DNN training that outperforms state-of-the-art three-party systems; our full training is up to 6.7 times faster than just the online phase of FALCON (Wagh et al. at PETS ’21) and up to 4.2 times faster than Dalskov et al. (USENIX ’21) on the standard benchmark network for secure training of DNNs. The potential advantage of our approach is even greater when considering more complex, realistic networks. To demonstrate this, we perform measurements on real-world DNNs, AlexNet and VGG16, which are large networks containing millions of parameters. Compared to FALCON, our framework is up to a factor of 26–33 faster for AlexNet and 48–51 faster for VGG16 when training to accuracies of 60% and 70%, respectively. Even compared to CRYPTGPU (Tan et al. at IEEE S&P ’21), which is optimized for and runs on powerful GPUs, our framework is a factor of 2.1 and 4.1 faster, respectively, on these networks.
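For reference, the plaintext Adam update that the abstract says is evaluated “as is” under MPC is sketched below. This is the standard formulation due to Kingma and Ba, not code from the paper; the hyperparameter names and defaults are the conventional ones, and the comments mark where the “MPC-unfriendly” operations listed in the abstract (exponentiation, division, square root, inversion) arise.

```python
import math

def adam_step(theta, grad, m, v, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    """One plaintext Adam update for a single scalar parameter.

    The exponentiations b1**t and b2**t, the divisions in the bias
    corrections, and the square root in the final step are the
    operations the paper's protocols compute securely without
    replacing them by MPC-friendly approximations.
    """
    m = b1 * m + (1 - b1) * grad         # first-moment (mean) estimate
    v = b2 * v + (1 - b2) * grad ** 2    # second-moment (variance) estimate
    m_hat = m / (1 - b1 ** t)            # bias correction: exponentiation + division
    v_hat = v / (1 - b2 ** t)
    # square root extraction and inversion in the parameter update
    theta = theta - lr * m_hat / (math.sqrt(v_hat) + eps)
    return theta, m, v
```

A secure version would run the same arithmetic over secret-shared fixed-point values; the point of the framework is that none of these steps need to be simplified first.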
Keywords: MPC, fixed-point arithmetic, deep learning
Copyright in PoPETs articles is held by their authors. This article is published under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 license.