SoK: Truncation Untangled: Scaling Fixed-Point Arithmetic for Privacy-Preserving Machine Learning to Large Models and Datasets
Authors: Christopher Harth-Kitzerow (Technical University of Munich, BMW Group), Ajith Suresh (Technology Innovation Institute, Abu Dhabi), Georg Carle (Technical University of Munich)
Volume: 2025
Issue: 4
Pages: 369–391
DOI: https://doi.org/10.56553/popets-2025-0135
Abstract: Fixed-Point Arithmetic (FPA) is widely used in Privacy-Preserving Machine Learning (PPML) to efficiently handle decimal values. However, repeated multiplications in FPA can lead to overflow, as the fractional part doubles in size with each multiplication. To address this, truncation is applied post-multiplication to maintain precision. Various truncation schemes based on Secure Multiparty Computation (MPC) exist, but the trade-offs between accuracy and efficiency across PPML models and datasets remain underexplored. In this work, we analyze and consolidate different truncation approaches from the MPC literature. We conduct the first large-scale systematic evaluation of PPML inference accuracy across truncation schemes, ring sizes, neural network architectures, and datasets. Our study provides clear guidelines for selecting the optimal truncation scheme and parameters for PPML inference. All evaluations are implemented in the open-source HPMPC MPC framework, facilitating future research and adoption. Beyond our large-scale evaluation, we also present improved constructions for each truncation scheme, achieving up to a fourfold reduction in communication and round complexity over existing schemes. Additionally, we introduce optimizations tailored for PPML, such as strategically fusing different neural network layers. This leads to a mixed-truncation scheme that balances truncation costs with accuracy, eliminating communication overhead in the online phase while matching the accuracy of plaintext floating-point PyTorch inference for VGG-16 on the ImageNet dataset.
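The overflow problem the abstract describes can be illustrated in plaintext. The following minimal sketch (not from the article, and deliberately ignoring the MPC setting) encodes decimals with F fractional bits and shows why each product must be truncated by F bits to keep the representation stable; the constant F and the helpers encode, decode, and fp_mul are illustrative names, not part of the HPMPC framework.

```cpp
#include <cstdint>
#include <iostream>

// A decimal x is encoded as x * 2^F in a 64-bit word (F fractional bits).
constexpr int F = 16; // illustrative choice of fractional bits

int64_t encode(double x)  { return static_cast<int64_t>(x * (1LL << F)); }
double  decode(int64_t v) { return static_cast<double>(v) / (1LL << F); }

// Multiplying two encoded values yields 2F fractional bits, so repeated
// products would double the fractional part each time and eventually
// overflow the word. Right-shifting (truncating) by F bits after each
// multiplication restores the F-bit fractional precision.
int64_t fp_mul(int64_t a, int64_t b) {
    // __int128 is a GCC/Clang extension used here to hold the full product.
    __int128 prod = static_cast<__int128>(a) * b; // 2F fractional bits
    return static_cast<int64_t>(prod >> F);       // truncate back to F bits
}

int main() {
    int64_t a = encode(3.25), b = encode(-1.5);
    std::cout << decode(fp_mul(a, b)) << '\n'; // prints -4.875
}
```

In the MPC setting this right shift cannot simply be applied locally to each party's share of the product, since additive shares wrap around the ring; this is why the dedicated truncation protocols surveyed in the paper exist and why they trade off accuracy (e.g., probabilistic rounding errors) against communication and round cost.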
Keywords: Fixed-point arithmetic, MPC, PPML, Truncation, Secure Inference
Copyright in PoPETs articles is held by their authors. This article is published under a Creative Commons Attribution 4.0 license.
