Privacy Preserving Feature Selection for Sparse Linear Regression

Authors: Adi Akavia (University of Haifa), Ben Galili (Technion), Hayim Shaul (IBM Research), Mor Weiss (Bar-Ilan University), Zohar Yakhini (Reichman University and Technion)

Volume: 2024
Issue: 1
Pages: 300–313
DOI: https://doi.org/10.56553/popets-2024-0017

Abstract: Privacy-Preserving Machine Learning (PPML) provides protocols for learning and statistical analysis of data that may be distributed amongst multiple data owners (e.g., hospitals that own proprietary healthcare data), while preserving data privacy. The PPML literature includes protocols for various learning methods, including ridge regression. Ridge regression controls the L2 norm of the model, but does not aim to strictly reduce the number of non-zero coefficients, namely the L0 norm of the model. Reducing the number of non-zero coefficients (a form of feature selection) is important for avoiding overfitting, and for reducing the cost of using learnt models in practice. In this work, we develop the first privacy-preserving protocol for sparse linear regression under L0 constraints. The protocol addresses data contributed by several data owners (e.g., hospitals). Our protocol outsources the bulk of the computation to two non-colluding servers, using homomorphic encryption as a central tool. We provide a rigorous security proof for our protocol, where security is against semi-honest adversaries controlling any number of data owners and at most one server. We implemented our protocol and evaluated its performance on datasets with nearly a million samples and up to 40 features.
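To illustrate the learning objective the abstract contrasts, the sketch below fits ridge regression (which penalizes the L2 norm and typically yields a dense model) against an L0-constrained regression solved by exhaustive best-subset search (at most k non-zero coefficients). This is a plaintext, non-private illustration only: it performs no homomorphic encryption and is not the paper's protocol; the names X, y, k, and lam are illustrative.

```python
# Plaintext (non-private) sketch: ridge (L2 penalty) vs. L0-constrained
# sparse regression via exhaustive best-subset search. Illustration only;
# this is NOT the paper's privacy-preserving protocol.

import itertools
import numpy as np


def ridge(X, y, lam):
    """Ridge regression: minimize ||Xw - y||^2 + lam * ||w||_2^2 (closed form)."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)


def best_subset(X, y, k):
    """L0-constrained regression: least squares over the best subset of <= k features."""
    n, d = X.shape
    best_err, best_w = np.inf, np.zeros(d)
    for subset in itertools.combinations(range(d), k):
        cols = list(subset)
        w_sub, *_ = np.linalg.lstsq(X[:, cols], y, rcond=None)
        err = np.sum((X[:, cols] @ w_sub - y) ** 2)
        if err < best_err:
            best_err = err
            best_w = np.zeros(d)
            best_w[cols] = w_sub
    return best_w


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, d, k = 200, 8, 2
    X = rng.standard_normal((n, d))
    true_w = np.zeros(d)
    true_w[[1, 5]] = [3.0, -2.0]          # only 2 of the 8 features matter
    y = X @ true_w + 0.1 * rng.standard_normal(n)

    w_ridge = ridge(X, y, lam=1.0)        # dense: essentially all coefficients non-zero
    w_l0 = best_subset(X, y, k)           # sparse: at most k non-zero coefficients
    print("ridge non-zeros:", np.count_nonzero(np.round(w_ridge, 3)))
    print("L0    non-zeros:", np.count_nonzero(w_l0))
```

The exhaustive search makes the L0 objective explicit but scales combinatorially in the number of features; the paper's contribution is carrying out such sparse regression securely over distributed, encrypted data rather than in the clear.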

Keywords: Privacy-preserving machine learning, sparse linear regression, feature selection, secure multiparty computation, homomorphic encryption

Copyright in PoPETs articles is held by their authors. This article is published under a Creative Commons Attribution 4.0 license.