Protecting Private Inputs: Bounded Distortion Guarantees With Randomised Approximations

Authors: Patrick Ah-Fat (Imperial College London), Michael Huth (Imperial College London)

Volume: 2020
Issue: 3
Pages: 284–303



Abstract: Computing a function of some private inputs while maintaining the confidentiality of those inputs is an important problem, to which Differential Privacy and Secure Multi-party Computation can offer solutions under specific assumptions. Research in randomised algorithms aims at improving the privacy of such inputs by randomising the output of a computation while ensuring that large distortions of outputs occur with low probability. But use cases such as e-voting or auctions will not tolerate large distortions at all. Thus, we develop a framework for randomising the output of a privacy-preserving computation, while guaranteeing that output distortions stay within a specified bound. We analyse the privacy gains of our approach and characterise them more precisely for our notion of sparse functions. We build randomisation algorithms, running in linearithmic time in the number of possible input values, for this class of functions, and we prove that the computed randomisations maximise the inputs’ privacy. Experimental work demonstrates significant privacy gains when compared with existing approaches that guarantee distortion bounds, including for non-sparse functions.
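As an illustrative sketch only (not the paper's actual algorithm), a bounded-distortion randomiser can report an output drawn from the candidate values lying within a fixed distance of the true output, so that the distortion bound holds with certainty. The function name, the uniform weighting over feasible outputs, and the numeric example below are assumptions for illustration:

```python
import random

def bounded_randomise(true_output, candidates, bound, rng=random):
    # Illustrative sketch: keep only candidate outputs whose distortion
    # from the true output stays within the specified bound.
    feasible = [y for y in candidates if abs(y - true_output) <= bound]
    # Report one feasible value; uniform weighting is a simplifying
    # assumption, whereas the paper computes privacy-maximising weights.
    return rng.choice(feasible)

# Hypothetical example: an auction outcome of 42, reported with
# distortion guaranteed to be at most 2.
reported = bounded_randomise(42, range(0, 101), 2)
assert abs(reported - 42) <= 2
```

Unlike mechanisms that merely make large distortions improbable, every output this sketch can return respects the bound by construction, which is the guarantee the abstract describes.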

Keywords: Quantitative information flow, utility metric

Copyright in PoPETs articles is held by their authors. This article is published under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 license.