Private Sampling with Identifiable Cheaters

In this paper we study verifiable sampling from probability distributions in the context of multi-party computation. This has various applications in randomized algorithms performed collaboratively by parties not trusting each other. One example is differentially private machine learning where noise should be drawn, typically from a Laplace or Gaussian distribution, and it is desirable that no party can bias this process. In particular, we propose algorithms to draw random numbers from uniform, Laplace, Gaussian and arbitrary probability distributions, and to verify honest execution of the protocols through zero-knowledge proofs. We propose protocols that result in one party knowing the drawn number and protocols that deliver the drawn random number as a shared secret.


INTRODUCTION
Nowadays, randomization is an important algorithmic technique. Its numerous applications include randomized algorithms, e.g., for many problems the simplest or most efficient known solution strategy is a randomized algorithm, and hiding information, e.g., in cryptography or in differential privacy. While true randomness is hard to achieve, in most cases it suffices to generate pseudo-random numbers, and a wide range of approaches exist to generate pseudo-random numbers of good quality.
The situation becomes more complicated when we consider generating random numbers in the context of multi-party computation between parties which do not trust each other. We are particularly interested in algorithms which allow multiple parties to draw a random number from a specified probability distribution in such a way that all parties can be convinced that the number drawn is truly random and that either all parties, only one party, or none of the parties learn the drawn random number. This implies that no party should be able to influence the probability distribution or be able to predict or guess the random number. Such algorithms are particularly useful for differentially private federated machine learning using sensitive data from multiple data owners. In this setting, one would like to learn a statistical model M with parameters θ on the sensitive data of multiple data owners. Such a model could reveal sensitive information, and therefore one possible technique is to perturb the model sufficiently before publication such that it becomes differentially private [27], i.e., such that from the perturbed model M̃ with parameters θ̃ one cannot distinguish a change in a single individual. This can be achieved by drawing some noise η from an appropriate probability distribution, e.g., η often is a vector of Laplace or Gaussian random variables, and setting θ̃ = θ + η. In such a scenario it is important that nobody knows η, as otherwise that party could subtract η from the published θ̃ to obtain the sensitive model parameters θ. At the same time, all data owners want to be sure that η is drawn correctly: if anyone can bias the distribution of this noise, privacy may not be guaranteed anymore, or the model parameters may be biased in a way similar to what one can achieve with data poisoning [49,53].
In this paper we develop algorithms to verifiably draw random numbers. We consider uniform distributions, Laplace distributions, Gaussian distributions and arbitrary distributions. We develop strategies with three different privacy levels for the random number: strategies which verifiably draw a publicly known random number, strategies which verifiably draw a random number which is revealed to only one party and strategies which verifiably draw a random number and output it as a shared secret so that none of the parties knows the random number.
An important tool to prove correct behavior can be found in zero knowledge proofs (ZKP). These are cryptographic techniques that allow a party to prove statements without revealing anything else. Typically, one considers statements involving logical and arithmetic relations over private values which can be expressed using additions, multiplications and other elementary operations such as comparisons. For drawing from Laplace or Gaussian distributions, transcendental functions are needed. We work towards bridging this gap based on Cordic [52], a classic technique for computing such functions.
The main contributions of this paper can be summarized as follows: (1) we propose strategies to prove relationships involving logarithms or trigonometric functions in zero knowledge, (2) we propose and compare several strategies to let a party verifiably draw Gaussian random numbers, (3) we propose algorithms to let one party verifiably sample from the Laplace distribution and from an arbitrary distribution, (4) we propose algorithms to draw from the Gaussian or Laplace distribution a random number represented as a shared secret.
The remainder of this paper is structured as follows: After reviewing some common notations and concepts in Section 2, we formalize our problem statement in Section 3. Next, in Section 4 we discuss related work and in Section 5 we provide a high-level overview of our method. After that, in Section 6 we review the Cordic algorithm and adapt it for zero-knowledge proofs. In Sections 7 and 8 we apply these techniques for sampling from the Laplace and Gaussian distributions. To show how our methods work in practice, in Section 9 we provide an experimental comparison of the several possible strategies to sample from the Gaussian distribution. In Section 10 we discuss the application of our techniques to the problem of differentially private machine learning. Finally, in Section 11 we conclude and outline directions of future work.

PRELIMINARIES
We will denote the set of the first k positive integers by [k] = {i ∈ N | 1 ≤ i ≤ k}. We denote the security parameter by λ. We say that a function is negligible in λ if, for each positive polynomial f, it is smaller than 1/f(λ) for sufficiently big λ. We sometimes omit λ in negligible functions when it is clear it refers to λ. a ←_R S means that a is sampled uniformly at random from the elements of S. For vectors a = (a_1, . . . , a_k) and b = (b_1, . . . , b_k), a + b and a ∗ b are the element-wise addition and product, and a^b is the multi-exponentiation ∏_{i=1}^{k} a_i^{b_i}. For a scalar s, s + a = (s, . . . , s) + a, s ∗ a = (s, . . . , s) ∗ a and a^s = a^{(s,...,s)}. The function sign(x) is equal to 1 if x ≥ 0 and to −1 otherwise.

Setting and Threat Model
We consider a set of n parties P = {P_1, . . . , P_n}. We assume parties have access to a public-key infrastructure which they can use to prove their identity when sending messages. Parties communicate through secure channels and have access to a public bulletin board that they can use to post messages. When a party sends a message to the bulletin board, it is forwarded to all other parties, as with a broadcast channel. In addition, all broadcast messages remain publicly visible on the bulletin board, which enables publicly verifiable protocols.
We assume that a subset of parties P_cor ⊂ P is corrupted and controlled by an adversary A. A can make corrupted parties deviate arbitrarily from the protocol and perform coordinated attacks. Our protocols are secure if at least one party is honest. The set P_cor of corrupted parties is assumed to be static, meaning that it does not change after the beginning of the execution.
We prove security in the simulation paradigm, using the model of security with identifiable abort [33] for the stand-alone setting [29]. This setting allows us to obtain sequentially composable protocols in which parties are able to detect malicious actions and can in such cases abort the protocol. Deterrence measures may be in place to discourage parties from cheating. In fact, unless parties stop participating, our protocols either complete successfully or abort with a proof that a specific party is a cheater, i.e., in case our protocols abort, the message trace (which is kept on the bulletin board) allows for proving that a specific party did not follow the protocol. Assuming that adversaries will be deterred if they risk getting caught is a standard assumption that applies in many scenarios [3].
In parts of our protocols, we make use of specific Zero Knowledge Proofs that are non-interactive versions of compressed Σ-protocols whose security relies on the Random Oracle Model [6]. We provide a detailed description of our security framework and prove the security of our protocols in Appendix B.

Commitment Schemes
A commitment scheme allows for committing to values while keeping them hidden. We use the vector variant of the Pedersen commitment scheme [47]. Let G be a cyclic multiplicative group of prime order p exponential in λ in which the Discrete Logarithm Assumption (DLA) holds. The setup of the commitment scheme takes as input the string 1^λ and outputs a vector g = (g_1, . . . , g_k) of elements sampled at random from G \ {1}. It is required that no pairwise discrete logarithm between the elements g_1, . . . , g_k is known, which can be guaranteed without a trusted party as the setup only requires public randomness. A commitment P ∈ G of a vector x = (x_1, . . . , x_k) ∈ Z_p^k satisfies P = g^x. We say that x is an opening of P. The scheme is binding as no computationally bounded adversary can find two openings x and x′ of P such that x ≠ x′, except with probability negligible in λ. If x = (x̂, r), where x̂ is the data and r is sampled uniformly at random from Z_p, P is uniformly distributed in G and therefore does not reveal any information about x̂. This is known as the hiding property. In our protocols, one coordinate of g is always reserved for randomness. The scheme is also homomorphic as, given commitments P and Q of x and y respectively, PQ is a commitment of x + y.
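For illustration, a toy version of a Pedersen vector commitment and its homomorphic property can be sketched as follows. The tiny group, the generator choice and the parameter sizes are assumptions for demonstration only and offer no real security:

```python
# Toy Pedersen vector commitment in the order-q subgroup of Z_p^*.
# Real deployments use groups of ~256-bit order; these parameters are tiny.
import secrets

p = 2039  # a safe prime: p = 2q + 1
q = 1019  # prime order of the subgroup of quadratic residues mod p

def subgroup_generator(seed: int) -> int:
    # Squaring maps any element into the order-q subgroup.
    g = pow(seed, 2, p)
    assert g != 1
    return g

# Generators g1, g2 for data and h reserved for randomness (in a real
# setup these must come from public randomness with unknown mutual logs).
g1, g2, h = subgroup_generator(3), subgroup_generator(5), subgroup_generator(7)

def commit(x1: int, x2: int, r: int) -> int:
    # C = g1^x1 * g2^x2 * h^r, a commitment to the vector (x1, x2)
    return (pow(g1, x1, p) * pow(g2, x2, p) * pow(h, r, p)) % p

# Homomorphic property: the product of two commitments commits to the sum.
r1, r2 = secrets.randbelow(q), secrets.randbelow(q)
C1 = commit(2, 3, r1)
C2 = commit(10, 20, r2)
C_sum = (C1 * C2) % p  # commits to (12, 23) with randomness r1 + r2
```

Multiplying C1 and C2 gives a commitment to the component-wise sum, which is exactly the homomorphic property used by the protocols in this paper.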

Arithmetic Circuits
An arithmetic circuit (or just circuit) C : Z_p^k → Z_p^s is a function that only contains additions and multiplications modulo p. In the following sections we will define circuits using the notation C(a; i_1, . . . , i_k), where i_1, . . . , i_k are the (private) inputs and a are constants that may change the circuit structure, for example in the case where we are defining a family of similar circuits.

Compressed Σ-Protocols
We will prove statements about private committed values using Zero Knowledge Proofs [31]. In such proofs, for an NP relation R, a prover P interacts with a verifier V to prove, for a public statement a, knowledge of a private witness w such that (a; w) ∈ R. At the end of the interaction, V either accepts or rejects the proof. ZKPs are (1) complete, as V always accepts a proof of an honest P, (2) sound, as a proof of a dishonest P is rejected except with negligible probability, and (3) zero knowledge, as no information other than (a; w) ∈ R is revealed by the protocol. The ZKPs that we use are also called zero knowledge arguments, as they are only sound if P is computationally bounded. Additionally, they rely on the DLA.
The ZKPs we use are called compressed Σ-protocols [2]. In particular, we use Protocol Π_cs of [2], which proves the nullity of the output of arithmetic circuits over Z_p applied to private inputs. Let G, Z_p, and g be as defined for commitments; then for any circuit C : Z_p^k → Z_p^s, by applying Π_cs to C we obtain a complete, sound and zero-knowledge proof for the relation {(P; x) : P = g^x ∧ C(x) = 0}. While Π_cs is an interactive protocol between P and V, it can be turned into a non-interactive proof using the strong Fiat-Shamir heuristic [8]. By this transformation, ZKPs can be generated offline by P and later be verified by any party. Let m be the number of multiplication gates of C; then the proof generated by the execution of Π_cs has a size of 2⌈log_2(k + 2m + 4)⌉ − 1 elements of G and 6 elements of Z_p. To generate such a proof, the dominant computations are modular exponentiations in G (GEX). P performs 5k + 8m + 2⌈log_2(k + 2m + 4)⌉ + 6 GEX, and the verification cost is k + 2m + 2⌈log_2(k + 2m + 4)⌉ − 1 GEX.
We provide a detailed explanation of compressed Σ-protocols, their cost and some optimizations in Appendix A.

Secret Sharing
Consider again n parties {P_i}_{i=1}^n. For a positive prime p, group Z_p, and a number a ∈ Z_p, one can generate an additive secret sharing of a by drawing a random vector (a_1, . . . , a_n) ∈ Z_p^n subject to the constraint that ∑_{i=1}^n a_i = a mod p. We then denote this sharing of a as ⟦a⟧ = (a_1, . . . , a_n). The process of computing and revealing a from the sharing ⟦a⟧ is called opening the sharing ⟦a⟧. If every party P_i only receives a_i (for i ∈ [n]), then as long as not all parties collude, each party sees at most n − 1 uniformly distributed numbers and hence has no information about the value of a.
If a sharing is available for one or more values, it is possible to perform various operations on them without revealing any new information; see [24] for an overview. If ⟦a⟧ = (a_1, . . . , a_n) is a sharing of a and ⟦b⟧ is a sharing of b, then ⟦a + b⟧ = (a_1 + b_1, . . . , a_n + b_n) is a sharing of a + b. Given a sharing ⟦a⟧ of a and a public constant c, ⟦ca⟧ = (ca_1, . . . , ca_n) is a sharing of ca. For multiplying two sharings, one can use pre-computed triples of sharings (⟦x⟧, ⟦y⟧, ⟦z⟧) with x and y random and xy = z. Given such a triple, and two sharings ⟦a⟧ and ⟦b⟧ which one wants to multiply, one can compute ⟦d⟧ = ⟦a⟧ − ⟦x⟧ and ⟦e⟧ = ⟦b⟧ − ⟦y⟧ and open both ⟦d⟧ and ⟦e⟧. Then, a sharing of c = ab is obtained as ⟦c⟧ = ⟦z⟧ + d⟦y⟧ + e⟦x⟧ + de. Several approaches have been proposed to generate such triples efficiently, typically involving a somewhat homomorphic encryption (SHE) scheme with distributed decryption, where the parties generate random sharings ⟦x⟧ and ⟦y⟧ uniformly at random, encrypt them, multiply them and decrypt the product in a distributed way to obtain ⟦z⟧ [24].
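These sharing operations, including multiplication with a pre-computed triple, can be sketched as follows. Triple generation is done in the clear here purely for illustration; in an actual protocol it would come from SHE-based preprocessing, and each party would hold only its own share:

```python
# Additive secret sharing over Z_p with Beaver-triple multiplication.
import secrets

p = 2**61 - 1  # a prime modulus (Mersenne prime, illustrative choice)
N = 3          # number of parties

def share(a: int) -> list[int]:
    # Random shares summing to a modulo p.
    shares = [secrets.randbelow(p) for _ in range(N - 1)]
    shares.append((a - sum(shares)) % p)
    return shares

def open_sharing(shares: list[int]) -> int:
    return sum(shares) % p

def add(xs: list[int], ys: list[int]) -> list[int]:
    # Local addition of shares gives a sharing of the sum.
    return [(x + y) % p for x, y in zip(xs, ys)]

def beaver_mul(a_sh, b_sh, x_sh, y_sh, z_sh):
    # Multiply sharings of a and b using a triple (x, y, z) with xy = z.
    d = open_sharing([(ai - xi) % p for ai, xi in zip(a_sh, x_sh)])  # d = a - x
    e = open_sharing([(bi - yi) % p for bi, yi in zip(b_sh, y_sh)])  # e = b - y
    # [c] = [z] + d[y] + e[x] + de  (the public de is added by one party)
    c_sh = [(zi + d * yi + e * xi) % p for zi, xi, yi in zip(z_sh, x_sh, y_sh)]
    c_sh[0] = (c_sh[0] + d * e) % p
    return c_sh

# Insecure triple generation, for illustration only.
x, y = secrets.randbelow(p), secrets.randbelow(p)
triple = (share(x), share(y), share(x * y % p))
c = open_sharing(beaver_mul(share(6), share(7), *triple))  # c = 42
```

Expanding ⟦z⟧ + d⟦y⟧ + e⟦x⟧ + de with d = a − x and e = b − y gives exactly ab, which is why the opened values d and e reveal nothing about a and b when x and y are uniform.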
We'll adopt a number of ideas from [23]. In particular, we will represent sharings in binary form, denoting by BITS(x, l) the vector of the l least significant bits of x. Adding two values represented in this form requires a carry chain: let c^(−1) = 0 and, for i = 0 . . . l − 1, compute the i-th sum bit and the carry c^(i) from the i-th input bits and c^(i−1).

Random Numbers
A (secure) pseudo-random number generator (PRG) is a function G : {0, 1}^k → {0, 1}^{p(k)}, with p a polynomial satisfying p(k) > k, such that for every probabilistic polynomial time algorithm A, |Pr[A(G(x)) = 1] − Pr[A(u) = 1]| ≤ µ(k), where x and u are drawn uniformly from {0, 1}^k and {0, 1}^{p(k)} respectively, for some function µ negligible in k. In other words, a PRG is a function which takes a string x as input and outputs a longer string G(x) which cannot be distinguished from a random sequence by a polynomial time algorithm.

PROBLEM STATEMENT
We call π a sampling protocol over a domain X if π is a randomized multi-party protocol which outputs sequences of elements of X. We consider sampling protocols which take only one input per party at the beginning of the protocol. In particular, let P = {P_i}_{i=1}^n be the set of n parties which participate in a sampling protocol π, and let s_i be the input (also called seed) of party P_i (for i ∈ [n]). We denote the output of π by π(s), where s = (s_i)_{i=1}^n is the vector of seeds. We assume that there is some increasing polynomial p such that π outputs p(k) elements of X when the seeds have length k. We let s_{−i} = (s_1, . . . , s_{i−1}, s_{i+1}, . . . , s_{|s|}) denote the vector s without the i-th component.
Definition 1 (Correct Sampling). For a multi-party protocol π, we say a party is honest if it follows the steps of protocol π correctly and does not collude with other parties. We say that a sampling protocol π correctly samples from a probability distribution D if there is a function µ with µ(k) negligible in k such that for every run of π by parties P = {P_i}_{i=1}^n among which there is at least one i ∈ [n] such that party P_i is honest, for every s_{−i} ∈ ({0, 1}^k)^{n−1}, and for any probabilistic polynomial time algorithm A : {0, 1}^{k(n−1)} × X^{p(k)} → {0, 1}, either π finishes correctly and |Pr[A(s_{−i}, π(s)) = 1] − Pr_{x ← D^{p(k)}}[A(s_{−i}, x) = 1]| ≤ µ(k), or π aborts and detects a party that attempted to cheat, where D^{p(k)} draws vectors from X^{p(k)} whose components are independently distributed according to D.
In other words, if there is at least one honest party, then π acts as a (generalized) PRG even if all parties except that honest party disclosed their seeds. As a result, as soon as a single party is honest, it can trust that any output of π used by any party is pseudorandom and that no party could predict it in advance. We denote the fact that x is correctly drawn from D by x ←*_R D. We say a protocol π verifiably samples from D if π correctly samples from D and, after every execution of π, the value of x is uniquely defined given the union of the information obtained by all parties, and the information published by π is sufficient to convince any party that x has been correctly drawn. We denote the fact that x is verifiably drawn from D by x ←V_R D.
In this paper, we will often informally consider both discrete and continuous probability distributions, and P_D then represents either a probability mass or a probability density according to the context. As computers work with finite precision, we will eventually discretize up to some parameter-defined precision. While in the end all distributions will be discrete, we will use the continuous representations whenever this simplifies the explanation.
In the sequel, unless made explicit otherwise, we'll assume there are n parties among whom at least one is honest, and that D is a publicly agreed probability distribution. Also, to simplify the explanation we will often describe protocols generating just one random number, the extension to streams of random numbers is then straightforward.
We can distinguish several types of verifiable sampling protocols, depending on how they output the sampled number x. For a verifiable sampling protocol π, we say it is a
• public draw if after running π the value of x is published,
• private draw if after running π exactly one party knows x, but the other parties have no information on x beyond the prior distribution D,
• hidden draw if after running π the parties have received an additive secret share (x_1, . . . , x_n) of x, but still no party has any information about x beyond the prior distribution D.
In this paper, we study the problem of finding efficient verifiable sampling protocols of each of the three above types given the probability distribution D.
This problem is reasonably straightforward if D is the uniform distribution over the integers in the interval [0, L) for some L > 0:
Protocol 1 (Public uniform sampling). For each i ∈ [n], let party P_i generate its own random number r_i uniformly distributed over [0, L) from its own secret seed s_i and publish a commitment C_i to it. Then, all parties open their commitments, i.e., they publish r_i and the randomness associated to the commitment to prove that C_i was a commitment to r_i. Finally, all parties publicly compute r = ∑_{i=1}^n r_i mod L.
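A single-process sketch of Protocol 1's commit-then-open pattern, using hash commitments as a simple stand-in for the Pedersen commitments used in the paper:

```python
# Commit-then-open joint sampling of a public uniform number in [0, L).
import hashlib
import secrets

L = 1000  # target range

def commit(r: int) -> tuple[str, bytes]:
    # Hash commitment (stand-in for Pedersen): digest binds r, nonce hides it.
    nonce = secrets.token_bytes(16)
    digest = hashlib.sha256(nonce + r.to_bytes(8, "big")).hexdigest()
    return digest, nonce

def verify(digest: str, r: int, nonce: bytes) -> bool:
    return hashlib.sha256(nonce + r.to_bytes(8, "big")).hexdigest() == digest

# Commit phase: each party P_i draws its own uniform r_i and publishes C_i.
draws = [secrets.randbelow(L) for _ in range(3)]
commitments = [commit(r) for r in draws]

# Open phase: every r_i is revealed and checked against its commitment,
# then the public random number is the sum modulo L.
assert all(verify(d, r, n) for (d, n), r in zip(commitments, draws))
r_public = sum(draws) % L
```

Because each party commits before seeing any other value, no party can choose its contribution as a function of the others, which is the core of the verifiability argument.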
It is easy to see that Protocol 1 draws r verifiably: if at least one party P_i is honest, it has generated a uniformly distributed number r_i, and r is also uniformly distributed because the dishonest parties P_j cannot choose their r_j as a function of the other parties' numbers, since they start by committing to r_j. Note that Protocol 1 is a generalization for multiple parties of [10]. We present the protocol in more detail and prove its security in Appendix B.3.
Protocol 2 (Private uniform sampling). One can sample a vector of k numbers private to P 1 as follows: P 1 draws uniformly at random a vector a = (a 1 , . . . , a k ) ∈ [0, L) k and publishes a vector commitment C to it. Then, all parties generate jointly a public random number r ∈ [0, L) with Protocol 1. P 1 expands r to random numbers (r 1 , . . . , r k ) ∈ [0, L) k using a PRG. Finally, for i ∈ [k], P 1 computes u i = a i + r i mod L and performs a zero knowledge proof of the modular sum for each u i . (u 1 , . . . , u k ) is a vector of private uniform random numbers.
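The arithmetic of Protocol 2 can be sketched as follows. The commitments and the zero-knowledge proofs of the modular sums are elided, and the SHA-256-based expansion is an assumed stand-in for any PRG:

```python
# Sketch of the private uniform draw: P_1's committed vector a is shifted
# by PRG output derived from a jointly generated public random number.
import hashlib
import secrets

L = 2**16  # range of each uniform number
k = 4      # how many private numbers P_1 draws

def prg(seed: int, k: int) -> list[int]:
    # Expand the public random number into k values in [0, L)
    # (SHA-256 in counter mode, as a stand-in PRG).
    out = []
    for i in range(k):
        h = hashlib.sha256(f"{seed}:{i}".encode()).digest()
        out.append(int.from_bytes(h[:8], "big") % L)
    return out

# P_1 draws and commits to its private vector a before the joint coin toss
# (the commitment itself is elided in this sketch).
a = [secrets.randbelow(L) for _ in range(k)]

# Public random number r, jointly generated as in Protocol 1.
r = secrets.randbelow(L)
rs = prg(r, k)

# u_i = a_i + r_i mod L; in the protocol, P_1 proves each modular sum
# in zero knowledge against the commitments.
u = [(ai + ri) % L for ai, ri in zip(a, rs)]
```

Since a was fixed before r became known, u is uniform as long as either P_1 or the coin toss is honest, matching the intuition of the protocol.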
Again, it is easy to see that (u_1, . . . , u_k) is drawn verifiably. This protocol has many aspects in common with the Augmented Coin-Tossing protocol defined in [29, Section 7.4.3.5]. We provide a proof of the security of Protocol 2 in Appendix B.4.
Protocol 3 (Hidden uniform sampling). For each i ∈ [n], let party P_i generate its own random number r_i uniformly distributed over [0, L) and publish a commitment C_i to it. Then, the parties consider (r_1, . . . , r_n) as a secret sharing of the random number r = ∑_{i=1}^n r_i mod L.
After running Protocol 3, if there is an honest party, r is fixed and follows the right probability distribution, and as not all parties collude, no party knows more about r than that it follows the uniform distribution over [0, L).
The problem of finding efficient verifiable sampling protocols becomes more challenging when D is not the uniform distribution, but a more general distribution such as a normal distribution or a Laplace distribution. Even for single party computation there sometimes exist multiple approaches with varying cost and precision.

RELATED WORK
Below we describe lines of work that are related to ours.
Multiparty Computation Between Unreliable Participants. The seminal work of [10] proposed the first protocol to sample a public random bit (i.e. tossing a coin) between two parties that do not trust each other. Subsequent works such as [13] proposed protocols to perform coin tossing between an arbitrary number of parties.
The work of [20] proved that in the malicious model without aborts it is impossible for a multiparty protocol to be guaranteed to finish correctly and perform an unbiased coin toss if the number of malicious users is half or more of the total number of participants. For such cases, there is no other possibility than providing weaker security guarantees. In the framework of malicious security with abort [29], protocols either end correctly or are aborted by malicious parties. This could lead to bias in the computations if a protocol is restarted after an abort and the adversary speculatively chooses when to allow the protocol to finish correctly. To prevent this, a possible solution is to identify and punish malicious parties that cause aborts. The work of [3] proposes covert security, where cheating adversaries can get caught with a certain probability. This is weaker than malicious security with abort, but allows cheaters to be detected. A stronger notion is malicious security with identifiable abort [33], where a party that cheats causes the protocol to abort with overwhelming probability and, in addition, the cheater is identified. Our work fits in that framework.
If deterrence measures are strong enough, this could be sufficient to discourage malicious behavior. Otherwise, if corrupted peers are willing to sacrifice themselves at any cost, other measures can be taken to attenuate the bias as much as possible [5,43].
The work of [30] proposes a method to securely perform a wide family of randomized computations (related to interactive games) over private data and private random numbers, using zero knowledge proofs to verify correctness. They prove that this is secure in the ideal paradigm without abort if a majority of the parties are honest.
Sampling From Gaussians and Other Popular Non-Uniform Distributions. Distributions such as the Gaussian distribution, the Laplace distribution, the Poisson distribution or the exponential distribution are important in the field of statistics. Algorithms to securely draw from such distributions have applications in federated machine learning. Several contributions concern the problem of verifiable noise for differential privacy [35,50] and hence can benefit from secure drawing.
Even in the semi-honest model where parties follow the specified protocol, drawing hidden random numbers is sometimes non-trivial. For example, in [19] one needs to compute a sum of statistics plus a Laplace-distributed noise term, hence the authors propose a protocol where parties generate random numbers summing to a Laplace-distributed value which can then be included in a secure aggregation without being revealed.
In [26], protocols are proposed to generate secret-shared samples for the Gaussian, Exponential and Poisson distributions. For the Gaussian distribution, their approach generates samples by averaging uniform seeds, a method which we call the Central Limit Theorem (CLT) approach. We compare the CLT approach with our approaches in Section 9. Even though more than a decade has passed since [26], recent contributions still resort to these techniques to generate Gaussian samples among unreliable participants. For example, recent protocols use the technique of [26] by adapting it to generate private draws from the Exponential distribution [45] and to sample hidden draws from the Binomial distribution [9]. The work of [41] proposes techniques to securely sample from the geometric and Gaussian distributions, both building on [26], and studies them in the light of differentially private memory access patterns. In addition, [41] defines an extension of the malicious security model which includes information leakage, as measured in differential privacy, and proves the security of their protocols within this model.
In our work, we propose new techniques for privately drawing from the Gaussian distribution and show that, for all but the lowest precision requirements, all our techniques outperform the technique of [26], which is the most efficient method known so far. The same dynamics are at play for the exponential, Poisson and Laplace distributions. Compared to the techniques in [26], our methods have a better complexity as a function of the precision parameter. We extend our methodology to hidden draws from Gaussian, Laplace and arbitrary distributions. Achieving sufficient precision when sampling is important for both the statistical quality and the security of the algorithms [42].

Implementation of Math Functions Using Cryptographic Primitives.
Using secret sharing techniques, there is a large body of work on how to compute math functions such as square roots, logarithms and trigonometric functions [1,4,25,32,37]. However, these works usually rely on splines or other approximation techniques that approximate functions by splitting the domain and using low-degree polynomials for each part, or alternatively on rational approximations. Piecewise approximations require the use of conditionals, which are expensive when computing with secret shares, and rational approximations only allow for a fixed precision. Our work uses iterative approximations, which allow customizing the precision of the approximation and are easy to compute given that we avoid the use of comparison gates in our circuits. Furthermore, for the Gaussian distribution, piecewise approximations require an external method to sample from the tails of the distribution. We also show protocols for private sampling from the Gaussian and Laplace distributions where we avoid the high cost of secret-shared computation by letting one party perform the calculation and then prove correct behavior using compressed Σ-protocols.
Using Zero Knowledge Proofs for such functions, as we do in our work, is a less explored technique. [54] proposes techniques to prove a limited set of relations involving common activation functions in machine learning.

METHOD
We start by discussing two generic approaches: a strategy based on the inverse cumulative probability distribution and a strategy based on table lookup.

Inverse Cumulative Probability Distribution
Assume D is a probability distribution on X ⊆ R. The cumulative probability distribution is defined as F_D(t) = P(x ≤ t) = ∫_{x ≤ t} P_D(x) dx. To the extent D is discrete, we can see P_D as a sum of scaled Dirac delta functions over which integration is possible and results in a sum. Then, the inverse F_D^{−1} is a function on the interval (0, 1). An approach to sampling from arbitrary distributions D on domains X ⊆ R, known as the inversion method, consists of sampling uniformly from the (0, 1) interval and applying the inverse of the cumulative distribution function.
Public Sampling From an Arbitrary Distribution. This approach can easily be applied to draw random numbers publicly:
Protocol 4 (Public draw from arbitrary distribution). Run Protocol 1 to generate a public uniformly distributed random number r′, and then publicly compute r = F_D^{−1}(r′).
Using the inversion method for private or hidden draws is more involved, since one needs a multi-party algorithm to compute F_D^{−1} or a ZKP algorithm to prove to other parties that F_D^{−1} was applied correctly. In many practical cases, F_D^{−1} does not have a simple closed form. This especially holds for the Gaussian distribution, which we will discuss in more detail in Section 8.
We can extend this method to multi-variate distributions. For example, consider a distribution D over R^2. To sample a pair (x, y) according to D, we first define P_x(x) = ∫ P_D(x, y) dy, apply the inversion method to draw a random number x according to P_x, and then define P_{y|x}(y) = P_D(y|x) = P_D(x, y)/P_x(x) and apply again the inversion method to draw a random y.
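A minimal single-party sketch of the inversion method for a discrete distribution (the particular distribution on {−1, 0, 1} is an arbitrary example):

```python
# Inversion method: draw u ~ U(0, 1) and return F_D^{-1}(u).
import bisect
import random

def inverse_cdf_sampler(support: list[int], probs: list[float]):
    # Precompute the cumulative distribution F_D over the support.
    cdf, acc = [], 0.0
    for pr in probs:
        acc += pr
        cdf.append(acc)
    def sample(u=None):
        # F_D^{-1}(u): the first support point whose CDF value reaches u.
        u = random.random() if u is None else u
        return support[bisect.bisect_left(cdf, u)]
    return sample

# Example: P(-1) = 1/4, P(0) = 1/2, P(1) = 1/4.
sample = inverse_cdf_sampler([-1, 0, 1], [0.25, 0.5, 0.25])
```

The same table of CDF values discretized to a fixed grid is what the table-lookup strategy of Section 5.2 precomputes up front.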

Table Lookup
As pointed out above, practical inverse cumulative probability functions are often expensive to compute, especially in a secure multiparty setting. In such scenarios approaches such as the ones discussed in Sections 5.1 and 8 incur a high cost for each drawn random number. In this section we consider an approach based on table lookup. While the involved techniques are well-known, this approach is interesting as a baseline, especially as it has a number of properties which are different from the other methods considered in this paper. In particular, the method studied here has a high pre-processing cost but then allows for drawing random numbers at a low constant cost per drawn random number.
Protocol 5 (Private draw via table lookup).
• Preprocessing. Let M ∈ N. The parties publicly pre-compute the pairs (i, F_D^{−1}(i/M)) for all i ∈ [M] and store them in a database DB.
• Sampling. Party P_1 privately draws, using Protocol 2, a random number r′ uniformly distributed over [M], computes the corresponding r, publishes commitments to r′ and r, and publishes a ZKP that (r′, r) ∈ DB.
In Protocol 5 a zero knowledge set membership proof is needed. There is a large body of work on this topic since [15], which showed the first method with a large preprocessing cost (linear in M) but only a unit communication cost for proving membership. Several improvements have been proposed which vary in their assumptions and efficiency; [7] discusses some lines of recent work.
Merely storing the database DB may take a prohibitive amount of space if high precision is needed, as M is exponential in the number of desired correct digits. As a result, this technique can only be used when the needed precision is not too high. When it is feasible, it is expensive for drawing only a few random numbers, but it can become more efficient than the other methods if a huge number of random numbers needs to be drawn, as asymptotically the constant cost per sample will dominate.
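The preprocessing/sampling split of the table-lookup approach can be sketched as follows, assuming (as in the inversion method of Section 5.1) that the table stores inverse-CDF values; the standard Gaussian target, the midpoint evaluation, and the elided commitments and set-membership ZKP are all assumptions of this sketch:

```python
# Table-lookup sampling: pay once to build DB, then each draw is a lookup.
import secrets
from statistics import NormalDist

M = 2**12  # table size; grows exponentially with the desired precision

# Preprocessing: public table of pairs (i, F^{-1}((i + 0.5) / M)) for the
# standard Gaussian (midpoints keep the argument strictly inside (0, 1)).
DB = {i: NormalDist().inv_cdf((i + 0.5) / M) for i in range(M)}

# Sampling: P_1 privately draws r' uniform over [0, M) (Protocol 2 in the
# paper; plain randomness here) and looks up r. Publishing commitments to
# r' and r plus the ZKP that (r', r) is in DB is elided.
r_prime = secrets.randbelow(M)
r = DB[r_prime]
```

The per-sample cost is a single lookup plus the membership proof, independent of the target distribution's complexity, which is the trade-off against the exponential storage discussed above.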

Laplace Distribution
The Laplace distribution, denoted Lap(b), is defined by the density P_Lap(b)(x) = (1/(2b)) exp(−|x|/b). The cumulative distribution is F_Lap(b)(t) = (1/2) exp(t/b) for t < 0 and F_Lap(b)(t) = 1 − (1/2) exp(−t/b) for t ≥ 0. To sample a number r from Lap(b) it is convenient to separately draw the sign s and the absolute value a of r. Then, P(s = −1) = P(s = 1) = 1/2, P(a) = (1/b) exp(−a/b) and P(a ≤ t) = 1 − exp(−t/b). In Section 7 we will describe protocols for both private and hidden Laplace-distributed draws.
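The sign-and-magnitude decomposition just described can be sketched in a plain single-party setting (this is only the sampling recipe, not the verifiable protocols of Section 7):

```python
# Laplace sampling: uniform sign times an exponential magnitude,
# with the magnitude drawn by inverting P(a <= t) = 1 - exp(-t/b).
import math
import random

def sample_laplace(b: float) -> float:
    # P(s = 1) = P(s = -1) = 1/2
    s = 1 if random.random() < 0.5 else -1
    # Inversion: a = -b * ln(1 - u) for u ~ U(0, 1); 1 - u stays in (0, 1].
    a = -b * math.log(1.0 - random.random())
    return s * a
```

Splitting sign and magnitude is convenient later because the magnitude only needs a logarithm of a uniform number, a relation the elementary-function proofs of Section 6 can handle.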
Gaussian Distribution
We will sometimes use the shorthand P_N = P_{N(0,1)}. The cumulative distribution is F_N(t) = (1/2)(1 + erf(t/√2)), where erf is the error function. There is no closed form for P_N, F_N nor its inverse. In the single-party setting, multiple strategies have been investigated to sample from this important distribution:
• the Central Limit Theorem (CLT) approach, which consists of sampling repeatedly from a uniform distribution and computing the average, which is simple but requires O(1/∆^2) time for a root mean squared error ∆,
• the Box-Müller method [12], which obtains two Gaussian numbers from two uniform samples by the application of a closed-form formula, but involves the computation of a square root, trigonometric functions and a logarithm,
• rejection sampling methods, such as the polar version of Box-Müller [36] or the Ziggurat method [40], which are efficient and highly accurate. While the former avoids the computation of trigonometric functions and leads to an efficient verifiable implementation, the latter uses several conditional branches which are expensive to prove in zero knowledge and requires an external method for sampling in the tails of the distribution,
• the inversion method for Gaussians, which involves the approximation of the inverse error function erf^{−1}, which can be done with rational functions or Taylor polynomials, and
• the recursive method of Wallace [51], which is very popular for its efficiency, but requires as input a vector of already generated Gaussian samples to generate an output vector of the same size; furthermore, samples from input and output vectors are correlated, which deteriorates the statistical quality.
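As a point of reference, the first two strategies listed above, Box-Müller and the CLT approach, can be sketched in a single-party setting (plain sampling only, without any verifiability):

```python
# Two single-party Gaussian samplers: Box-Muller and the CLT average.
import math
import random

def box_muller() -> tuple[float, float]:
    # Two independent N(0, 1) samples from two uniforms; note the log,
    # square root and trigonometric functions involved.
    u1, u2 = random.random(), random.random()
    rho = math.sqrt(-2.0 * math.log(1.0 - u1))  # 1 - u1 in (0, 1]
    return rho * math.cos(2 * math.pi * u2), rho * math.sin(2 * math.pi * u2)

def clt_gaussian(m: int = 12) -> float:
    # CLT approach: average m uniforms, shift and rescale to mean 0, var 1.
    # Accuracy improves only as O(1/sqrt(m)), hence O(1/delta^2) samples
    # for error delta.
    return (sum(random.random() for _ in range(m)) - m / 2) * math.sqrt(12.0 / m)
```

The contrast between the two is exactly the trade-off discussed in the text: Box-Müller is exact but needs transcendental functions (expensive in zero knowledge), whereas the CLT approach needs only additions but converges slowly in the precision parameter.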
Before studying some of these in the multi-party setting, we will first provide Σ-protocols of relations involving approximations of certain elementary functions.

PROOFS OF ELEMENTARY FUNCTIONS
In this section, we construct zero knowledge proofs of statements that involve the approximation of elementary functions, i.e., sine, cosine, natural logarithm and square root. These functions can be numerically approximated using basic operations such as addition and multiplication. While classic cryptographic tools are used to prove statements over integers, we operate with real numbers which we approximate with fixed precision. Therefore, we represent integer multiples of 2^−ψ by multiplying our values with 2^ψ and rounding them deterministically to obtain elements of Z_p. Negative numbers are represented in the upper half of Z_p; for example, a number a < 0 is represented by p + 2^ψ a. The set of representable numbers is denoted by Q_⟨p,ψ⟩, which is closed under addition and multiplication modulo p (with results rounded to multiples of 2^−ψ). The encoding of v ∈ Q_⟨p,ψ⟩ is denoted by ⟨v⟩ = 2^ψ v mod p.
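As a concrete illustration of this encoding, the sketch below maps reals to Z_p and back; the toy modulus and precision are our own illustrative choices, not the paper's parameters.

```python
# Sketch of the fixed-point encoding <v> = 2^psi * v mod p used for
# Q_<p,psi>.  The prime below is a small illustrative stand-in; a real
# instantiation uses the group order of the commitment scheme.
P = 2**61 - 1   # assumed toy prime modulus
PSI = 16        # precision parameter psi

def encode(v: float) -> int:
    """Round v to a multiple of 2^-psi and map it into Z_p,
    representing negative numbers in the upper half of Z_p."""
    return round(v * 2**PSI) % P

def decode(c: int) -> float:
    """Invert encode: values above p/2 represent negative numbers."""
    if c > P // 2:
        c -= P
    return c / 2**PSI
```

Additions of encodings modulo P then match additions of the represented values, as long as no wrap-around past p/2 occurs.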
We show circuits such that the nullity of their output is equivalent to the statements we want to prove. We will first construct circuits to describe low level statements and then use these as building blocks for higher level statements. In the end, we apply compressed Σ-protocols (see Section 2.4) to produce zero knowledge proofs of these circuits. For parameters (a; b) of all circuits defined below, a always contains public constants and b private values.
We present in Section 6.1 circuits for proving various types of simple statements. In Section 6.2, we introduce Cordic, the core approximation algorithm. We implement circuits to prove its correct execution in Section 6.3, and details on how to expand its domain of application, particularly for our sampling techniques, in Section 6.4.

Building Blocks
We introduce below proofs of basic statements that we will use to prove approximations, including the handling of statements about numbers in Q_⟨p,ψ⟩. Note that additions, multiplications by an integer and range proofs port directly to Z_p by our encoding ⟨·⟩. In Appendix A.5, we show that to prove that an integer x ∈ Z_p belongs to [0, 2^k) we can use a circuit C_Ra that takes as private input the bit vector x of x: its first expression checks that the bits of x recompose x, and its second expression evaluates to 0 if x is indeed the correct bit decomposition of x. Hence, the nullity of the circuit, i.e., its right-hand side evaluating to the zero vector, proves x ∈ [0, 2^k).
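The relation behind such a range proof can be checked directly over the integers; the sketch below assumes the standard bit-decomposition constraints (the bits recompose x, and each bit b satisfies b(b − 1) = 0).

```python
# Minimal sketch of the relation behind C_Ra: x lies in [0, 2^k) iff
# there is a bit vector (x_0, ..., x_{k-1}) with
#   sum_i x_i 2^i - x = 0   and   x_i (x_i - 1) = 0 for every i.
# A real circuit evaluates these expressions over Z_p; here we check
# them directly over the integers.
def range_check(x: int, bits: list[int], k: int) -> bool:
    if len(bits) != k:
        return False
    recompose = sum(b * 2**i for i, b in enumerate(bits)) - x
    bit_constraints = [b * (b - 1) for b in bits]
    # nullity of the circuit: every output wire must be zero
    return recompose == 0 and all(c == 0 for c in bit_constraints)
```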
Generalized Range Proof. C_Ra can be used twice to prove membership in any range.

(Right) Bit-Shift. For a ∈ Q_⟨p,ψ⟩ and an integer k > 0, a bit shift b = a >> k is equal to the largest value in Q_⟨p,ψ⟩ not exceeding a/2^k. For the vector s of the bit decomposition of ⟨a⟩ − 2^k⟨b⟩, the circuit C_>> proves via C_Ra that ⟨a⟩ − 2^k⟨b⟩ ∈ [0, 2^k). Note that in the definition of C_>>, as in all subsequent circuits, the evaluation of inputs to sub-circuits such as C_Ra are computations performed within the circuit.
Approximate Product. For private a, b, c ∈ Q_⟨p,ψ⟩, it can be proven that c is the rounding of ab, i.e., that 0 ≤ ab − c < 2^−ψ, by proving that ⟨a⟩⟨b⟩ − 2^ψ⟨c⟩ ∈ [0, 2^ψ).

Approximate Division. For private a, b, c ∈ Q_⟨p,ψ⟩, we prove that c is approximately a/b with error 2^−ψ. We also require that b ∈ [A, B] for public A, B ∈ Q_⟨p,ψ⟩. We prove that ⟨b⟩⟨c⟩ − 2^ψ⟨a⟩ + ⟨b⟩ ∈ [0, 2⟨b⟩). Our range proofs require that the bounds are public, so we rely on the public interval [A, B] for b to bound this expression.

Exponentiation in Z_p. Let y, x ∈ Z_p be private values with x ∈ [0, 2^k) and E ∈ Z_p a public integer such that E^x < p/2. We prove that y = E^x.

Modular Sum. We prove, for private x, z ∈ Z_p and public y ∈ Z_p such that all belong to [0, M), that z = x + y mod M, i.e., that z = x + y − bM for a bit b ∈ {0, 1} with z ∈ [0, M). Let x_1, x_2, z_1 and z_2 be vectors of intermediate values for C_GRa and let b ∈ {0, 1}; the resulting circuit is denoted C_Mod. Ideas for C_IEx and C_Mod are taken from pages 112–115 of [16].
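The approximate-product relation can be illustrated over plain integers; the concrete check below (0 ≤ ⟨a⟩⟨b⟩ − 2^ψ⟨c⟩ < 2^ψ) is our reading of the rounding condition, chosen by analogy with the division check given in the text.

```python
# Sketch of the approximate-product relation: c is the rounding
# (towards zero on the 2^-psi grid) of a*b exactly when
# <a><b> - 2^psi <c> lies in [0, 2^psi).  Encodings are plain integers
# here; a real proof would range-check the difference inside Z_p.
PSI = 8

def enc(v: float) -> int:
    return round(v * 2**PSI)

def approx_product_holds(a: float, b: float, c: float) -> bool:
    d = enc(a) * enc(b) - 2**PSI * enc(c)
    return 0 <= d < 2**PSI
```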
Private Magnitude Shift. Here, we prove that y = x >> k for a public bound K and a private k ≤ K. Let k, k′, k″ ∈ {0, 1}^K and h ∈ Z_p be intermediate values for range proofs and integer exponentiations.

Cordic Algorithm
We use the Cordic algorithm [52] for approximations, which has long been state of the art for computations of elementary functions from simple operations [44]. Essentially, it uses the same core iteration algorithm, which only uses additions and bit-shifts, for all elementary function approximations. We will use Cordic parameterized for two settings described below, the first is used for sine and cosine and the second for square root and logarithm. In what follows, we only provide an algorithmic description of the Cordic algorithm as is needed in order to understand our extension to the zero knowledge setting in Section 6.3.
Setting 2 (ln(x) and √x). In this setting Cordic only takes two inputs X_0, Y_0 ∈ Q_⟨p,ψ⟩ and, with θ_0 = 0, it performs hyperbolic iterations. Setting X_0 = x + 1 and Y_0 = x − 1 yields that θ_ν is an approximation of (1/2) ln(x) with error at most 2^{1−F_ν}. Similarly, for the same ψ and domain of x, √x can be obtained by setting X_0 = x + 1/4 and Y_0 = x − 1/4, again with error at most 2^{1−F_ν}.
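For intuition, the logarithm case of Setting 2 can be simulated with the standard hyperbolic-vectoring form of Cordic; the concrete initialization and the iteration schedule below (with the classical repeated indices 4, 13, 40) are textbook choices and may differ from the paper's exact parameterization.

```python
import math

def cordic_ln(x: float, nu: int = 40) -> float:
    """Hyperbolic Cordic in vectoring mode, a sketch of Setting 2:
    with X0 = x + 1, Y0 = x - 1 and theta0 = 0, the accumulated angle
    converges to artanh((x - 1)/(x + 1)) = ln(x)/2."""
    seq, i = [], 1
    while len(seq) < nu:
        seq.append(i)
        if i in (4, 13, 40):    # repeated indices ensure convergence
            seq.append(i)
        i += 1
    X, Y, theta = x + 1.0, x - 1.0, 0.0
    for i in seq[:nu]:
        xi = -1.0 if Y > 0 else 1.0           # drive Y towards zero
        X, Y = X + xi * Y * 2.0**-i, Y + xi * X * 2.0**-i
        theta -= xi * math.atanh(2.0**-i)     # precomputed table in practice
    return 2.0 * theta
```

Only additions, shifts and sign tests are needed inside the loop besides the precomputed artanh table, which is what makes the iteration cheap to prove in zero knowledge.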

Cordic in Zero Knowledge
We first specify a set of statements that together are equivalent to a correctly performed Cordic computation. Note that the iterations in Settings 1 and 2 are very similar: except for the correctness of the ξ_i values, they can be described by one common set of equations in which only the constants differ between the two settings (in particular, Setting 1 uses m = 1 and Setting 2 uses m = −1). To prove the correct value of the ξ_i's, we avoid wide range checks at each iteration (on θ − θ_i or Y_{i−1}); instead, we exploit the convergence properties of Cordic, which bound all intermediate quantities of the iteration in both settings.
We outline below the circuits that imply the above statements. Let S ∈ {1, 2} be the Cordic setting that defines the involved constants. Let ξ* = (ξ_1, . . . , ξ_ν) and let I be the vector of all intermediate values, in which auxiliary bit vectors s_i ∈ {0, 1}^{F_i} are used to prove the bit shifts of X_i and Y_i with results X′_{i−1} and Y′_{i−1} respectively; the nullity of the resulting core circuit proves the common iteration equations. We complete the core circuit for Setting 1 with a circuit that proves eqs. (2) to (6). Similarly, we extend the core circuit to a complete one for Setting 2: for s_Y equal to the bit decomposition of Y_ν, the extended circuit proves eqs. (2) to (5) and (7).

Extending the Domain
Here we extend the domain of approximations, which is necessary for our sampling applications. We sometimes do not define inputs such as bit vectors for range proofs and other intermediate values that are clear from the context or that are already defined in previous circuits.
Sine and Cosine. As shown, sine and cosine can be approximated in [−π/2, π/2]. For a quadrant Q ∈ {1, 2, 3, 4} we use standard quadrant identities to extend the domain to [0, 2π]. Let s_π, s′_π be bit vectors as needed for C_GRa, and let I_Tд = (⟨θ′⟩, ⟨s′⟩, ⟨c′⟩, Q, s_π, s′_π, I_T), where the identities for each i ∈ {1, 2, 3, 4} are literal variable replacements. The resulting circuit is denoted C_TrG.

Natural Logarithm. We extend the domain of ln(x) to (0, 1). We choose x′ ∈ [1/2, 1) and a non-negative integer e such that x = 2^−e x′ ∈ (0, 1), and we prove that l = ln(x′) − e ln(2) = ln(x). Let I_Lд = (e, h, e, s_{x′}, ⟨x′⟩, ⟨l′⟩, I_L); the resulting circuit is C_LoдG(ψ, ν; ⟨x⟩, ⟨l⟩, I_Lд).

Square Root. Now, for a public bound B > 0 and a private x ∈ [0, B], we prove that s = √x. Let γ = ⌊log₂(B)⌋ + 1. We choose x′ ∈ [1/2, 1) and an integer e ∈ [−ψ, γ] such that x = 2^e x′. We break the proof into several circuits to handle different cases, using bit variables as flags to decide which computation will be proven. Let n_e ∈ {0, 1} be the "negativity flag" of e and e′ ≥ 0 such that e = (1 − n_e)e′. Let i_e ∈ {0, 1} be the "parity flag" of e, such that e = 2f − i_e for an integer f. We also define f′ ≥ 0 such that f = (1 − n_e)f′. We first handle the relations between x, x′, s = √x and s′ = x′/(1 + i_e) when e is non-negative, or equivalently, when n_e = 0; for I_D1 = (l, f, I_>>) this case is described by one circuit. Similarly, for I_D2 = (h, e, I′_>>) the case when e is negative is described by another circuit. While the circuits above are easier to read in this form, the practical implementation contains a number of further optimizations to reduce the number of multiplications. In particular, additional variables are introduced to avoid multiplying flags such as i_e with larger vectors such as the output of a C_Ra circuit.

THE LAPLACE DISTRIBUTION 7.1 Private Laplace Sampling
Protocol 6 (private drawing from Laplace). First, party P 1 privately draws s 0 and a ′ uniformly at random in [0, L) with L sufficiently large (Protocol 2). Then, P 1 computes a = −b log(1 − a ′ /L), s = 2(s 0 mod 2) − 1 and r = sa, and provides a ZKP for these relations (in Section 6.3 we showed a ZKP for the logarithm function, in Section 6.1 for approximate division).
As Protocol 2 verifiably draws random numbers uniformly and for the other computations in Protocol 6 a ZKP is provided, Protocol 6 verifiably draws random numbers from the Lap(b) distribution.
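The computations of Protocol 6 (without the accompanying ZKPs) can be simulated in a few lines:

```python
import math
import random

def protocol6_draw(b: float, rng: random.Random, l: int = 32) -> float:
    """Single-party simulation of the arithmetic in Protocol 6, ZKPs
    omitted: s0 and a' are uniform in [0, L), a = -b log(1 - a'/L),
    s = 2 (s0 mod 2) - 1 and r = s a."""
    L = 2**l
    s0 = rng.randrange(L)
    a_prime = rng.randrange(L)
    a = -b * math.log(1.0 - a_prime / L)   # inverse CDF of the magnitude
    s = 2 * (s0 % 2) - 1                   # uniform sign in {-1, 1}
    return s * a
```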
An alternative method could be based on work by [26] (see also [19] for related ideas). In particular, [26] proposes a technique to sample directly from the exponential distribution using a range of independent biased coin flips. The main advantage of our method is that we only need one uniformly sampled public random number, which strongly reduces the communication cost.
This remark also holds for the protocol for hidden Laplace draws which we will present in Section 7.2 below.

Hidden Laplace Sampling
We can also make hidden draws from the Laplace distribution, i.e., drawing a Laplace-distributed random number r as a secret share ⟦r ⟧. For this, we build on the basic operations discussed in Section 2.5.
First, we observe that one can sample a sign s uniformly from {−1, 1} as follows: the parties apply Protocol 3 to draw a secret shared random number uniformly distributed in Z_p, obtaining the sharing ⟦t⟧; next they multiply the sharing with itself to obtain ⟦t²⟧ and open it to reveal t²; finally they multiply ⟦t⟧ with the public constant 1/√(t²) to obtain ⟦s⟧ = ⟦t/√(t²)⟧ with s ∈ {−1, 1}. Drawing a secret shared random bit b ∈ {0, 1} is then just drawing a sign ⟦s⟧ and computing ⟦b⟧ = (⟦s⟧ + 1)/2 (this is protocol RAN_2 in [23]). The Cordic algorithm for logarithm computation described in Section 6.2 requires only additions, bit shifts and comparisons (when setting ξ_i = sign(−Y_{i−1})). While it does not directly use multiplications, implementations of bit operations and comparisons, e.g., as in [23], often use multiplications, so multiplications cannot be fully avoided. Alternative strategies to compute the logarithm suffer from similar challenges.
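A single-party sketch of the sign-sampling trick, with plain integers standing in for secret-shared elements of Z_p (where the square root would be computed modulo p):

```python
import math
import random

def sample_sign(rng: random.Random) -> int:
    """Draw a random nonzero t, open t^2 publicly, and scale t by the
    public constant 1/sqrt(t^2), yielding t / sqrt(t^2) in {-1, 1}."""
    t = 0
    while t == 0:                      # t must be nonzero (invertible)
        t = rng.randint(-1000, 1000)   # stand-in for a uniform Z_p draw
    t_sq = t * t                       # this value is opened publicly
    return t // math.isqrt(t_sq)       # t / sqrt(t^2), in {-1, 1}
```

A shared random bit then follows as b = (s + 1)/2.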
As in Section 7.1, we want to draw a number a ′ uniformly from [0, L), compute a = −b log(1 − a ′ /L) and multiply it with a random sign s to get the random number as distributed according to Lap(b). To compute log(x), Cordic expects x ∈ [1/4, 1), so before applying Cordic we may need to scale its input to fit this interval.
We set L = 2^l for some sufficiently large integer l and generate a′ as an l-bit number, i.e., ⟦a′⟧ = Σ_{i=0}^{l−1} ⟦a^(i)⟧ 2^i where the a^(i) are random bits.
We can find the highest zero bit of a′ as follows: set h_l = 0, h′_l = 1 and a^(−1) = 0, and for i = l − 1 down to −1 compute ⟦h′_i⟧ = ⟦h′_{i+1}⟧⟦a^(i)⟧ and ⟦h_i⟧ = ⟦h′_{i+1}⟧(1 − ⟦a^(i)⟧). The meaning of h_i then is 'bit i is the highest 0-bit', and the meaning of h′_i is 'bit i and all higher bits are ones'. Among i = −1 . . . l − 1, exactly one h_i equals 1 and all others are 0. We can then write the scaled Cordic input x in terms of shifted bits, ⟦x^(i)⟧ = Σ_{j=0}^{l−1} ⟦h_j⟧⟦a^(i+j+1−l)⟧ (with a^(i) = 0 for i < 0). Now we can apply Cordic on x. Cordic needs additions (using the Bit-add protocol), bit shifts (moving bits to the right and duplicating the highest bit), the sign(·) function (checking the highest bit) and negation (inverting all bits and adding 1).
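The scan can be illustrated in the clear; the recurrence below (h′_i = h′_{i+1}·a^(i), h_i = h′_{i+1}·(1 − a^(i)), with h′_l = 1 and a^(−1) = 0) is our reconstruction of the step left implicit above.

```python
def highest_zero_bit(bits: list[int]) -> list[int]:
    """Sketch of the highest-zero-bit scan on the bits a^(0..l-1) of a'
    (least significant first).  The returned list holds
    h_{-1}, h_0, ..., h_{l-1}; exactly one entry equals 1.  On secret
    shares, each step costs one share multiplication."""
    l = len(bits)
    h = [0] * (l + 1)       # h[i + 1] stores h_i for i = -1 .. l-1
    hp = 1                  # h'_l = 1: the (empty) run of ones holds
    for i in range(l - 1, -2, -1):
        a_i = bits[i] if i >= 0 else 0     # a^(-1) = 0
        h[i + 1] = hp * (1 - a_i)          # bit i is 0, all above are 1
        hp = hp * a_i                      # extend the run of ones down
    return h
```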
Protocol 7 (hidden drawing from Laplace). One can verifiably draw a hidden Laplace-distributed random number by following the steps explained above and providing ZKPs for all computations. The ZKPs are similar to those for private sampling; where parts of secret shares are transferred between parties, the parties can agree on the commitment which will represent the shared number.

THE GAUSSIAN DISTRIBUTION
In this section we elaborate several strategies to sample from the Gaussian distribution.
In particular, we are interested in protocols such that upon termination one party holds a private number y ∈ Q_⟨p,ψ⟩ and has provided a zero knowledge argument that y was drawn from N(µ, σ²) for some public µ and σ.
All methods require as a subprotocol sampling uniformly distributed numbers. Therefore all our protocols for private drawing numbers from the Gaussian distribution follow the same high-level structure: (1) use Protocol 2 to verifiably draw uniformly distributed number(s), (2) transform the uniformly distributed number(s) into Gaussian distributed number(s), and (3) use an arithmetic circuit matching this transformation together with compressed Σ-protocols (see Section 2.4) to prove the transformation.

The Central Limit Theorem Method
The uniform distribution over the interval [0, L) has variance L²/12. Let a party privately draw N random numbers x_1, . . . , x_N uniformly from [0, L) and compute the rounding x of (Σ_{i=1}^N x_i − NL/2)/√(NL²/12); by the Central Limit Theorem, x approximately follows N(0, 1). For a ZKP of this relation between x and the x_i we only need the homomorphic property of the Pedersen commitment (for the additions and the multiplication with a constant) and a range proof (for the rounding). This method is essentially the technique of [26] to sample from the Gaussian distribution.
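A minimal sketch of the CLT transformation; N = 48 and L = 2^16 are illustrative choices rather than the paper's evaluated parameters.

```python
import random

def clt_gaussian(rng: random.Random, N: int = 48, L: int = 2**16) -> float:
    """Sum N uniform draws from [0, L) and standardize with mean
    N L / 2 and variance N L^2 / 12; the result approximates N(0, 1)."""
    total = sum(rng.randrange(L) for _ in range(N))
    return (total - N * L / 2.0) / (N * L * L / 12.0) ** 0.5
```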
The Box-Müller Method

The Box-Müller method [12] draws U_1 and U_2 uniformly from (0, 1) and computes ρ = √(−2 ln(U_1)), X_1 = ρ cos(2πU_2) and X_2 = ρ sin(2πU_2). Then, X_1 and X_2 are distributed according to N(0, 1). Now we use circuits of elementary functions defined in Section 6 to construct our proof. Recall the parameter ψ and the number of Cordic iterations ν defined therein. Let I_Sд be a vector containing all intermediate values of the computation. Then the approximation circuit is composed of C_SqrtG(ψ, ν, 2ψ ln(2); −2⟨l⟩, ⟨ρ⟩, I_Sд), C_Prod(ψ; 2⟨π⟩, ⟨U_2⟩, ⟨a_π⟩, s_1), C_TrG(ν; ⟨a_π⟩, ⟨s⟩, ⟨c⟩, I_Tд), C_Prod(ψ; ⟨ρ⟩, ⟨c⟩, ⟨X′_1⟩, s_2) and C_Prod(ψ; ⟨ρ⟩, ⟨s⟩, ⟨X′_2⟩, s_3). Private values a_1 and a_2 and public challenges z_1 and z_2 are used in the modular proofs of circuit C_Mod to generate U_1 and U_2 with Protocol 2. To obtain a sample with a standard deviation different from 1, the resulting samples can be scaled with an extra C_Prod circuit. A different mean requires an extra addition gate.
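For reference, the underlying (non-verifiable) transformation is a few lines of code; we assume the standard formulation ρ = √(−2 ln U_1), X_1 = ρ cos(2πU_2), X_2 = ρ sin(2πU_2).

```python
import math
import random

def box_muller(rng: random.Random) -> tuple[float, float]:
    """Plain Box-Mueller sketch: two uniforms become two independent
    N(0, 1) samples via a closed-form transformation."""
    u1 = 1.0 - rng.random()             # in (0, 1], keeps the log finite
    u2 = rng.random()
    rho = math.sqrt(-2.0 * math.log(u1))
    return (rho * math.cos(2.0 * math.pi * u2),
            rho * math.sin(2.0 * math.pi * u2))
```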

The Polar Box-Müller Method
The polar method [36] is an optimization of Box-Müller that avoids the computation of sine and cosine by the use of rejection sampling. It samples two uniform values V_1 and V_2 in the (−1, 1) interval, and keeps the result only if 0 < V_1² + V_2² ≤ 1; otherwise V_1 and V_2 are re-sampled. For non-rejected V_1 and V_2 it computes d = V_1² + V_2² and X_i = V_i √(−2 ln(d)/d) for i ∈ {1, 2}, which are distributed according to N(0, 1).
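The rejection loop can be sketched as follows; the transformation X_i = V_i √(−2 ln(d)/d) with d = V_1² + V_2² is the standard polar formulation we assume here.

```python
import math
import random

def polar_gaussian(rng: random.Random) -> tuple[float, float]:
    """Polar rejection loop: resample (V1, V2) until
    0 < V1^2 + V2^2 <= 1, then transform the accepted pair.  In the
    verifiable protocol, rejected pairs can simply be revealed."""
    while True:
        v1 = rng.uniform(-1.0, 1.0)
        v2 = rng.uniform(-1.0, 1.0)
        d = v1 * v1 + v2 * v2
        if 0.0 < d <= 1.0:
            break
    factor = math.sqrt(-2.0 * math.log(d) / d)
    return v1 * factor, v2 * factor     # two independent N(0, 1) draws
```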
In the private sampling, if V_1 and V_2 are rejected, the prover can simply reveal them and start new uniform draws until acceptance, after which it proves the correctness of the accepted pair. As in Box-Müller, the parameters ν and ψ define our elementary function approximations, and a_1, a_2, z_1, z_2 are used to generate V_1 and V_2 in Protocol 2. Let I_Pol = (⟨s⟩, I_Sд, ⟨d⟩, ⟨α⟩, ⟨l⟩, I_Lд, s_6, s_5, s_4, s_3, s_2, s_1, ⟨α⟩, . . .) collect the intermediate values of the computation.

The Inversion Method
Inverting eq. (1) we get F_N^{−1}(u) = √2 erf^{−1}(2u − 1). There are many numerical strategies to approximate either erf or erf^{−1}, of which we implemented two. A first strategy, due to [18], is to use the series erf(x) = (2/√π) Σ_{n=0}^∞ (−1)^n x^{2n+1}/(n!(2n+1)). However, its approximation error grows as x gets larger. Therefore, when x is large, it is better to approximate erfc(x) = 1 − erf(x) using the asymptotic series erfc(x) = (exp(−x²)/(x√π)) (Σ_{l=0}^{L−1} (−1)^l (2l − 1)!!/(2x²)^l + R_L), where R_L is the remainder, l!! = 1 for l < 1 and (2l − 1)!! = Π_{i=1}^l (2i − 1). The number of terms L of the series is tuned for minimal error. If we set B to be the maximum error of our approximation, we use the erf series for small x and the erfc series for larger x, choosing the crossover point so that the error stays below B. A second strategy is proposed in [28] and uses a rational approximation. Therein, erf^{−1} is computed from w = −log(1 − x²) and s = √w using two polynomials p_1 and p_2 of degree 8. We use Cordic, product and range ZKPs to prove its computation.
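The truncated Taylor series for erf can be evaluated with a simple recurrence on the terms; the term count below is an illustrative choice, not a tuned parameter.

```python
import math

def erf_series(x: float, terms: int = 30) -> float:
    """Truncated Taylor series
    erf(x) = (2/sqrt(pi)) * sum_n (-1)^n x^(2n+1) / (n! (2n+1)),
    a sketch of the first inversion strategy (accurate for small x)."""
    total, term = 0.0, x               # term holds (-1)^n x^(2n+1) / n!
    for n in range(terms):
        total += term / (2 * n + 1)
        term *= -x * x / (n + 1)
    return 2.0 / math.sqrt(math.pi) * total
```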

Hidden Drawing
For the several strategies for sampling from the Gaussian distribution described above, one can construct a protocol based on secret sharing for hidden sampling. Similar considerations apply as in the discussion in Section 7.2. As an example, we show a protocol using the Central Limit Theorem approach.
Finally, let ⟦r⟧ = Σ_{i=log₂(N/12)/2}^{l+log₂ N} ⟦y^(i)⟧ 2^{i−log₂(N/12)/2}, i.e., r is obtained from y by dropping the least significant bits, effectively dividing by √(N/12). Then, r approximately follows a Gaussian distribution with mean 2^l √(3N) and standard deviation 2^l. The computations can be made verifiable using ZKPs, where parties that send a number to each other agree on using the same commitment.

EVALUATION
In this section, we present an empirical comparison of several methods to privately sample from the Gaussian distribution. We will publish code to reproduce all experiments together with the final version of this paper.

Setup
We evaluate the costs of the methods presented in Section 8. Namely, the Central Limit Theorem approach (CLT), the Box-Müller (BM), the Polar Method (PolM) and the inversion method. In the latter, we evaluate the two described strategies: using series (InvM-S) and rational approximations (InvM-R). Samples are generated for the N (0, 1) distribution.
We evaluate the cost of each method instantiated for several parameters against the statistical quality of the generated samples. For the computational cost we measure the exponentiations in G (GEX), which dominate the computation. The total communication cost is the number of elements of G and Z p sent (see Section 2.4). To measure the statistical quality, we generate 10 7 samples and measure the Mean Squared Error (MSE) from the ideal Gaussian CDF.
The varying parameter for BM and PolM is the number of iterations ν of their Cordic approximations, which is chosen between 2 and 14. For CLT we vary the number of averaged uniform terms between 2 and 400. For InvM-S, the number of terms of the approximation series is changed in order to obtain different approximations with errors between 0.5 and 2^−20. The rational approach InvM-R has no varying parameter. The representation parameter ψ which defines Q_⟨p,ψ⟩ is chosen to be the smallest allowed by BM, PolM and InvM-S due to approximation constraints, and for CLT it is set to optimize the quality/cost tradeoff.

Figure 1 shows the communication costs, i.e., the number of elements of G required to prove one Gaussian draw. Note that, as described in Section 2.4, 6 elements of Z_p must be added to obtain the final cost. If high precision is not important, CLT performs well, but in general PolM, BM and InvM-R give the best precision for a given computational investment. The InvM-R method is much simpler but cannot be tuned to other precisions. Figure 2 shows the number of GEX to prove (by the party who draws the number) or to verify (by another party) one Gaussian draw. Here too, BM and PolM are the most efficient methods as soon as a good statistical quality is required. We note that, as the communication cost is logarithmic in the number of inputs and multiplication gates, several parameter settings give different points in Figure 2 but may have the same communication cost, so in Figure 1 we just show the proof with the best statistical quality.

Results
For illustration, if we implement Pedersen commitments using the secp256k1 elliptic curve, we obtain 128-bit security and an element of G can be represented with 257 bits. One GEX using this curve takes no more than 30 microseconds on an Intel Core i7-6600U CPU at 2.60 GHz. With BM, PolM and InvM-R, a sample with MSE < 2^−20 requires less than 900 bytes of communication. With PolM, such a sample takes less than 360 milliseconds (ms) to prove and 75 ms to verify. While CLT quickly gets very expensive, if quality is less important and an MSE > 2^−13 is satisfactory, it is the most efficient approach. A proof of a sample using CLT with MSE 0.01 can be generated in less than 10 ms, verified in 3 ms and has a size of 482 bytes. We also note that it is possible to further optimize our implementation using special-purpose algorithms [46] to compute multi-exponentiations of the form g^b.

APPLICATION: DIFFERENTIALLY PRIVATE MACHINE LEARNING
An important application of verifiable sampling can be found in the field of federated machine learning under differential privacy. Consider parties P = {P_i}_{i=1}^n where each party P_i has some sensitive private data x_i. The parties P want to keep their data x_i private but want to collaborate to obtain statistical information θ of common interest. For example, assume that x_i ∈ R and the parties in P would like to compute θ = (1/n) Σ_{i=1}^n x_i. Even if no inputs nor intermediate results are revealed, computing and sharing the exact statistic θ may impact privacy. For example, suppose that x_i = 1 if P_i likes a particular idea and x_i = 0 if P_i doesn't like it. It may turn out that no party likes the idea of interest, in which case we would get θ = 0. If we publish that θ = 0, no party can claim anymore that it liked the idea, so its privacy is lost. Statistical notions of privacy, such as differential privacy [27], add noise to guarantee the privacy of the individuals independently of the output. In particular, let θ be a function mapping datasets to values in a set 𝒳. We say datasets are adjacent if they differ in the data of only one party. Then, we say a randomized algorithm A is (ϵ, δ)-differentially private (DP) if for any two adjacent datasets D_1 and D_2 and for any subset X ⊆ 𝒳, P(A(D_1) ∈ X) ≤ e^ϵ P(A(D_2) ∈ X) + δ. The most common strategy to make information DP before publication is to add noise from appropriately scaled Laplace or Gaussian distributions. For example, consider again the above example where the parties in P want to average their private x_i. Assume that ∀i ∈ [n] : 0 ≤ x_i ≤ 1. Then, for any ϵ > 0, if we set θ̃ = θ + η with η ∼ Lap(1/ϵ), θ̃ is ϵ-DP. Alternatively, for any ϵ > 0 and δ > 0, if we set θ̃ = θ + η with η ∼ N(0, 2 ln(1.25/δ)/ϵ²), θ̃ is (ϵ, δ)-DP.
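The Laplace mechanism of this averaging example is a one-liner on top of a Laplace sampler; the helper below is a plain (non-verifiable) sketch.

```python
import math
import random

def laplace(b: float, rng: random.Random) -> float:
    # magnitude is Exp(b) via the inverse CDF, sign is a fair coin,
    # mirroring the sign/magnitude decomposition used in the paper
    a = -b * math.log(1.0 - rng.random())
    return a if rng.random() < 0.5 else -a

def dp_average(xs: list[float], eps: float, rng: random.Random) -> float:
    """epsilon-DP release of the average of inputs in [0, 1], following
    the example above: publish theta + eta with eta ~ Lap(1/eps)."""
    theta = sum(xs) / len(xs)
    return theta + laplace(1.0 / eps, rng)
```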
It is important that no party knows the added noise η, because knowing both θ̃ and η would allow reconstructing θ = θ̃ − η. We present two protocols: one privately drawing random numbers, which produces a less accurate result, and one based on the more expensive hidden drawing, which has optimal precision.

Protocol 9 (DP learning using private sampling). Let P_cor ⊂ P be the set of corrupted parties of size at most ρn (with 0 ≤ ρ < 1). As described in Section 8, let each party P_i (i ∈ [n]) privately and verifiably draw a Gaussian random number η_i ∼ N(0, nσ²/(1 − ρ)), where noise with variance σ² on θ would be sufficient to achieve the desired privacy level. Then, securely sum θ̃ = (1/n) Σ_{i=1}^n (x_i + η_i) and publish θ̃.
Even if the corrupted parties collect all noise they contributed, η_coll = Σ_{i∈P_cor} η_i, and subtract it from θ̃ to obtain θ_coll = θ̃ − η_coll/n, there is still Gaussian noise with variance σ²/(1 − ρ) − ρσ²/(1 − ρ) = σ² left on their best estimate of θ. This strategy, which adopts some ideas from [26], works best for Gaussian noise, as the sum of Gaussian random variables is again Gaussian.
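The residual-variance argument can be checked numerically; the per-party variance nσ²/(1 − ρ) used below is our reading of the noise level each party must draw so that, after a ρ-fraction of corrupted parties subtract their own contributions, variance σ² remains on the average.

```python
import random

def residual_variance(n: int = 100, rho: float = 0.2, sigma: float = 1.0,
                      trials: int = 20000, seed: int = 7) -> float:
    """Simulate the noise left on the published average after the
    corrupted parties cancel their own Gaussian contributions."""
    rng = random.Random(seed)
    per_party_sd = (n * sigma**2 / (1.0 - rho)) ** 0.5
    honest = round(n * (1.0 - rho))    # only honest noise survives
    resid = [sum(rng.gauss(0.0, per_party_sd) for _ in range(honest)) / n
             for _ in range(trials)]
    m = sum(resid) / trials
    return sum((r - m) ** 2 for r in resid) / trials
```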
Protocol 10 (DP learning using hidden sampling). Let all parties P_i (with i ∈ [n]) represent their private numbers x_i as shared secrets. Let the parties next together verifiably draw a hidden random number η, i.e., a random number they obtain only as a shared secret. Finally, let them sum the secret shares and reveal θ̃ = (1/n) Σ_{i=1}^n x_i + η.

The advantage of this protocol is that no party sees η or parts of it, so it is impossible to get back towards the sensitive statistic θ. On the other hand, the full computation needs to be performed through multi-party computation, e.g., using shared secrets, which is clearly more expensive than the ZKPs needed in Protocol 9, especially as compressed Σ-protocols allow for ZKPs of size only logarithmic in the circuit size while calculations on secret shares have a linear communication cost.
As drawing random noise is a basic building block that must be performed repeatedly by secure federated differentially private machine learning algorithms, being able to draw from these probability distributions with low communication cost is essential for efficiency.
Issues of Finite Precision. The work of [42] shows the impact that finite precision approximation of continuous distributions can have on DP guarantees. As explained in [42,Sec. 5.2], these vulnerabilities can be overcome by appropriately adjusting the precision and truncating the outcome after the noise is added to private values. This can be achieved in our protocols by using the correct precision parameters and range proofs.
To overcome vulnerabilities of finite precision approximations, other lines of work have explored the use of discrete distributions [17,34]. These methods require in expectation comparable computational effort as our protocols. However, sampling from a discrete distribution requires in the worst case many iterations which results in longer zero-knowledge proofs.

CONCLUSION
We have presented novel methods for drawing random numbers in a verifiable way in public, private and hidden settings. We applied the ideas to the Laplace and Gaussian distributions, and evaluated several alternatives for sampling from the Gaussian distribution.
We see several interesting directions for future work. First, we hope to develop novel strategies to let our methods scale better when many random numbers are needed in the course of an algorithm. Second, we would like to develop new methods which allow for more efficient sampling in the hidden setting, where the random numbers are output as shared secrets. In particular, our current methods based on generic secret sharing techniques require multiple rounds of computation and communication; it may be possible to develop more efficient special-purpose strategies.

A COMPRESSED Σ-PROTOCOLS
In this appendix, we explain all necessary notions to understand compressed Σ-protocols [2]. In Appendix A.1, we explain the basic concepts of ZKP. Classic approaches to construct ZKP of linear relations are shown in Appendix A.2 and techniques to compress the communication cost in Appendix A.3. In Appendix A.4 we explain how linear proofs can be used to construct ZKP involving circuit computations. Finally, Appendix A.5 has an application for range proofs and Appendix A.6 discusses the costs of the techniques.

A.1 Zero Knowledge Proofs and Arguments
An interactive proof of knowledge (PoK) for an NP relation R is a protocol between a prover P and a verifier V in which P tries to prove to V that they know a witness w such that (a; w) ∈ R for a public statement a. At the end of the protocol, V either accepts or rejects the proof. We denote by (a; w) a member of a relation or an input of a protocol, using a semicolon to separate the public statement a from the private witness w. The tuple of all messages in a proof is called the conversation or transcript. Proofs may satisfy the following properties:
• Completeness: a proof is complete if V always accepts the proof when (a; w) ∈ R and P knows w.
• Soundness: a proof is sound if any prover whose proof for statement a is accepted by the verifier knows a valid witness w such that (a; w) ∈ R with overwhelming probability.
The notion of soundness we use is called witness extended emulation [38]. Proofs that are sound only if the prover is computationally bounded are also called arguments.
• Zero Knowledge: a proof is zero knowledge if its transcript reveals no or negligible information about the witness other than its validity.

A.2 Σ-Protocols for Linear Relations
We now show how to prove linear relations over secret committed values. Recall the definition of Pedersen vector commitments presented in Section 2.2, which defines the commitment domain Z_p, our underlying cryptographic group G and the vector of group elements g. As explained in Section 2.2, we reserve one of the components of g for randomness so that commitments are hiding. For L : Z_p^k → Z_p a linear function in Z_p, that is, L(x_1, . . . , x_k) = a_1x_1 + · · · + a_kx_k for coefficients a_1, . . . , a_k ∈ Z_p, a vector commitment P ∈ G and a value y ∈ Z_p, a prover P proves to know an opening x ∈ Z_p^k of P such that L(x) = y. We formally describe our linear relation by R_L = {(P, y; x) : g^x = P ∧ L(x) = y}. Note that in our case x has a coordinate reserved for randomness, so in valid relations the corresponding coefficient of L must be 0. However, in later auxiliary protocols we will not impose such a restriction. To provide a zero knowledge proof for R_L we use a family of zero knowledge proofs called Σ-protocols [21] and their compressed version proposed in [2]. They provide soundness under the Discrete Logarithm Assumption (DLA). Protocol Π_0 below describes a classic proof of R_L, originally stated for more general types of commitments defined in [21,22], which we instantiate for the Pedersen scheme.

Protocol Π_0(P, y; x):
(1) P computes: r ←_R Z_p^k, A = g^r, t = L(r)
(2) P sends to V: A, t
(3) V sends to P: c ←_R Z_p
(4) P sends to V: z = cx + r
(5) V: if g^z = AP^c and L(z) = cy + t then accept, else reject.

Theorem 2. Π_0 is a complete, sound and zero knowledge proof of R_L.
We provide an intuition of how these properties are obtained. Completeness follows directly from the homomorphic property. For soundness, consider a prover P* that by following Π_0 can produce an accepting transcript ((A, t), c_1, z_1) with significant probability. Then it can be shown that, also with non-negligible probability, by using P*'s strategy many times, another accepting transcript of the form ((A, t), c_2, z_2) can be produced, where the first message is equal in both transcripts and c_1 ≠ c_2. Now, since both transcripts are accepting, we have g^{z_1} = AP^{c_1} and g^{z_2} = AP^{c_2}, so P* can efficiently compute the witness x = (z_1 − z_2)/(c_1 − c_2) such that g^x = P. Acceptance also implies that L(z_1) = c_1y + t and L(z_2) = c_2y + t, and it follows that L(x) = y. Therefore x is a valid witness: either P* already knew it, or they can efficiently compute the discrete logarithms between components of g, which is a contradiction with the DLA.
Zero knowledge is obtained by showing that all information seen in the proof is random and does not depend on the secrets. By only knowing the statement (P, y) one can compute z′ ←_R Z_p^k, c′ ←_R Z_p, A′ = P^{−c′}g^{z′} and t′ = L(z′) − c′y, and the transcript ((A′, t′), c′, z′) has the same distribution as a conversation between an honest prover and an honest verifier. Note that, as it is computed in reverse, ((A′, t′), c′, z′) cannot be efficiently produced in the actual protocol by a dishonest prover. This reasoning only holds if the verifier is honest and generates its messages uniformly at random. To circumvent this, we implement our proofs by transforming them into non-interactive proofs using the strong Fiat-Shamir heuristic [8], where the verifier is replaced by a hash function, therefore without the need of involving a trusted party. The construction is secure in the Random Oracle Model [6].
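A toy instantiation of Π_0 makes the checks of Step 5 concrete; the tiny parameters and the omission of the hiding randomness slot are simplifications for illustration only.

```python
import random

# Toy instantiation of Pi_0 for {(P, y; x) : g^x = P, L(x) = y}, over
# the order-1289 subgroup of squares modulo the safe prime 2579.  All
# parameters are purely illustrative; a real deployment uses an
# elliptic-curve group of ~256-bit order and hiding commitments.
Q, P_ORD, K = 2579, 1289, 4
rng = random.Random(42)
g = [pow(rng.randrange(2, Q - 1), 2, Q) for _ in range(K)]  # generators

def commit(x):
    """Pedersen-style vector commitment g^x (no hiding slot shown)."""
    acc = 1
    for gi, xi in zip(g, x):
        acc = acc * pow(gi, xi, Q) % Q
    return acc

def prove(x, L):
    """Prover steps (1)-(4) of Pi_0; L is a coefficient vector."""
    r = [rng.randrange(P_ORD) for _ in range(K)]
    A = commit(r)
    t = sum(li * ri for li, ri in zip(L, r)) % P_ORD
    c = rng.randrange(P_ORD)        # challenge; Fiat-Shamir in practice
    z = [(c * xi + ri) % P_ORD for xi, ri in zip(x, r)]
    return A, t, c, z

def verify(P, y, L, proof):
    """Step (5): check g^z = A P^c and L(z) = c y + t."""
    A, t, c, z = proof
    return (commit(z) == A * pow(P, c, Q) % Q
            and sum(li * zi for li, zi in zip(L, z)) % P_ORD
            == (c * y + t) % P_ORD)
```

Completeness is visible directly: g^{cx+r} = (g^x)^c g^r and L(cx + r) = cL(x) + L(r).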

A.3 Compression Mechanism
Now we reduce the communication cost of Π_0 using ideas of compressed Σ-protocols [2], also present in [11,14]. The transfer in Π_0 is dominated by the prover's message in Step 4, with a size of k elements of Z_p. It can be reduced if, instead of sending z, P proves that (AP^c, cy + t; z) ∈ R_L, which would imply the condition tested in Step 5. Note that this proof does not need to be zero knowledge as z is originally revealed in Π_0. We first present Π_1, a proof of R_L that halves the communication cost by "folding" z before sending it. Next, we show how to use this protocol to reduce the cost of Π_0. Assuming that k is even, we define g_L = (g_1, . . . , g_{k/2}) ∈ G^{k/2} and g_R = (g_{(k/2)+1}, . . . , g_k) ∈ G^{k/2}, and analogously x_L, x_R ∈ Z_p^{k/2}, as well as L_L(a) = L(a, 0) and L_R(a) = L(0, a). We use an additional group element ĝ ∈ G generated in the same way as the components of g.
Protocol Π_1(P, y; x):
(1) P computes the cross terms A and B
(2) P sends to V: A, B
(3) V sends to P: c ←_R Z_p
(4) P sends to V: z = x_L + cx_R
(5) V: if (g_L^c ∗ g_R)^z ĝ^{cL_L(z)+L_R(z)} = A(Pĝ^{L(x)})^c B^{c²} then accept, else reject.

Π_1 is a complete and sound proof of R_L with half the communication of Π_0. The communication of Step 4 can be further reduced by applying Π_1 recursively until the size of z is sufficiently small. Let Π_B ⋄ Π_A be the interactive proof obtained by executing Π_A except for its last message and then executing Π_B. Now we can define Π_c = Π_1 ⋄ · · · ⋄ Π_1 ⋄ Π_0, where ⋄ is applied log₂(k) − 2 times.
Note that k is required to be a power of 2, but padding the vectors with 0's suffices to fix this. Presented in more detail, it is proven in Theorem 3 of [2] that Π_c is a complete, sound and zero-knowledge protocol for R_L. Completeness is straightforward, and for zero knowledge it is sufficient to see that Π_0 is already zero knowledge and that the rest of the protocol reveals only as much as Π_0. Soundness follows from similar ideas to those shown for Theorem 2.
Amortization techniques can be applied to prove many nullity checks, where the prover claims, for linear forms L_1, . . . , L_r, that L_i(x) = 0 for all i ∈ {1, . . . , r}. For that, V sends a random value ρ ←_R Z_p, and then P and V execute Π_c on input (P, Σ_{i=1}^{r} ρ^{i−1} L_i, 0; x). If L(x) = 0 for L = Σ_{i=1}^{r} ρ^{i−1} L_i, then L_i(x) = 0 for all i with overwhelming probability 1 − (r − 1)/p. Amortized nullity checks also hold when replacing the linear forms by affine forms Φ_1, . . . , Φ_r, where each one is the application of a linear form plus a constant. We denote this protocol by Π_N and its input by (P, (Φ_1, . . . , Φ_r); x). A prover can prove the opening of an affine map Φ : Z_p^k → Z_p^r to y = (y_1, . . . , y_r) ∈ Z_p^r by running Π_N on input (P, (Φ_1 − y_1, . . . , Φ_r − y_r); x), where Φ_1, . . . , Φ_r : Z_p^k → Z_p are the affine forms that compose Φ. These protocols send r − 1 more elements of Z_p than Π_c, which accounts for the size of y.
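The random-linear-combination trick behind the amortized nullity check can be sketched in the clear. The field, witness and linear forms below are illustrative:

```python
import random

p = 2**61 - 1                        # toy prime field standing in for Z_p

def L_eval(coeffs, x):
    return sum(c * v for c, v in zip(coeffs, x)) % p

x = [3, 1, 4, 1, 5]                  # secret witness
# Linear forms that all vanish on x: L_i(x) = 0.
forms = [[1, p - 3, 0, 0, 0],        # x_1 - 3*x_2
         [0, 1, 0, p - 1, 0],        # x_2 - x_4
         [1, 1, p - 1, 0, 0]]        # x_1 + x_2 - x_3
assert all(L_eval(f, x) == 0 for f in forms)

# One combined check: L = sum_i rho^{i-1} L_i for a random rho.
rho = random.randrange(p)
combined = [sum(pow(rho, i, p) * f[j] for i, f in enumerate(forms)) % p
            for j in range(len(x))]
assert L_eval(combined, x) == 0      # holds whenever every L_i(x) = 0

# A single non-vanishing form is caught except with probability (r - 1)/p.
bad = forms + [[1, 0, 0, 0, 0]]      # extra form with value x_1 = 3 != 0
combined_bad = [sum(pow(rho, i, p) * f[j] for i, f in enumerate(bad)) % p
                for j in range(len(x))]
assert L_eval(combined_bad, x) != 0
```

The soundness bound comes from the fact that Σ ρ^{i−1} L_i(x), viewed as a polynomial in ρ of degree r − 1, has at most r − 1 roots in Z_p.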

A.4 Proving Multiplications and Circuits
Now, we show the idea of [2] to prove multiplicative relations with only black-box access to Π_N. For a set of committed triplets (α_i, β_i, γ_i), i ∈ {1, . . . , m}, the prover claims that γ_i = α_i β_i for all i. The prover samples random polynomials f and g of degree m such that f(i) = α_i and g(i) = β_i for all i, and sets h = f·g. Note that γ = (h(1), . . . , h(m)).
(4) V sends to P: c ←_R Z_p \ {0, . . . , m} (5) P and V run Π_N to prove that f(c), g(c) and h(c) open to some points u, v and w respectively. This is possible by Lagrange interpolation: since sufficiently many evaluations of f, g and h are contained in the committed vector x, evaluating them anywhere in their domain amounts to applying an affine form to x (6) V: if uv = w then accept, else reject. As before, completeness is straightforward. Zero knowledge follows from the facts that f, g and h are random polynomials whose evaluations do not reveal information, and that Π_N is zero knowledge. If the multiplicative relation does not hold for all triplets, the probability that uv = w is negligible. From that and the soundness of Π_N, soundness is obtained. Now, let C : Z_p^k → Z_p^s be a circuit, i.e., a function that only contains addition and multiplication gates in Z_p. For a vector commitment P, we adapt the ideas of Π_M to construct a proof that P knows an opening x ∈ Z_p^k of P such that C(x) = 0. Suppose that C has m multiplication gates, which we enumerate from 1 to m. For i ∈ {1, . . . , m}, let α_i and β_i be the inputs of the ith multiplication gate and γ_i its output. Let α, β and γ be as defined in the multiplication protocol. It is not necessary to commit to α and β, as a commitment to them can be obtained from affine forms on the input (x, γ) which only depend on C. Similarly, the output of C can be computed from an affine map ω : Z_p^{k+2m} → Z_p^s that takes as input (x, γ). After committing to x and γ, the prover only needs to prove that the multiplication gates hold and that ω opens to 0. This can be done with amortized nullity checks. Let [a] be a hiding vector commitment of a, i.e., [a] = g^{(a,r)} where r ∈ Z_p is chosen at random.
(4) P sends to V: … (5) P and V run Π_N, where the linear forms are obtained by Lagrange interpolation as in Step 5 of Π_M (6) V: if z_1 z_2 = z_3 then accept, else reject. Note that [y], in addition to α and β, is an implicit commitment to γ = (h(1), . . . , h(m)). The properties of completeness, zero knowledge and soundness, which can be found in Theorem 4 of [2], follow from the same arguments as for the multiplication protocol.
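The polynomial check underlying Π_M can be sketched in the clear (commitments and Π_N omitted); the field size and the triplet values below are illustrative:

```python
import random

p = 2**61 - 1

def lagrange_eval(pts, c):
    """Evaluate at c the unique polynomial through pts = [(x_i, y_i)] over Z_p."""
    total = 0
    for i, (xi, yi) in enumerate(pts):
        num = den = 1
        for j, (xj, _) in enumerate(pts):
            if i != j:
                num = num * ((c - xj) % p) % p
                den = den * ((xi - xj) % p) % p
        total = (total + yi * num * pow(den, p - 2, p)) % p
    return total

m = 3
alpha = [random.randrange(p) for _ in range(m)]
beta  = [random.randrange(p) for _ in range(m)]
gamma = [a * b % p for a, b in zip(alpha, beta)]   # claimed products

# f and g interpolate the triplets at 1..m, plus a random value at 0 for hiding.
f_pts = [(0, random.randrange(p))] + [(i + 1, alpha[i]) for i in range(m)]
g_pts = [(0, random.randrange(p))] + [(i + 1, beta[i]) for i in range(m)]

# h = f*g has degree 2m, so it is determined by its values at 2m + 1 points;
# in particular h(i) = gamma_i for i in 1..m.
h_pts = [(t, lagrange_eval(f_pts, t) * lagrange_eval(g_pts, t) % p)
         for t in range(2 * m + 1)]

# Verifier's check at a random challenge c outside {0, ..., m}.
c = random.randrange(m + 1, p)
u = lagrange_eval(f_pts, c)
v = lagrange_eval(g_pts, c)
w = lagrange_eval(h_pts, c)
assert u * v % p == w
```

If some γ_i ≠ α_i β_i, then h ≠ f·g as polynomials and the check u·v = w fails for all but at most 2m challenges.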

A.5 Range Proofs
A straightforward application of circuit proofs are range proofs: for a secret x ∈ Z_p and an integer k < log_2(p), prove that x ∈ [0, 2^k). For that, P commits to the first k bits b_1, . . . , b_k of x. Then, for b = (b_1, . . . , b_k), a circuit proof for a circuit C_Ra whose multiplication gates compute b_i(b_i − 1) for all i implies the range constraint. Note that in C_Ra the outputs of the multiplication gates are 0. Therefore, in protocol Π_cs, it is not necessary to include the elements h(1), . . . , h(k) in y, which reduces the proof cost.
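The witness side of such a range circuit can be sketched as follows; the concrete x and k are illustrative:

```python
p = 2**61 - 1

def range_witness(x, k):
    """Bit-decompose x and return the outputs of C_Ra's multiplication gates."""
    bits = [(x >> i) & 1 for i in range(k)]
    gates = [b * (b - 1) % p for b in bits]   # b_i * (b_i - 1); 0 iff b_i is a bit
    return bits, gates

x, k = 2023, 11                               # claim: x in [0, 2^11)
bits, gates = range_witness(x, k)
assert all(g == 0 for g in gates)             # every multiplication gate outputs 0
assert sum(b << i for i, b in enumerate(bits)) == x   # the bits recombine to x
```

The two assertions mirror the two conditions the circuit proof enforces: each committed b_i is a bit, and the bits sum (with powers of 2, an affine form) to x.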

A.6 Cost of Proofs
We now briefly summarize the communication and computational costs of the proofs. As previously stated in this appendix, k is the number of inputs and m the number of multiplication gates of a circuit. Theorems 3 and 4 of [2] give the communication costs of Π_c and Π_cs respectively. As we use versions of the protocols transformed by the Fiat-Shamir heuristic, the verifier does not send any messages. Therefore, the proof sizes are 2⌈log_2(k + 1)⌉ elements of G and 3 elements of Z_p for Π_c, and 2⌈log_2(k + 2m + 4)⌉ − 1 elements of G and 6 elements of Z_p for Π_cs.
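The stated proof sizes can be written as two small helper functions (returning element counts, not bytes):

```python
from math import ceil, log2

def proof_size_pi_c(k):
    """(elements of G, elements of Z_p) of a Fiat-Shamir Π_c proof."""
    return 2 * ceil(log2(k + 1)), 3

def proof_size_pi_cs(k, m):
    """(elements of G, elements of Z_p) of a Fiat-Shamir Π_cs proof."""
    return 2 * ceil(log2(k + 2 * m + 4)) - 1, 6

assert proof_size_pi_c(4) == (6, 3)        # 2*ceil(log2(5)) = 6
assert proof_size_pi_cs(4, 3) == (7, 6)    # 2*ceil(log2(14)) - 1 = 7
```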
For the computational cost, we count the number of group exponentiations in G (GEX), as they dominate the work. In protocol Π_0, P performs k GEX to compute A, and V performs k + 1 GEX for the verification of Step 5. In Π_1, P performs k + 2 GEX to compute A and B, and V performs k/2 + 4 GEX for the verification check. Π_c is a composition of one instance of Π_0 and µ = ⌈log_2(k)⌉ − 2 instances of Π_1; after the first Π_1 proof, k halves at each instance of Π_1. Additionally, P and V have to compute g′ = g_L^c * g_R after each Π_1 to update the parameters of the following sub-protocol. V can skip every intermediate verification check except the last one, which requires a constant number of GEX; in total, P performs a number of GEX linear in k. In Π_cs, P is required to compute a (hiding) commitment of y ∈ Z_p^{k+2m+3}, which costs l = k + 2m + 4 GEX. Then P and V engage in Π_N for an affine form of l inputs. The final costs for Π_cs are then 5k + 8m + 2⌈log_2(k + 2m + 4)⌉ + 6 GEX for P and k + 2m + 2⌈log_2(k + 2m + 4)⌉ − 1 GEX for V. For a proof of membership of the range [0, 2^k), as discussed in Appendix A.5, h(1), . . . , h(k) are not included in y, which then has 2k + 4 elements. The costs are 9k + 2⌈log_2(2k + 5)⌉ + 11 GEX for P and 2k + 2⌈log_2(2k + 5)⌉ GEX for V.
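The closed-form GEX counts above translate directly into code:

```python
from math import ceil, log2

def gex_pi_cs(k, m):
    """(prover, verifier) GEX counts for Pi_cs, as stated above."""
    c = 2 * ceil(log2(k + 2 * m + 4))
    return 5 * k + 8 * m + c + 6, k + 2 * m + c - 1

def gex_range(k):
    """(prover, verifier) GEX counts for a range proof on [0, 2^k)."""
    c = 2 * ceil(log2(2 * k + 5))
    return 9 * k + c + 11, 2 * k + c

assert gex_pi_cs(4, 3) == (58, 17)    # c = 2*ceil(log2(14)) = 8
assert gex_range(8) == (93, 26)       # c = 2*ceil(log2(21)) = 10
```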
We apply the same optimization used for range proofs to all of our circuits: multiplication gates whose outputs are expected to equal 0 are not included in y. Therefore, for circuits with k inputs, m multiplication gates, and m_0 multiplication gates that will be equal to 0, the corresponding elements of h are omitted from y and the costs above decrease accordingly.

B SECURITY OF OUR PROTOCOLS
In this section, we prove the security of the protocols presented in the main paper. In the sequel, we will often restate the protocols in more detail and more formally. We consider a set of n parties P = {P_1, . . . , P_n}. We denote by P_−i the set of all parties except P_i, i.e., P_−i = P \ {P_i}. We assume that a subset of parties P_cor ⊂ P is corrupted and controlled by an adversary A. The set P_cor of corrupted parties is static, i.e., it does not change after the beginning of the execution.
For the description of our protocols, we denote the fact that a party A sends a message M to a party B by "A → B: M". Recall that we use a bulletin board for communication. Among other things, this means that when a protocol contains a broadcast instruction, a single message is sent from one party to all others; in practice, the party sends it to the bulletin board, which forwards it to all other parties. We will also use the hiding vector Pedersen commitments defined in Section 2.2. Recall the finite groups Z_p and G defined therein. For any integer k > 0, we denote by Com(x; r) ∈ G the commitment of the k-dimensional vector x ∈ Z_p^k with randomness r ∈ Z_p.
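A minimal sketch of such a commitment in a toy group, assuming (as is standard for Pedersen commitments, though Section 2.2 may differ in details) a dedicated generator h for the randomness:

```python
import random

q, p = 23, 11                 # toy order-p subgroup of Z_q^* (q = 2p + 1)
h = 8                         # base for the randomness r (assumed convention)
gens = [2, 3, 4]              # one generator per component of x (k = 3)

def com(x, r):
    """Com(x; r) = h^r * prod_i g_i^{x_i} mod q (Pedersen vector commitment)."""
    out = pow(h, r % p, q)
    for gi, xi in zip(gens, x):
        out = out * pow(gi, xi % p, q) % q
    return out

x, r = [7, 2, 9], random.randrange(p)
C = com(x, r)
assert C == com(x, r)                          # reopening is deterministic
assert com([1, 0, 0], 0) == 2                  # g_1 alone

# Additively homomorphic: Com(x; r) * Com(y; s) = Com(x + y; r + s).
y, s = [1, 1, 1], random.randrange(p)
lhs = com(x, r) * com(y, s) % q
rhs = com([xi + yi for xi, yi in zip(x, y)], r + s)
assert lhs == rhs
```

The homomorphic property checked at the end is what the protocols below exploit when adding committed values.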
We describe our security framework in Appendix B.1. In Appendix B.2, we describe our model of compressed Σ-protocols in our security analysis. In appendices B.3 and B.4 we prove the security of Protocols 1 and 2 respectively. We conclude by discussing the security of our protocols for public and private draws from some other distributions in Appendix B.5.

B.1 Security Definitions
We prove security in the simulation paradigm, using the model of malicious security with identifiable abort [33] in the stand-alone setting [29]. Hence, we assume parties are able to detect malicious actions and can in such cases abort the protocol. Deterrence measures may be in place to discourage malicious behavior, as cheaters can be identified. In fact, unless parties stop participating, our protocols either complete successfully or abort while identifying a specific party as a cheater.
We start by introducing the key concepts of multiparty computation in the model of security with identifiable abort in the stand-alone setting (see [33, App. B of the full version]). A multiparty computation between our n parties in P is a protocol that computes a stochastic process F : ({0, 1}*)^n → ({0, 1}*)^n, also called an ideal functionality, where for all i ∈ {1, . . . , n} the ith components of the input and output of F correspond to the private input and output of party P_i.
In the simulation paradigm, a multiparty protocol securely computes an ideal functionality in the presence of malicious adversaries if any possible malicious behavior in the protocol caused by colluding malicious parties is no more harmful than what such parties could cause in the ideal model defined below.
Ideal Model. In this model, it is assumed that there exists a trusted party that computes F. A malicious adversary S controls a set of corrupted parties P_cor. The ideal execution comprises 7 phases and goes as follows:
(1) Inputs: Each party P_i receives a private input u_i. Additionally, S has an auxiliary input u* which represents its extra knowledge beyond its regular input.
(2) Send inputs to the trusted party: All honest parties send their input to the trusted party, while parties controlled by S might deviate and send whatever S wishes.
(3) Early abort or corrupted input: S can send abort_i instead of a valid input, which means that some corrupted party P_i aborted early or sent a corrupted input. In that case, the trusted party sends abort_i (choosing the index i deterministically if several parties aborted or sent corrupted inputs) to all honest parties and halts.
(4) Detect cheating parties: During the execution, S can send abort_i to the trusted party, which means that a corrupted party P_i attempted to cheat. In that case, the trusted party sends abort_i to all parties (i.e., the attempt at cheating is detected).
(5) Trusted party answers the adversary: If no abort_i is sent, then the trusted party sends to S the outputs of F for the corrupted parties. After receiving them, S can send abort_i to the trusted party or instruct it to continue.
(6) Trusted party answers the honest parties: If S instructed the trusted party to continue, then the latter sends their outputs of F to the honest parties.
(7) Output: The honest parties always output what the trusted party sent to them. S outputs an arbitrary computable function of the inputs {u_i}_{i ∈ P_cor}, the auxiliary input u* and the messages obtained from the trusted party.
Let ū = (u_1, . . . , u_n) be the vector of inputs of all parties and u* the auxiliary input of S. Recall that λ is the security parameter.
We denote by ideal_{F,S(u*),P_cor}(ū, λ) the vector of outputs of the parties in the above execution.
The Real Model. The real model of an n-party protocol Π describes its execution in the presence of a non-uniform probabilistic polynomial-time adversary A that corrupts a set of parties P_cor. Parties in P \ P_cor behave as described by Π. We denote by real_{Π,A(u*)}(ū, λ) the vector of outputs of the parties in the real execution of Π with input ū and where A has auxiliary input u*.
We formalize the notion of security with identifiable abort [33, Def. 16 of the full version] below.
Hybrid Model. In addition to the security definition, the simulation paradigm provides tools to prove the security of protocols which use sub-protocols already known to be secure. Given functionalities F_1, . . . , F_k, where k is polynomial in λ, one can define a protocol in which parties can both send messages to each other and place "ideal calls" to a trusted party that computes F_i for i ∈ {1, . . . , k}. In these ideal calls, parties send their input to the trusted party and wait for the output of F_i. However, (1) they cannot send any messages to each other after invoking an ideal functionality and before its response is returned by the trusted party, and (2) functionalities cannot be called concurrently. In other words, functionalities can only be composed sequentially with all other interactions. We call this model the (F_1, . . . , F_k)-hybrid model. When we describe a protocol in the (F_1, . . . , F_k)-hybrid model, we say that F_1, . . . , F_k are hybrid functionalities.
Sequential Composition. Consider the protocol Π and functionalities F_1, . . . , F_k as defined above, and let ρ_1, . . . , ρ_k be protocols. We define the protocol Π^{ρ_1,...,ρ_k} to behave exactly as Π in its real messages, but for all i ∈ {1, . . . , k} each ideal call to F_i is replaced by the execution of protocol ρ_i. We now state conditions under which sequential composition is secure.
We note that the model of [33] is defined for more general types of composition. Nevertheless, it is compatible with the definition described above.
Assumptions on Adversaries. We can see A as a deterministic algorithm with a special input: a random tape of uniformly distributed bits. In the ideal model, we can also rewind A to a previous state of the execution, in which case the random tape rewinds as well.

B.2 Compressed Σ-Protocols as Ideal Functionalities
We will use compressed Σ-protocols implemented with the Fiat-Shamir heuristic, which are proven secure in the random oracle model [48]. They are secure against malicious provers (by definition), and there is no interaction with malicious verifiers. Therefore, we can consider them secure and abstract them as hybrid functionalities. In particular, when we write "P → F_Σ^R : (x; w)" and "F_Σ^R → O : [b, x′]", we mean that F_Σ^R gets as input from party P the public data x and secret witness w, gets the empty string as input from all other relevant parties, and returns to all parties in O the same pair [b, x′], where x′ is the data provided as input by P and b is 1 if (x; w) ∈ R (the proof succeeds) and 0 otherwise.
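This abstraction can be sketched as a plain function; the relation below (knowledge of a discrete logarithm) is purely illustrative:

```python
def f_sigma(relation, x, w):
    """Ideal ZK functionality F_Sigma^R: broadcast [b, x] with b = 1 iff (x; w) in R.
    The witness w never leaves the functionality."""
    return (1 if relation(x, w) else 0, x)

# Illustrative relation: x = (g, q, P) and w is the discrete log of P base g.
R_dlog = lambda x, w: pow(x[0], w, x[1]) == x[2]
assert f_sigma(R_dlog, (2, 23, 8), 3) == (1, (2, 23, 8))   # 2^3 = 8 mod 23
assert f_sigma(R_dlog, (2, 23, 8), 4)[0] == 0              # 2^4 = 16 != 8
```

In the security proofs below, the simulator replaces this functionality's answers, which is exactly what makes the abstraction convenient.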

B.3 Proof of Protocol 1
Let U be the random variable uniformly distributed over the interval [0, L) for L ≤ p. We consider the ideal functionality F_p1 that takes as input from each of the n parties an empty string and outputs to every party the same uniformly distributed sample of U.
Protocol. The protocol Π_p1 is given below.
Protocol Π_p1: Security parameter: λ. Hybrid functionalities: F_Σ^{R_1} and F_Σ^{R_2} are zero-knowledge proofs of the relations R_1 and R_2 respectively. Protocol: (1) For i = 1 . . . n: …, else detect P_i as a cheater and abort. (2) For i = 1 . . . n: …, else detect P_i as a cheater and abort. (3) For i = 1 . . . n: …; output Σ_{i=1}^{n} x′_i mod L.
We state the security of protocol Π_p1 in the theorem below.
Theorem 5 (Security of Π_p1). Let Com be a computationally binding and perfectly hiding commitment scheme, and let F_Σ^{R_1} and F_Σ^{R_2} be secure multiparty functionalities of computationally sound zero-knowledge proofs of the relations R_1 and R_2 respectively. Then, protocol Π_p1 securely computes F_p1 in the (F_Σ^{R_1}, F_Σ^{R_2})-hybrid model with identifiable abort if at least one party is honest.
Proof. It is clear that Π_p1 securely computes F_p1 in the honest-but-curious setting.
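To make the honest-but-curious case concrete, here is a minimal honest execution of the commit-then-reveal draw; a hash commitment stands in for the Pedersen commitment, and the zero-knowledge proofs of Π_p1 are omitted:

```python
import random, hashlib

L = 1000                          # draw uniformly from [0, L)
n = 4                             # number of parties

def commit(x, r):
    # Hash commitment as a stand-in for the Pedersen commitment of the paper.
    return hashlib.sha256(f"{x}|{r}".encode()).hexdigest()

# Step 1: every party P_i broadcasts a commitment to its share x_i.
shares = [(random.randrange(L), random.getrandbits(128)) for _ in range(n)]
board = [commit(x, r) for x, r in shares]

# Step 2: parties open their commitments; a mismatching opening identifies
# the cheater (here all parties are honest, so every check passes).
for (x, r), c in zip(shares, board):
    assert commit(x, r) == c

# Step 3: the output is the sum of the opened shares mod L; it is uniform
# as long as at least one share was drawn uniformly at random.
z = sum(x for x, _ in shares) % L
assert 0 <= z < L
```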
Below, we will use A to denote a non-uniform probabilistic polynomial-time (PPT) adversary that controls the corrupted parties.
In Figure 4, we define four very similar algorithms S_v, v ∈ {0, 1, 2, 3}, where S_0 is a simulator, and we will prove that their ideal-execution outputs are indistinguishable from the hybrid-execution output. Each algorithm S_v internally runs a copy of A, which we denote by A_v. Without loss of generality, we can assume that at the points where all parties send messages, the messages of the honest parties arrive first, as everything an adversary can infer from the messages of a subset of the honest parties it can also infer from the messages of all honest parties.
To see that the simulation of S_0 is indistinguishable from the (F_Σ^{R_1}, F_Σ^{R_2})-hybrid model with adversary A, we define our simulators such that:
• The outputs of S_0 and S_1 are identically distributed, because their only difference is the moment at which z is chosen uniformly at random (and hence independently from the other variables).
• The outputs of S_1 and S_2 are identically distributed; the only difference is that S_1 first draws z in line 28 and then computes x_i′ from it, while S_2 first draws x_i′ and then computes z from it.
• The outputs of S_2 and S_3 are indistinguishable. All inputs to A are the same, except for a commitment Com(0, r_i′) versus a commitment Com(x_i′, r_i′). As the commitment scheme is hiding and r_i′ is chosen independently, the distributions of these commitments are indistinguishable, and for any A_3 getting input Com(x_i′, r_i′) there is an A_2 producing a computationally indistinguishable output.
• The output of S_3 is identically distributed as the output of Π_p1 in the hybrid model.
We can then similarly construct a protocol Π^{(k)}_p1 that draws a vector z uniformly distributed in [0, L)^k.
In this protocol we will use a pseudo-random number generator as defined in Section 2.6. We use a similar but equivalent definition which is more convenient in our proof: for some polynomial p(·), this is a function G : {0, 1}^q → [0, L)^{p(q)} such that for any randomized polynomial-time algorithm A, |Pr[A(G(s)) = 1] − Pr[A(u) = 1]| ≤ µ(q), where s is uniform over {0, 1}^q, u is uniform over [0, L)^{p(q)} and µ is a negligible function. We choose q sufficiently large such that p(q) ≥ k⌈log(L)⌉ and µ(q) is sufficiently small.
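A hash-counter construction is one common way to realize such an expansion in practice; the following sketch is a stand-in for G, not the paper's construction (rejection sampling is used to avoid modulo bias):

```python
import hashlib

def expand(seed: bytes, k: int, L: int):
    """Deterministically expand a short seed into k values in [0, L)."""
    out, ctr = [], 0
    bound = (2**256 // L) * L          # largest multiple of L below 2^256
    while len(out) < k:
        h = hashlib.sha256(seed + ctr.to_bytes(8, "big")).digest()
        ctr += 1
        v = int.from_bytes(h, "big")
        if v < bound:                   # reject to keep each output uniform
            out.append(v % L)
    return out

vals = expand(b"\x00" * 16, 5, 1000)
assert vals == expand(b"\x00" * 16, 5, 1000)   # deterministic in the seed
assert len(vals) == 5 and all(0 <= v < 1000 for v in vals)
```

Determinism in the seed is the key property: every party can recompute the same vector from the jointly drawn seed without further interaction.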
We describe this protocol in the hybrid model using the ideal functionality F_{p1[0,2^q)}, which is a variant of the F_p1 functionality introduced above where L is set to 2^q. The protocol Π^{(k)}_p1 is described below: the parties perform an ideal call to F_{p1[0,2^q)} to obtain a public uniform seed, which each party then expands with G into the output vector in [0, L)^k. (Figure 4: the simulators S_v, v ∈ {0, 1, 2, 3}, of adversary A in protocol Π_p1.)
Theorem 6 (Security of Π^{(k)}_p1). Let F_{p1[0,2^q)} be a secure multiparty functionality and G be a PRG as defined above. Then, protocol Π^{(k)}_p1 securely computes F^{(k)}_p1 in the F_{p1[0,2^q)}-hybrid model with identifiable abort if at least one party is honest.
Proof. It follows from the definition of a pseudo-random number generator that the output of the protocol is indistinguishable from the output of the ideal functionality when all parties are honest-but-curious.
The protocol consists of a call to a secure sub-protocol followed by a public computation which each party can perform locally without any interaction. It is hence easy to see that the protocol is secure. □

B.4 Proof of Protocol 2
Now we prove the security of Protocol 2, which performs a private uniform draw. Without loss of generality, we assume that the drawn sample should be private to P_1. The ideal functionality F_p2 outputs the pair (z, r_z) to P_1 and C_z = Com(z, r_z) to all parties, where z is uniformly distributed in [0, L)^k and r_z is uniformly distributed in Z_p (as a consequence of the distributions of z and r_z, C_z is uniformly distributed over G). The pair (z, r_z) is private to P_1 while C_z is known by all parties. We will describe a detailed version of Protocol 2 in the hybrid model. Our hybrid functionalities are F_p1 and F^{(k)}_p1, which correspond to Protocol 1, and F_Σ^{R_m}, which, for some modulus L ≤ p, is the ideal functionality of a zero-knowledge proof for the relation R_m = {(y ∈ [0, L)^k, r_y ∈ Z_p, C_x ∈ G, C_z ∈ G; x ∈ [0, L)^k, r_x ∈ Z_p, z ∈ [0, L)^k, r_z ∈ Z_p) : C_x = Com(x, r_x) ∧ C_z = Com(z, r_z) ∧ x ∈ [0, L)^k ∧ z = x + y mod L ∧ r_z = r_x + r_y mod p}.
Such a proof can be performed by applying the techniques described in Section 6.1. In Figure 5, we define the protocol Π_p2, which describes Protocol 2 in more detail.
Theorem 7. Let Com be a computationally binding and perfectly hiding commitment scheme and let F_Σ^{R_m} be a secure multiparty functionality of a computationally sound zero-knowledge proof of the relation R_m. Let F^{(k)}_p1 and F_{p1[0,2^q)} be secure multiparty functionalities as defined above. Then, protocol Π_p2 securely computes F_p2 in the (F^{(k)}_p1, F_p1Zp, F_Σ^{R_m})-hybrid model with identifiable abort if at least one party is honest.
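The arithmetic that relation R_m enforces can be sketched as follows (commitments and the zero-knowledge proof are omitted; the values of L, p and k are illustrative):

```python
import random

L, p, k = 1000, 2**61 - 1, 3

# P_1's secret draw (possibly adversarial) and its commitment randomness.
x = [random.randrange(L) for _ in range(k)]
r_x = random.randrange(p)

# Public values obtained through the ideal calls to F_p1^{(k)} and F_p1Zp.
y = [random.randrange(L) for _ in range(k)]
r_y = random.randrange(p)

# P_1's private output, as in R_m: z = x + y mod L and r_z = r_x + r_y mod p.
z = [(xi + yi) % L for xi, yi in zip(x, y)]
r_z = (r_x + r_y) % p

# z is uniform in [0, L)^k regardless of how x was chosen, because y is
# uniform over [0, L)^k and independent of x.
assert all(0 <= zi < L for zi in z) and 0 <= r_z < p
```

This is the one-time-pad-style argument the proof below relies on: shifting an adversarial x by an independent uniform y yields a uniform z.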
Proof. First note that, except for P 1 , the other parties are only supposed to perform ideal calls and do not interact with each other. We will consider first the most difficult case in which P 1 is among the corrupted parties controlled by A. In Figure 6, we define our adversary S that simulates the output of A in the ideal model.
We show that the ideal and hybrid execution outputs are indistinguishable. We first start by analyzing the view of A in the hybrid protocol and when interacting with S.
• Since A has not seen any other message, the first commitment C_x of A is the same in both the hybrid and ideal executions.
• y and r_y have the same distribution in the hybrid and ideal model, as they are part of an ideal call. Therefore, the C_z broadcast by A also follows the same distribution in the hybrid and ideal model.
• If A cheats in C_z (i.e., C_z is not a commitment according to relation R_m), it gets caught in both the hybrid and ideal executions.
• Before rewinding A, S recovers x and r_x such that C_x = Com(x, r_x) with overwhelming probability (this is because A cannot fake the zero-knowledge proof in F_Σ^{R_m} except with negligible probability).
• In the ideal world, S sets y and r_y such that z = x + y mod L and r_z = r_x + r_y mod p, where x and r_x are chosen by A, z and y are uniformly distributed in [0, L)^k, and r_z and r_y are uniformly distributed in Z_p. This is the same distribution as in the hybrid world.
• Now A broadcasts C″_z and either is detected as a cheater or C″_z = Com(z, r_z) with overwhelming probability, in both the hybrid and ideal executions.
Up to this point, from the view of A, the transcript simulated by S in the ideal execution and the transcript in the hybrid execution are indistinguishable. Since A runs in polynomial time, its output must be indistinguishable in both executions.
Now we analyze the case where P_1 is among the honest parties. In particular, consider the worst case where all other parties are malicious and P_1 is the only honest party. This case is still easy, since the only interaction of the malicious parties is to perform ideal calls to the hybrid functionalities. This procedure is secure as it is essentially a secure drawing of a uniformly distributed random number, as discussed in the previous sections, followed by a private but verifiable single-party post-processing. □
Hybrid functionalities:
• F^{(k)}_p1 publicly draws a random number from [0, L)^k.
• F_p1Zp is a variant of F_p1 that draws a random number from Z_p rather than from [0, L).
• F_Σ^{R_m} performs a zero-knowledge proof for the relation R_m defined above.
4: Protocol:
5: P_1:
6: draw x ∈ [0, L)^k and r_x ∈ Z_p uniformly at random
7: compute C_x = Com(x, r_x)
8: broadcast C_x
9: All parties in P collaboratively:
10: call F^{(k)}_p1 to obtain a public y uniformly distributed over [0, L)^k
11: call F_p1Zp to obtain a public r_y uniformly distributed over Z_p
12: P_1:
13: compute z = x + y mod L and r_z = r_x + r_y mod p
14: compute C_z = Com(z, r_z)
15: broadcast C_z
16: Parties perform an ideal call to F_Σ^{R_m}:
17: on input (y, r_y, C_x, C_z; x, r_x, z, r_z)
18: if the proof fails, detect P_1 as a cheater and abort; otherwise continue the execution
20: P_1: output (z, r_z)
21: P_−1: output C_z
Figure 5: Protocol Π_p2, a detailed version of Protocol 2.
6: S: continue the run of A until after the ideal call to F_p1Zp (line 11 in Π_p2)
7: S: draw a random value r_y ∈ Z_p
8: S → A: r_y (as if it came from F_p1Zp)
9: A → S: C_z
10: S: continue the run of A; if A cheats in C_z, detect P_1 as a cheater
13: S: invoke the trusted party computing F_p2
14: F_p2 returns ((z, r_z) ∈ [0, L)^k × Z_p, (C_z, . . . , C_z) ∈ G^{|P_cor|−1}) to S and (C_z, . . . , C_z) ∈ G^{|P\P_cor|} to the honest parties
15: S sets y = z − x mod L and r_y = r_z − r_x mod p
16: S rewinds A to before the invocation of F^{(k)}_p1
17: S continues the internal run of A, which performs ideal calls to F^{(k)}_p1 and F_p1Zp
18: S → A: y, r_y (as if they were sent by F^{(k)}_p1 and F_p1Zp)
19: A → S: C″_z; if C″_z ≠ C_z, detect P_1 as a cheater
20: S continues to run A; if A cheats in C_z, detect P_1 as a cheater
23: output whatever A outputs
Figure 6: Simulator of adversary A in protocol Π_p2, for the case when P_1 is corrupted.