DeVoS: Deniable Yet Verifiable Vote Updating

Internet voting systems are supposed to meet the same high standards as traditional paper-based systems when used in real political elections: freedom of choice, universal and equal suffrage, secrecy of the ballot, and independent verifiability of the election result. Although numerous Internet voting systems have been proposed to achieve these challenging goals simultaneously, few come close in reality. We propose a novel publicly verifiable and practically efficient Internet voting system, DeVoS, that advances the state of the art. The main feature of DeVoS is its ability to protect voters’ freedom of choice in several dimensions. First, voters in DeVoS can intuitively update their votes in a way that is deniable to observers but verifiable by the voters; in this way voters can secretly overwrite potentially coerced votes. Second, in addition to (basic) vote privacy, DeVoS also guarantees strong participation privacy by end-to-end hiding which voters have submitted ballots and which have not. Finally, DeVoS is fully compatible with Perfectly Private Audit Trail, a state-of-the-art Internet voting protocol with practical everlasting privacy. In combination, DeVoS offers a new way to secure free Internet elections with strong and long-term privacy properties.


INTRODUCTION
Determining the will of the people is the essence of free elections. However, this freedom must be protected from various threats, in particular the following. Electoral authorities can be influenced by the government or other powers to covertly destroy legitimate ballots or stuff illegitimate ones. People, especially members of marginalized groups, can be intimidated to discourage them from voting. Wealthy actors can buy people's votes to win their favor. Because such threats are real, the United Nations (UN) [45] International Human Rights Standards on Elections require that political elections be independently verifiable (§127), that voters' ballots be secret (§16), and that voters be protected from any form of coercion or compulsion to disclose how they intend to vote or how they have voted (§92).
These basic requirements for free elections apply to all types of voting systems, including those that allow voters to submit digital ballots over the Internet. Such systems have been used for real political elections in Australia, Estonia, France, Norway, and Switzerland, to name but a few.
State of practice. To our knowledge, however, there are only two Internet voting systems used for political elections that claim to meet at least some of these basic requirements. But even these two systems, the ones used in Estonia and Switzerland, do not meet these requirements simultaneously or to a sufficient degree, respectively, as we explain next.
The Estonian Internet voting system IVXV [32] was designed to guarantee the privacy of the vote, and in order to avoid possible coercion, IVXV offers voters the possibility to update their previously submitted ballots in a deniable way. However, the digital ballot trail in IVXV is only partially verifiable, and even some of the supposedly verifiable parts have been shown not to be [46]. Furthermore, IVXV does not provide vote privacy under the assumptions originally stated by its developers [43].
The Internet voting system used for political elections in Switzerland has been revised after several serious security problems were discovered [27]. Various auditors from academia and industry have analyzed the revised version of the Swiss Internet voting system and have basically confirmed that it provides the intended security features [22]: public verifiability and privacy of votes. However, this system was not designed to protect against malicious actors who want to influence elections by intimidating voters or buying their votes.
Although protection against any form of coercion or compulsion, as demanded by the UN (see above), appears to be practically unattainable for the entire electorate, both for Internet and paper-based voting systems, this state of practice is disappointing. Malicious voter influence should be made as inefficient as possible to reduce its impact on the final election outcome, without compromising verifiability. Fortunately, as we recall below, the state of research on this challenging problem is better than the state of practice, albeit with significant room for improvement.

State of research.
For more than two decades, researchers have been searching for technical solutions to prevent voters from being influenced in their free formation of opinion when casting their votes in secure Internet voting systems. It has proved extremely difficult, if not impossible, to find a universal solution. In fact, the academic literature on the subject is very extensive and contains many different proposals, based on different assumptions, with different approaches and different objectives.
Some of these proposals prevent voters' local data, generated during electronic ballot casting, from inadvertently serving as evidence of their vote. This property is called receipt-freeness, and relevant work in this area includes, among others, [8,36]. Other work, such as Selene [48], addresses the problem of how voters can use personal codes to verify that the vote they cast is indeed included in the final result, without these codes being able to serve as proof of the voter's vote to third parties. What these papers have in common is the assumption that voters are honest and hence do not deviate from their prescribed program to produce evidence that can serve as convincing proof to a coercer or vote buyer.
In this paper, however, we are interested in exactly this problem: how to prevent a voter from being able to convince anyone of her voting behavior, even if she actively tries to do so, e.g., because she is coerced or wants to sell her vote? Many papers have addressed this challenge. For example, in BeleniosRF [5], even those voters who try to sell their vote by using random coins chosen by a potential vote buyer do not obtain any cryptographic proof of their choices. In protocols such as JCJ [34]/Civitas [9], coerced voters can choose fake credentials that the coercer cannot distinguish from the real ones, but which ensure that the coerced votes are secretly removed by the tellers. In other schemes, such as VoteAgain [31,41], voters can overwrite their potentially coerced votes in a way that is deniable to the coercer but verifiable by the voters. We will discuss the features of these and other works in more detail in Section 2.
Beyond the individual pros and cons of the various proposals, a recent systematic review has identified a global problem [28]: there is no secure and efficient Internet voting system that can simultaneously mitigate the effects of malicious influence and keep votes secret in the long term. In fact, in all state-of-the-art systems that limit coercion or vote buying, such as BeleniosRF, JCJ/Civitas, or VoteAgain, votes are encrypted under the tellers' public key and the resulting ciphertexts are then posted on a bulletin board for verification purposes. Since there can be no unconditional secrecy in a public-key setting, these votes are secret only under certain hardness assumptions (e.g., Decisional Diffie-Hellman in the case of ElGamal PKE). However, since any observer can read these encrypted votes and store them for any length of time, secrecy could be undermined retrospectively by new cryptanalytic methods or more powerful computers (e.g., quantum computers).
This state of research is unsatisfactory because many elections require confidentiality to be guaranteed not just for some time but for several decades, because voters may have to fear negative consequences even if their individual choices are revealed, say, 10 or 20 years after the election. Instead, electronic voting systems should provide everlasting privacy [28], i.e., privacy without relying on any hardness assumption. However, it is far from obvious how existing systems (e.g., Civitas, BeleniosRF, or VoteAgain) with strong privacy features can be modified to provide everlasting privacy and still be practically efficient. This is because these systems use specific cryptographic primitives (e.g., re-randomizable signatures as in BeleniosRF) or their overall protocols are highly complex (as in Civitas or VoteAgain), which makes it difficult to combine them with state-of-the-art Internet voting protocols that provide everlasting privacy, as the latter deploy specific cryptographic primitives themselves (e.g., commitment-consistent encryption as in [15]).
Our contributions. We propose DeVoS, a novel publicly verifiable and practically efficient Internet voting protocol that advances the state of the art as described below. In fact, DeVoS provides the following unique combination of privacy properties: (1) Deniable vote updating: DeVoS enables voters to secretly overwrite their potentially coerced votes in an intuitive way. (2) Strong participation privacy: DeVoS hides which voters did and did not vote, even if the voters who did vote try to convince a potential coercer otherwise. (3) Practical everlasting privacy: DeVoS is fully compatible with Perfectly Private Audit Trail (PPAT) [15], a state-of-the-art verifiable Internet voting protocol with everlasting privacy from the public (according to [28]). In this way, DeVoS can be extended to make all of its privacy features (i.e., deniable vote updating, strong participation privacy, and basic vote privacy) unconditionally private from the public, and thus long-term.
DeVoS achieves these properties under realistic assumptions about the election infrastructure (in particular, no anonymous submission channels) and under trust assumptions that are equivalent to state-of-the-art verifiable Internet voting systems with similar strong privacy features (e.g., BeleniosRF, Civitas, or VoteAgain). In particular, even if the voters and the tallying authority are fully malicious, the correctness of the result can still be publicly verified.
Since DeVoS follows the same approach as IVXV to mitigate the risk of coercion, but unlike IVXV provides full public verifiability, DeVoS offers a practical option to make political elections in Estonia more secure.
In addition to proposing DeVoS, we have formally analyzed its security in an established cryptographic security framework for e-voting protocols. We have also implemented the key cryptographic components of DeVoS and provide detailed benchmarks to demonstrate its practical efficiency.
Finally, we study the trade-offs between deniability/participation privacy and efficiency of DeVoS. We propose different optimization strategies to increase efficiency while providing a relaxed yet sufficient deniability/participation privacy guarantee. We analyze these trade-offs using techniques from differential privacy.

Overview of paper.
In Section 2, we elaborate on related Internet voting protocols. In Section 3, we illustrate DeVoS and its features, specify the threat scenario, and discuss how DeVoS relates to the state of the art. In Section 4, we present DeVoS in full technical detail. In Section 5, we state the security and privacy properties of DeVoS. In Section 6, we present two instantiations of the abstract DeVoS protocol, one with conditional and one with unconditional (aka everlasting) privacy from the public. In Section 7, we present efficiency benchmarks for our instantiations of DeVoS. We conclude in Section 8. We provide further technical details in the Appendix.

RELATED WORK
In the following, we briefly review the state of the art on secure Internet voting systems that mitigate the effect of malicious influence on voters.
Fake credentials. One prominent approach is the deployment of fake credentials. In such Internet voting systems, voters have two options when generating their digital ballot: (1) if the voter is free from coercion, she uses her correct credential to generate and submit a ballot for her preferred candidate; (2) if the voter is coerced to vote for another candidate, she makes up a fake credential and uses it to generate a ballot for the forced candidate. Since ballots with fake credentials are secretly but verifiably removed during the tallying process, a coercer cannot tell whether or not the voter has ultimately voted for the coerced candidate. In addition, it remains secret which voters participated in the election.
Fake credentials are used in a number of Internet voting protocols, such as JCJ [34] which was later implemented as Civitas [9].
While fake credentials work in theory, their practical value is limited. First, voters have to remember long, random credentials and enter them correctly; second, the complexity of the tallying phase is such that only elections with small electorates can be run efficiently. There have been some attempts to mitigate these practical limitations (see, for example, [21]), but these usually come at the cost of weakening the security or privacy features.
Re-randomizable signatures. In Internet voting systems with re-randomizable signatures, such as BeleniosRF [5], the voters' digital ballots are re-randomized before being posted to the public bulletin board. With this re-randomization, the random coins used by a voter to encrypt her ballot do not provide any evidence that she has cast a particular vote. This feature, called strong receipt-freeness, protects against voters being able to sell their local data as a convincing receipt to potential vote buyers, even if the voters deviate from their honest program to create such evidence. In addition to the low computational overhead, another advantage of re-randomizable signatures is that they, unlike fake credentials, do not complicate the voters' casting process.
However, this approach has two major limitations. First, it reveals which voters participated in the election, which is not, for example, fully in line with the guidelines of the Venice Commission. Second, this approach does not provide any resistance against the kind of 'primitive' influence where a potential on-site coercer watches a voter enter a vote into the computer and can thus easily verify that the voter is following instructions.
Deniable vote updating. Deniable vote updating is another approach to mitigating the effects of coercion. In such systems, for example VoteAgain [31,41], voters can overwrite their potentially coerced votes in a way that is deniable to the coercer but verifiable by the voters. This feature is typically realized by inserting the encrypted ballots into a swarm of "dummy" ballots, which are then secretly but verifiably canceled out along with the overwritten ballots before the tallying. Essentially the same mechanism also hides which voters actually participated in the election, even if abstaining voters want to prove to a coercer that they have abstained; we call this feature strong participation privacy.
On a practical level, the strategies for human voters to defeat coercion in systems with deniable vote updating are more intuitive than with fake credentials: voters who are coerced to cast a particular vote can cast a new vote after the coercer has left, and voters who are coerced to abstain from voting can cast a vote whenever the coercer is absent.
The main limitation of deniable vote updating is that the assumed coercion-free time window, which is generally required to protect against coercion, is more restricted than in the fake credential approach: in fact, the coercion-free time must lie between the presence of the coercer and the end of the submission phase in order for voters to secretly overwrite their votes.

OVERVIEW
We describe the main idea of DeVoS and explain at an intuitive level why it guarantees the features we claim. We also specify the threat scenario we consider and how DeVoS relates to the state of the art. Since DeVoS can extend any basic secure e-voting system, we first recall the concept of such systems.

Basic secure e-voting
DeVoS is compatible with both existing approaches to secure electronic ballot counting: homomorphic aggregation and verifiable shuffling. In the following, we describe the common basic structure of these two approaches, so that we can explain below how DeVoS extends them.
Cryptography. The basic cryptographic component is an IND-CPA-secure public-key encryption scheme (KeyGen, Enc, Dec). In order to tally the ciphertexts of this scheme, they need to be malleable in the following sense.
If verifiable shuffling is used to shuffle ballots, then ciphertexts must be re-randomizable, meaning that a ciphertext c′ = Enc(pk, m; r′) with 'fresh' randomness r′ can be efficiently computed from a ciphertext c = Enc(pk, m; r) that encrypts the same message m but with different randomness r. This computation requires neither knowledge of the secret key sk, nor of the message m, nor of the randomness r.
The most common implementation of this primitive is ElGamal PKE and its variations.
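For concreteness, the following toy Python sketch shows re-randomization for textbook ElGamal; the parameters and function names are ours for illustration only, and real systems use cryptographically large groups rather than this tiny subgroup.

```python
import secrets

# Toy ElGamal over the order-11 subgroup of Z_23* (illustration only;
# deployed systems use groups of ~256-bit order).
p, q, g = 23, 11, 2  # g = 2 generates the subgroup of order q = 11

def keygen():
    sk = secrets.randbelow(q - 1) + 1
    return sk, pow(g, sk, p)                    # secret key, public key

def enc(pk, m, r=None):
    r = secrets.randbelow(q - 1) + 1 if r is None else r
    return pow(g, r, p), m * pow(pk, r, p) % p  # (g^r, m * pk^r)

def rerandomize(pk, ct):
    # Homomorphically multiply by a fresh encryption of 1: this needs
    # neither the secret key, nor the message, nor the old randomness.
    a, b = ct
    s = secrets.randbelow(q - 1) + 1
    return a * pow(g, s, p) % p, b * pow(pk, s, p) % p

def dec(sk, ct):
    a, b = ct
    return b * pow(a, -sk, p) % p               # m = b / a^sk

sk, pk = keygen()
m = pow(g, 3, p)            # an arbitrary subgroup element as message
ct = enc(pk, m)
ct2 = rerandomize(pk, ct)   # looks unrelated to ct, same plaintext
```

Decrypting `ct` and `ct2` yields the same message, while the two ciphertexts themselves are distinct and, under DDH, unlinkable to an observer.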
Various non-interactive zero-knowledge proofs (NIZKPs) are used to allow parties to produce convincing evidence that can be verified by anyone to check that these parties have processed their data correctly (this is called soundness), without revealing any information other than the correctness of the respective statement (this is called zero-knowledge).
Setup phase. The election authority EA determines the dates, the set of choices, the voting method, the electorate, and any other necessary data. The EA posts this information on a public bulletin board PBB, from which all participants can read and to which they can add their messages.
Submission phase. Each eligible voter V_i can encrypt her individual vote v_i under the public key pk of the tallier T. If the ballots are tallied by shuffling, then V_i encrypts her vote v_i as a single ciphertext c_i ← Enc(pk, v_i). If the votes are counted homomorphically, then V_i first encodes her choice v_i as a binary vector (b_{i,j})_j, where b_{i,j} = 1 if and only if v_i is the j-th candidate, and then encrypts each bit of that representation as c_{i,j} ← Enc(pk, b_{i,j}).
In addition, V_i also computes a NIZKP π_i, the exact statement of which depends on the type of tallying. In fact, if the ballots are tallied by shuffling, then π_i (only) proves that the voter knows the encrypted vote. This property ensures that voters create their votes independently, which is necessary for vote privacy (see, e.g., [25]). If the ballots are tallied homomorphically, then π_i must also guarantee that the ciphertext vector (c_{i,j})_j contains at most one ciphertext c_{i,j} that encrypts 1 while all other ciphertexts encrypt 0; otherwise, a corrupted voter could stuff illegitimate votes or remove valid ones.
Tallying phase. The tallier T takes all submitted ballots as input, removes possible duplicates (to protect against replay attacks [42] that violate vote privacy) and all ballots with invalid proofs, and extracts the ciphertexts (c_i)_i of the remaining ballots.
If the ballots are tallied by shuffling, T first re-randomizes (c_i)_i and then shuffles the result with some random permutation τ into a new ciphertext vector (c′_{τ(i)})_i. Finally, T uses the secret key sk to decrypt this ciphertext vector and posts the election result (v′_i)_i on PBB. This vector is supposed to contain all the voters' choices in random order.
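As a minimal illustration of this mix-and-decrypt step, the following sketch uses our own toy ElGamal parameters and omits the NIZKPs that make a real shuffle verifiable.

```python
import random
import secrets

# Toy shuffle tally: re-randomize, permute, then decrypt (illustrative;
# the verifiable-shuffle NIZKPs of a real tallier are omitted).
p, q, g = 23, 11, 2  # g generates the order-11 subgroup of Z_23*

def enc(pk, m):
    r = secrets.randbelow(q - 1) + 1
    return pow(g, r, p), m * pow(pk, r, p) % p

def rerandomize(pk, ct):
    a, b = ct
    s = secrets.randbelow(q - 1) + 1
    return a * pow(g, s, p) % p, b * pow(pk, s, p) % p

def dec(sk, ct):
    a, b = ct
    return b * pow(a, -sk, p) % p

def shuffle_tally(sk, pk, cts):
    mixed = [rerandomize(pk, c) for c in cts]
    random.shuffle(mixed)               # the secret permutation tau
    return [dec(sk, c) for c in mixed]  # plaintext votes, random order

sk = 5
pk = pow(g, sk, p)
votes = [pow(g, v, p) for v in (1, 2, 1)]  # votes encoded as group elements
result = shuffle_tally(sk, pk, [enc(pk, v) for v in votes])
```

The multiset of decrypted plaintexts equals the multiset of submitted votes, but the re-randomization plus permutation breaks the link between input and output positions.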
If the ballots are tallied homomorphically, T exploits the homomorphic property of the encryption scheme and computes a ciphertext vector (c′_j)_j from (c_{i,j})_{i,j}, where each c′_j encrypts the number of votes for the j-th candidate. Then T uses the secret key sk to decrypt this ciphertext vector and posts the election result (n_j)_j on PBB. This vector is supposed to contain the total number of votes for all candidates.
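The homomorphic aggregation can be sketched with 'exponential' ElGamal, a common instantiation in which a bit b is encrypted as g^b so that component-wise ciphertext products add the underlying bits; the toy parameters are ours, and the well-formedness NIZKPs are omitted.

```python
import secrets

# Toy homomorphic tally with exponential ElGamal (illustrative only;
# ballot well-formedness NIZKPs are omitted, and counts must stay
# below the subgroup order q to avoid wraparound).
p, q, g = 23, 11, 2
sk = 7
pk = pow(g, sk, p)

def enc_bit(b):
    r = secrets.randbelow(q - 1) + 1
    return pow(g, r, p), pow(g, b, p) * pow(pk, r, p) % p

def mul(c1, c2):  # homomorphic addition of the encrypted exponents
    return c1[0] * c2[0] % p, c1[1] * c2[1] % p

def dec_count(ct, max_count):
    a, b = ct
    gm = b * pow(a, -sk, p) % p  # g^count
    return next(m for m in range(max_count + 1) if pow(g, m, p) == gm)

# Four ballots over three candidates, each a unit bit vector.
ballots = [[enc_bit(b) for b in row]
           for row in ([1, 0, 0], [0, 1, 0], [0, 1, 0], [0, 0, 1])]
tally = ballots[0]
for ballot in ballots[1:]:
    tally = [mul(t, c) for t, c in zip(tally, ballot)]
counts = [dec_count(c, len(ballots)) for c in tally]  # votes per candidate
```

Decryption recovers g raised to the vote count, from which the small count itself is found by exhaustive search, which is why this encoding only suits tallies, not arbitrary messages.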
In both cases, the tallier T creates and publishes a NIZKP to prove that it has processed its data correctly. In the case of shuffling, this NIZKP proves that the input ciphertexts were shuffled and decrypted correctly, whereas in the case of homomorphic aggregation, this NIZKP only proves the correctness of the final decryption, since the correctness of the homomorphic aggregation can be verified for free.
Security and trust model. The basic approaches outlined above guarantee the two most fundamental properties of secure voting: verifiability and vote privacy. For verifiability, both the voters and the tallier can be malicious, since voters can individually verify that their submitted votes are appended to the bulletin board, and everyone can verify the well-formedness of the ballots and the correctness of the tallying by checking the respective NIZKPs. Vote privacy is ensured if the tallier T is honest while all voters can be malicious, provided that all NIZKPs are indeed ZK, that the PKE scheme is IND-CPA-secure, and that the voters' NIZKP is a proof of knowledge.
Note, however, that the basic approaches do not provide any additional privacy features that are often necessary for free elections. First, voters can easily prove to a possible vote buyer or coercer how they voted. Second, everyone can see which voters participated in the election. Third, the voters' encrypted choices are published permanently on PBB, where they remain secret only under computational hardness assumptions and could thus be revealed retrospectively.

Illustration of DeVoS
We illustrate how DeVoS extends basic secure e-voting systems, as outlined in Sec. 3.1, with deniable yet verifiable vote updating and strong participation privacy.
In DeVoS, any voter can submit a new ballot that overwrites her previously submitted ballots. Since, without further means, everyone could observe which voters have re-voted and which ones have not, DeVoS hides all voters' ballots in a 'swarm' of dummy ballots. These dummy ballots do not change the voters' choices, but they are indistinguishable from the voters' real ballots. In this way, voters can secretly overwrite their votes at any time during the submission phase, providing a deniable vote update.
More specifically, DeVoS employs an additional authority called the posting trustee PT, which creates the cover of dummy ballots. This party is trusted for deniable vote updating and strong participation privacy, but not for (basic) vote privacy and verifiability. We note that all verifiable Internet voting protocols with strong privacy features (e.g., Civitas, BeleniosRF, or VoteAgain) necessarily use similar entities [7].
On a technical level, the key cryptographic component of DeVoS is the following. Unlike in the basic protocols described in Sec. 3.1, each voter V_i is associated on the bulletin board with a growing vector of ciphertexts c⃗_i rather than a single ciphertext, and each newly appended ciphertext comes with a disjunctive NIZKP π_Enc showing that it is either a fresh, valid ballot submitted by V_i or a re-randomization of the last ciphertext in c⃗_i. At the protocol level of DeVoS, the posting trustee PT is the authority that collects all incoming ballots and periodically updates all ballot vectors c⃗_i. For this purpose, we divide the submission phase into short micro submission phases; at the end of each phase, PT appends to every voter's vector either her newly submitted ballot or a re-randomization of the vector's last ciphertext. These re-randomizations, which do not change the voters' choices, serve as the dummy ballots that collectively hide which voters re-voted and which voters participated in the election at all.
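The posting trustee's per-interval update can be sketched as follows; toy ElGamal re-randomization stands in for full DeVoS ballots, which additionally carry the disjunctive NIZKPs, and all names and parameters are ours for illustration.

```python
import secrets

# Sketch of the posting trustee's update: every voter's vector grows by
# one ciphertext per micro submission phase, either a fresh ballot or an
# indistinguishable re-randomization of the last one (toy ElGamal; the
# disjunctive NIZKPs of real DeVoS ballots are omitted).
p, q, g = 23, 11, 2
sk = 3
pk = pow(g, sk, p)
ABSTAIN, CAND_A, CAND_B = 1, pow(g, 1, p), pow(g, 2, p)

def enc(m):
    r = secrets.randbelow(q - 1) + 1
    return pow(g, r, p), m * pow(pk, r, p) % p

def rerandomize(ct):
    a, b = ct
    s = secrets.randbelow(q - 1) + 1
    return a * pow(g, s, p) % p, b * pow(pk, s, p) % p

def dec(ct):
    a, b = ct
    return b * pow(a, -sk, p) % p

vectors = {v: [enc(ABSTAIN)] for v in range(3)}  # all start at "abstain"

def micro_phase(new_ballots):
    # new_ballots maps voter id -> freshly submitted ciphertext
    for v, vec in vectors.items():
        vec.append(new_ballots[v] if v in new_ballots
                   else rerandomize(vec[-1]))  # dummy ballot

micro_phase({0: enc(CAND_A)})   # voter 0 votes A
micro_phase({2: enc(CAND_A)})   # voter 2 casts a (possibly coerced) vote
micro_phase({2: enc(CAND_B)})   # voter 2 deniably updates to B
finals = [dec(vec[-1]) for vec in vectors.values()]
```

After three micro phases, every vector has the same length, so an observer cannot tell that voter 1 never voted or that voter 2 re-voted, yet decrypting the last entries yields exactly the latest choices.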
Once the submission phase is complete, the last ciphertext in V_i's vector c⃗_i, denoted c_i, is V_i's input to the subsequent (standard) tallying phase.
The correctness of the tallying phase can be verified as in the underlying basic secure voting protocol that DeVoS extends (i.e., checking the tallier's NIZKPs).

Properties of DeVoS
We explain the main features and assumptions of DeVoS.
Remark: honest vs. dishonest. Throughout this paper, we say that a party is honest if it follows its specified program, and dishonest or corrupted if it can run any other probabilistic polynomial-time (ppt) program. Moreover, we assume that all corrupted parties are controlled by one global adversary, i.e., the most pessimistic case.
Public-key infrastructure. In DeVoS, we assume that there is a trustworthy public-key infrastructure for the voters. This is a common assumption in Internet voting systems, both in those with and without additional privacy features (e.g., BeleniosRF [5] and Belenios [12]). To give a practical example, in Estonia, where the IVXV Internet voting system is used, the PKI is established via the residents' ID cards. Of course, as in any (similar) Internet voting system, we must assume that voters in DeVoS do not reveal their credentials/secret keys, since these are necessary to authenticate voters and thus to ensure that only eligible voters can vote.
Verifiability. DeVoS preserves the verifiability of the basic voting protocol that it extends. Due to the soundness of the NIZKPs π_Enc, only V_i can actually overwrite her previously cast votes, while the posting trustee PT can only append dummy ballots that preserve the vote that V_i cast last. At the same time, each individual voter can verify whether her submitted ballot was appended to her ciphertext vector at the end of the respective micro submission phase. Therefore, each voter's input c_i to the tallying phase will contain V_i's actual choice, even if PT is corrupted. This means that the only new entity in DeVoS, namely PT, does not need to be trusted for verifiability either.
However, we currently do not know how to reduce the trust in the voting devices in DeVoS under realistic assumptions, while for the underlying basic secure protocols such methods exist (see, e.g., [44,47]). The hybrid method with paper sheets in BeleniosVS [10] could provide a possible approach to solving this problem for DeVoS.
Vote privacy. DeVoS does not introduce any additional trust assumptions for vote privacy. To see this, recall that the voters' NIZKPs π_Enc are proofs of knowledge, which preserves ballot independence, and that they are ZK, which guarantees that no information about the voters' choices is leaked by these proofs.
Deniable vote updating. Consider the case where a voter V_i is coerced at some point during the submission phase to cast a vote for some candidate that the coercer prefers. In practice, this could happen if the coercer looks over the voter's shoulder and checks that she enters a vote for this candidate and submits the resulting digital ballot. DeVoS protects against this type of coercion, assuming that the posting trustee PT is honest, since the coerced voter can later submit a ballot for her favorite choice after the coercer has left the location. Due to the ZK property of the disjunctive NIZKP π_Enc, for any new ciphertext in the voter's vector c⃗_i on the public bulletin board PBB, the coercer cannot distinguish whether this ciphertext is a new vote by the voter overwriting the coerced vote, or a dummy ballot by the posting trustee preserving the coerced vote.
We note that coercion-resistance in general requires that the voter cannot be monitored all the time, and therefore we need to make such an assumption for deniable vote updating in DeVoS as well. Of course, the assumption in DeVoS and related protocols such as VoteAgain [31,41] is more specific, since it restricts the coercion-free time to the time between coercion and the end of the submission phase. Although not all voters will thus be able to use the deniable vote update of DeVoS, we believe that it can still dramatically mitigate the effect of coercion in practice.
Strong participation privacy. If, in addition to the tallier T, the posting trustee PT is honest, then DeVoS hides which voters participate in an election and which ones do not. In fact, the ZK property of the disjunctive NIZKP π_Enc and the IND-CPA-security of the PKE scheme together obfuscate whether a ciphertext in c⃗_i is a re-randomization of the previous ciphertext or a new encryption of a valid candidate. At the same time, if a voter V_i does not participate in an election, then her input c_i to the tallying phase encrypts a choice for abstention, since c_i is then a re-randomization of the initial ciphertext c_i^(0). These two observations imply that no observer is able to distinguish whether V_i has submitted some ciphertext during the submission phase or not.
While the reasoning above explains why DeVoS hides whether a voter who correctly follows her prescribed program has participated in an election, it also implies that a voter who deliberately wishes to demonstrate to a coercer that she abstained from voting cannot do so convincingly. In fact, due to the soundness of π_Enc, there is only one designated option for all voters to choose "abstain", which precludes a coerced voter from choosing a unique invalid message that could prove her abstention. These arguments illustrate why DeVoS even provides strong participation privacy, which protects, among others, against forced-abstention attacks.
We note that strong participation privacy in DeVoS is guaranteed under a weaker assumption on the coercion-free time window than deniable vote updating. Recall from above that for deniable vote updating we have to assume that coerced voters can secretly overwrite their coerced ballots after coercion has occurred but before the submission phase ends. For strong participation privacy, however, the coercion-free time is not restricted to the end of the submission phase but can lie in any time window (for example, at the beginning of the submission phase), which is the weakest assumption that we can generally make for anti-coercion features.
Practical everlasting privacy. DeVoS is fully compatible with the homomorphic and the shuffling versions of the Perfectly Private Audit Trail (PPAT) voting protocol [15]. According to a recent systematization of knowledge [28], the PPAT protocols are state-of-the-art secure Internet voting protocols with practical everlasting privacy. In such protocols, privacy is guaranteed unconditionally towards the public, meaning that even a computationally unbounded observer would not be able to break vote privacy.
In the PPAT protocols, there is an additional secret bulletin board SBB, which is accessible only to the tallier but not to the public. The SBB contains all election data, including the ciphertexts, which remain only computationally secret due to the limitations of public-key encryption. The PBB, on the other hand, which can be read by anyone, publishes only unconditionally hiding information about the voters' choices, which is, however, sufficient to verify the correctness of the final election result. Cryptographically, PPAT uses a function that deterministically derives from a ciphertext a commitment to the same secret message (without knowledge of the secret key sk); analogously, the voters' NIZKPs and the tallier's NIZKP can be transferred from the ciphertexts to the commitments. Now, we essentially combine DeVoS with PPAT as follows; see Sec. 7 for full details. Instead of publishing the voters' ciphertext vectors c⃗_i on PBB, the posting trustee PT only shares these vectors with the tallier on SBB. Cryptographically, we construct a NIZKP for the voters' disjunctive statement on the ciphertexts, from which an analogous NIZKP can be derived for the derived commitments. Using this feature, the posting trustee PT can extract from each vector c⃗_i that is shared on SBB a vector of unconditionally hiding commitments (with associated NIZKPs), which is published on PBB. The voter V_i can then individually verify whether the derived commitment of her submitted ciphertext appears on the public bulletin board PBB, which is sufficient to ensure that her vote is actually counted.
In this way, DeVoS provides an unconditional, and therefore long-term, means of guaranteeing vote privacy, deniable vote updating, and participation privacy without compromising verifiability.
Efficiency. We have implemented our PPAT instantiation of the disjunctive NIZKP π_Enc, the only non-standard cryptographic primitive in DeVoS. Our results show that, for example, 10,000 dummy votes can be computed per minute on a single thread of a standard laptop for 3-candidate elections; see Section 7 for further benchmarks. Since the dummy ballots can be generated by different servers/threads running in parallel, this task can easily be parallelized in practice.
In addition, the workload of the posting trustee PT can be reduced not only by distributing its role, but also by reducing the size of the dummy cover. In fact, if we implement DeVoS as illustrated in Sec. 3.2, then the posting trustee needs to create up to n dummy ballots per interval, where n is the number of voters. While such a large cover of dummy ballots creates an ideal level of deniable vote updating, we can reduce the size of the dummy ballot cover without significantly reducing this level. First, note that dummy ballots are most effective at the end of the submission phase, so we can safely dispense with the dummy cover before that part. In Estonia, for example, voters can submit their votes online for several days. If we use DeVoS in such elections, we can limit the dummy cover to the last day or even the last hours of the election. Second, we have analyzed that DeVoS achieves a reasonable level of deniable vote updating (as measured by differential privacy [19]) even if PT creates a dummy cover for only some of those voters who did not submit a vote in a given interval (see Sec. 7.2). For example, the level of deniable vote updating remains strong even if we halve the size of the dummy cover.
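One way to realize such a reduced cover is to sample, per interval, a random subset of the voters who did not submit a ballot; the following is a hypothetical sketch in which the function and parameter names are ours.

```python
import random

# Hypothetical sketch of a reduced dummy cover: each voter who did not
# (re)vote in the current interval receives a dummy ballot only with
# probability keep_p (e.g. 0.5 to halve the cover on average).
def dummy_targets(all_voters, revoters, keep_p=0.5, rng=random):
    return [v for v in all_voters
            if v not in revoters and rng.random() < keep_p]

rng = random.Random(0)  # seeded only for reproducibility of the demo
targets = dummy_targets(range(10), {2, 5}, keep_p=0.5, rng=rng)
```

Voters who actually (re)voted never appear among the dummy targets, since their vectors already grow by a fresh ballot in that interval; the privacy cost of sampling is what the differential-privacy analysis quantifies.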
This demonstrates that DeVoS can be used efficiently in practice for electorates of any size, as long as the computational power of the posting trustee scales linearly with the number of voters.
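This linear-scaling argument can be made concrete with simple arithmetic. The helper below treats dummy-ballot throughput, interval length, and cover fraction as assumed deployment parameters (none of these figures are fixed by DeVoS itself):

```python
# All inputs are assumed deployment parameters, not properties of DeVoS:
# `rate_per_min` is the measured dummy-ballot throughput per thread,
# `interval_min` the length of a micro submission interval, and
# `cover_fraction` the share of voters that receives a dummy ballot.
def cover_feasible(n_voters, cover_fraction, rate_per_min, interval_min, threads=1):
    needed = cover_fraction * n_voters                # dummies per interval
    capacity = rate_per_min * interval_min * threads  # what PT can compute
    return needed <= capacity
```

For instance, with a hypothetical per-thread rate of 5,000 dummies per minute, 2-minute intervals, and a halved cover, a single thread covers 20,000 voters and four threads cover 80,000, in line with the benchmark discussion in Section 7.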

Threat scenario
We have summarized the security and privacy features of DeVoS in Table 2. We make the following key observations: (1) We make no assumptions about the basic properties, public verifiability and vote privacy, other than those made by basic secure Internet voting systems such as Belenios [12]. (2) We make no assumptions about everlasting vote privacy other than those made by PPAT [15]. (3) We only require the honesty of the posting trustee for the additional privacy features, deniable vote updating and strong participation privacy. (4) We fully trust the voters' voting devices for all security and privacy properties. (5) For strong participation privacy, we assume that voters have a coercion-free time at an arbitrary point in the submission phase, while we strengthen this assumption for deniable vote updating.
Trusted parties. The trust assumptions in the third and fourth observations are equivalent to those of related state-of-the-art Internet voting protocols such as Civitas, BeleniosRF, or VoteAgain. However, we do not know how to distribute trust in these entities for any of these systems, including DeVoS. We also note that reducing trust in the bulletin board is usually independent of the specific voting protocol; we therefore refer to [14, 33, 35], in which this problem is studied in detail.
Regarding the fourth property, there are hybrid solutions with paper-based voting sheets, such as BeleniosVS [10], which reduce trust in the voting devices, but we are not aware of any viable solution in a fully remote environment.
Coercion-free time. We also note that, in the fifth observation, the first assumption is minimal, while the second one is stronger than in fake-credential approaches such as Civitas, which only require that voters be free from coercion at some arbitrary point in time. There is a proposal in the literature, called KTV-Helios [38], which mitigates this assumption by enabling voters to overwrite anticipated coerced votes in advance; however, in KTV-Helios, the mental work for the human voters is more complex than in DeVoS (or VoteAgain), and KTV-Helios does not protect against forced-abstention attacks, while DeVoS does.
Channels. As in any secure Internet voting protocol with a voter PKI, even basic secure ones like Belenios, we assume for each feature that the voters obtain their credentials secretly from the certificate authority (in our case EA); this minimal assumption is denoted as EA → V_i. For everlasting vote privacy, we additionally need to assume that the adversary learns neither the messages sent from the (honest) voters to the posting trustee nor the messages exchanged between the election authorities; this minimal assumption is denoted as V_i → PT → SBB ↔ T. For (everlasting) deniable vote updating and (everlasting) strong participation privacy, we need to strengthen the assumption on the casting channel and additionally require that the adversary does not learn when a (coerced) voter sends a message to PT; this assumption is denoted as V_i ⇒ PT.
Summary. At this point, we summarize against which types of influence DeVoS protects and what role DeVoS therefore plays in research. Roughly speaking, DeVoS protects against 'primitive' forms of influence. First, it protects against coercers who look over voters' shoulders and check that voters indeed vote for the coerced candidate. As explained earlier, in DeVoS voters can only overwrite their coerced vote if they have enough time to do so before the end of the submission period. DeVoS therefore does not provide complete protection against coercion, but it does provide an efficient way to significantly reduce the impact of coercion on the final result. Furthermore, DeVoS protects against passive observers, or even coercers, who want to check which voters have voted based on the publicly available information on the bulletin board.
Unlike, for example, BeleniosRF, DeVoS does not provide any protection against vote buying based on cryptographic evidence. A voter who casts her vote in the last interval and is able to manipulate her voting device to output her temporary cryptographic data (i.e., the random coins r used to encrypt her vote v) can thus prove to an equally technically skilled vote buyer that her last ciphertext c is an encryption of v; to this end, the vote buyer needs to check whether c = Enc(pk, v; r) holds true. However, as mentioned above, BeleniosRF does not protect against over-the-shoulder coercion by less technically skilled manipulators.

SPECIFICATION
We present the DeVoS voting protocol with full technical details.

Cryptographic primitives
We start with the cryptographic primitives used in DeVoS. Instead of relying on concrete primitives, the security of DeVoS (Sec. 5) can be guaranteed under certain assumptions that the cryptographic primitives have to satisfy. We will demonstrate in Sec. 6 how to instantiate the generic DeVoS protocol efficiently.
Observe that all of the following primitives are standard in modern secure e-voting, except for the disjunctive NIZKP π_Enc, for which we construct appropriate instantiations in Sec. 6.
Public-key encryption scheme. We use an IND-CPA-secure public-key encryption scheme E = (KeyGen, Enc, Dec).
If ballots are counted by shuffling, we assume that ciphertexts in E are re-randomizable, i.e., there exists a ppt algorithm ReRand which takes as input the public key pk and an arbitrary ciphertext c = Enc(pk, m; r) and outputs a ciphertext c′ such that c′ = Enc(pk, m; r′) for some (fresh and unknown) randomness r′.
If ballots are counted homomorphically, we additionally assume that ciphertexts are additively homomorphic, i.e., there exists a ppt algorithm Sum which takes as input the public key pk and ciphertexts c_1 = Enc(pk, m_1) and c_2 = Enc(pk, m_2), and outputs a ciphertext c_3 = Enc(pk, m_1 + m_2) that encrypts the sum of the two messages.
NIZKP of correct key generation. We use a NIZKP π_KeyGen for proving correctness of a public key pk w.r.t. E. The underlying relation R_KeyGen is: for statement x = pk and witness w = (sk, r), we have (x, w) ∈ R_KeyGen ⇔ (pk, sk) = KeyGen(r).
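As a concrete illustration of the abstract ReRand and Sum algorithms, here is a minimal sketch based on exponential ElGamal over a deliberately tiny Schnorr group; this is our own illustrative instantiation, not the paper's concrete parameter choice, and the group is far too small to be secure:

```python
# Toy exponential ElGamal illustrating ReRand and Sum (NOT secure parameters).
import secrets

q = 1019            # prime subgroup order
p = 2 * q + 1       # safe prime 2039
g = 4               # generator of the order-q subgroup mod p

def keygen():
    sk = secrets.randbelow(q - 1) + 1
    return pow(g, sk, p), sk

def enc(pk, m, r=None):
    # Message encoded in the exponent (g^m) so ciphertexts add homomorphically.
    r = secrets.randbelow(q - 1) + 1 if r is None else r
    return (pow(g, r, p), (pow(g, m, p) * pow(pk, r, p)) % p)

def rerand(pk, c):
    # ReRand(pk, c): multiply by a fresh encryption of 0 -> same plaintext,
    # fresh (unknown) randomness.
    a, b = c
    z = enc(pk, 0)
    return ((a * z[0]) % p, (b * z[1]) % p)

def add_ct(c1, c2):
    # Sum(pk, c1, c2): component-wise product encrypts the sum of plaintexts.
    return ((c1[0] * c2[0]) % p, (c1[1] * c2[1]) % p)

def dec(sk, c):
    a, b = c
    gm = (b * pow(a, q - sk, p)) % p   # b / a^sk = g^m
    for m in range(q):                  # brute-force discrete log (toy size only)
        if pow(g, m, p) == gm:
            return m
```

In a real deployment, a standard elliptic-curve group and a lookup table (or small message space) for the final discrete logarithm would replace the brute-force loop.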
One-way function. We use a one-way function f over {0, 1}*.
NIZKP of correct encryption. We use a NIZKP of knowledge π_Enc for proving correctness of a ciphertext c′ w.r.t. public key pk, public verification key vk, (previous) ciphertext c, and set of valid choices C. Note that the relation R_Enc is a disjunction of two statements: (1) c′ is a re-randomisation of c, or (2) c′ is a "fresh" encryption of a valid message m ∈ C and the prover knows a valid secret signing key sk for the public verification key vk.
NIZKP of correct tallying. Since the tallying method of the underlying basic secure e-voting system does not need to be modified, we do not specify the NIZKPs that are used in the tallying phase, but refer to [29].

Protocol participants
The DeVoS protocol is run among the following participants: election authority EA, public bulletin board PBB, voters V_1, . . ., V_n, tallier T, and posting trustee PT. Observe that all participants except for the posting trustee PT are standard (see Sec. 3.1). In order to simplify the presentation of DeVoS, we assume that the respective programs of T and PT are run by single participants, noting that their roles can be distributed using standard techniques.
We assume that all messages from the officials/trustees (i.e., EA, T, and PT) on the public bulletin board PBB are authenticated; this could be achieved by using digital signatures. We also assume that voters implicitly authenticate themselves to the posting trustee PT, and that PT only accepts ballots from the respective authenticated voters. Since the exact method of authentication is not relevant to the overall protocol, we abstract away from it here.

Setup phase
We describe the honest programs run in the setup phase of DeVoS.
Election authority. The election authority EA determines all election parameters and posts them to the bulletin board: security parameter 1^ℓ, list of eligible voters, micro submission period boundaries t_0, t_1, . . ., t_k (t_0: starting time, t_k: end time of the submission period), election ID id_election, and the set of valid choices C.
For each voter V_i, the election authority EA chooses a secret "signing" key sk_i ← {0, 1}^ℓ uniformly at random and computes the corresponding public verification key as vk_i ← f(sk_i), where f is the one-way function mentioned above. The authority publishes the list of public verification keys for all eligible voters on PBB.
Figure 2: Submission phase of DeVoS for voter V_i, who voted, and voter V_i′, who abstained.
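The credential-generation step can be sketched in a few lines; instantiating the abstract one-way function f with SHA-256 is our own illustrative choice, since the protocol leaves f generic:

```python
# Credential generation sketch: sk drawn uniformly from {0,1}^ell, vk = f(sk).
# SHA-256 stands in for the abstract one-way function f (an assumption here).
import hashlib
import secrets

SECURITY_BITS = 128  # stand-in for the security parameter ell

def gen_credential():
    sk = secrets.token_bytes(SECURITY_BITS // 8)  # sk <- {0,1}^ell uniformly
    vk = hashlib.sha256(sk).hexdigest()           # vk = f(sk), published on PBB
    return sk, vk

def check_credential(sk, vk):
    # A prover who knows sk can demonstrate consistency with the public vk;
    # one-wayness of f means vk alone does not reveal sk.
    return hashlib.sha256(sk).hexdigest() == vk
```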
Decryption trustee. The decryption trustee T runs the key generation algorithm of the public-key encryption scheme E to generate its public/private (encryption/decryption) key pair (pk, sk). In addition, T creates a NIZKP π_KeyGen to prove the validity of pk and posts (pk, π_KeyGen) to the public bulletin board PBB.

Submission phase
We describe the honest programs run in the submission phase.
Voter. Voter V_i runs the submission program illustrated in Fig. 2.
Posting trustee. At the end of each micro submission period, the posting trustee PT updates all ballot vectors on the public bulletin board PBB.

Tallying and verification phases
The tallying phase of DeVoS is standard: either verifiable shuffling [29] or homomorphic aggregation with subsequent verifiable decryption. We denote by π_T the NIZKP with which the tallier proves the correctness of the tallying phase. Any participant, including the voters or external observers, can verify the correctness of the previous phases, essentially by checking the correctness of all NIZKPs published during the setup, submission, and tallying phases.

SECURITY
We present the main results of DeVoS' security in terms of verifiability, vote privacy, deniable vote updating, and strong participation privacy. Concerning the last two, we also provide differential privacy results.

Verifiability
We present the verifiability result of DeVoS.
Definition. An e-voting protocol is verifiable if the final election result is accepted only if it corresponds to the actual choices of the voters. We follow [11, 39] to formalize verifiability as follows.
The verifiability definition of [11, 39] is centered around a "virtual" entity, called the judge J. In reality, the program of the judge can be executed by any party, including external observers and even voters themselves. In a given protocol run, the judge J takes as input solely public information (e.g., the zero-knowledge proofs in DeVoS published on PBB) and then performs certain checks.
Specifically, the judge J in DeVoS performs the following checks and accepts a protocol run if all of them pass; otherwise, it rejects the run. The judge reads from PBB the data published by the election authority during the setup phase. The judge then uses this information to verify all the NIZKPs published on PBB during the setup, submission, and tallying phases. The judge also checks whether any voter has posted a complaint on PBB that her submitted ballot was not included. Note that the input to the judge in DeVoS is only public data from PBB.
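The judge's procedure can be summarized as a simple verification loop; the callback names and the bulletin-board representation below are illustrative stand-ins, since the concrete NIZKP verifiers are defined by the instantiation:

```python
# Illustrative stand-ins: each verifier callback returns True iff all NIZKPs
# of the corresponding phase on PBB verify; `pbb` is modeled as a plain dict.
def judge(pbb, verify_setup, verify_submission, verify_tally):
    if not verify_setup(pbb):        # setup-phase proofs (e.g., pi_KeyGen)
        return False
    if not verify_submission(pbb):   # submission-phase proofs (pi_Enc)
        return False
    if not verify_tally(pbb):        # tallying-phase proofs (pi_T)
        return False
    if pbb.get("complaints"):        # any voter complaint rejects the run
        return False
    return True                      # accept: all public checks passed
```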
Result. We make the following assumptions for verifiability: (V1) The PKE scheme E with re-randomization is correct (for verifiability, IND-CPA security is not needed). (V2) The function f : {0, 1}* → {0, 1}* is one-way. (V3) π_KeyGen and π_T are NIP systems (for verifiability, ZK is not needed) and π_Enc is a NIZKP. (V4) The voting authority EA, the public bulletin board PBB, and the judge J are honest. (V5) The messages from EA to any V_i remain private.
Note that the trust assumptions (V4) and the assumption that voters obtain their voting credentials privately (V5) are the same as in basic secure e-voting protocols.The other assumptions, (V1) to (V3), can be realized with appropriate cryptographic building blocks (see Sec. 6).
We now state the verifiability result of DeVoS. We present and prove the formal theorem in Appendix B.
Theorem 5.1 (Verifiability (informal)). Under the assumptions (V1) to (V5) specified above, DeVoS is verifiable by the judge J.
Verifiability essentially follows from the global relation that is implied by the local relations of all individual NIZKPs published on PBB and the fact that each individual voter can verify whether the ballot she submitted has been appended to her ciphertext vector at the end of the respective micro submission phase.
Remark. We note that DeVoS shares the following drawback with all e-voting protocols that offer strong privacy features (e.g., BeleniosRF, Civitas, KTV-Helios); according to [7], this property is intrinsic to such protocols. If the party (in our case EA) that provides the voters' public-key infrastructure and the additional party (in our case PT) that ensures the strong privacy features are corrupt and collude, then they can manipulate the election result undetected. In practice, it is therefore important to make sure that these two agents are run by different entities whose trustworthiness is carefully checked (by non-cryptographic means).

Vote privacy
We present the vote privacy result of DeVoS.
Definition.An e-voting protocol provides vote privacy if all data published during an election does not reveal more information about individual voters' choices than what can be deduced from the final election result.
We follow [40] to formalize vote privacy as follows. Their definition measures the privacy loss of a voting protocol as the advantage of an observer in distinguishing whether an arbitrary honest voter V_obs, called the voter under observation, has voted for the valid choice c_0 or the valid choice c_1. In short, we say that a voting protocol offers vote privacy if, for any voter under observation V_obs, this advantage is negligibly close to the advantage of any observer in an ideal voting protocol that merely outputs the final result.
Result. We make the following assumptions for vote privacy: (VP1) The PKE scheme with re-randomization is IND-CPA-secure. (VP2) The function f : {0, 1}* → {0, 1}* is one-way. (VP3) π_KeyGen and π_T are NIZKPs and π_Enc is a NIZKPoK. (VP4) The voting authority EA, the public bulletin board PBB, and the tallier T are honest. (VP5) The messages from EA to any V_i remain private.
Note that the trust assumptions (VP4) and the assumption that voters obtain their voting credentials privately (VP5) are the same as in basic secure e-voting protocols. The other assumptions, (VP1) to (VP3), can be realized with appropriate cryptographic building blocks (see Sec. 6).
We now state the vote privacy result of DeVoS. We present and prove the formal theorem in Appendix C.
Theorem 5.2 (Vote privacy (informal)). Under the assumptions (VP1) to (VP5) specified above, DeVoS offers vote privacy.
Vote privacy is essentially a consequence of the privacy of the basic voting protocol, which DeVoS extends.

Deniable vote updating
We state the deniable vote updating result of DeVoS.
Definition. An e-voting protocol provides deniable vote updating if all data published during an election hide which voters updated their potentially coerced votes during the submission phase and which ones did not. We formalize this property as a specific instance of the generic definition of coercion-resistance proposed in [40].
The coercion-resistance definition of [40] requires that each coerced voter has the option of executing a counter-strategy that (1) allows the voter to achieve her own goal, but (2) is indistinguishable from the program the voter would execute if she obeyed the coercer. In the case of deniable vote updating, we consider coercers who want voters to submit a particular ballot (e.g., a vote for the coercer's preferred candidate), while the voters want to overwrite their previously submitted coerced ballots (e.g., with a vote for their actual favorite). The concrete counter-strategy that these voters can use in DeVoS is trivial: they simply submit a new ballot for their favorite candidate.
Assumptions. For deniable vote updating of DeVoS, the first three assumptions (DVU1) to (DVU3) are equal to (VP1) to (VP3) (see above), and we additionally assume: (DVU4) The voting authority EA, the public bulletin board PBB, the tallier T, and the posting trustee PT are honest. (DVU5) The messages from any V_i to PT, as well as their sending times, remain private. (DVU6) For each coerced voter, there exists a micro submission phase after the coercer's supervision. The additional assumptions (DVU4) to (DVU6) are essentially standard in coercion-resistant e-voting protocols, such as Civitas [9] or VoteAgain [41]. First, as specified in (DVU4), they require an entity that is trusted to protect against coercion (in our case the posting trustee PT). Second, they assume that voters cannot be coerced during the entire submission phase: in our case, we assume that a coerced voter can submit a new ballot after the coercer has left (DVU6). We believe that this assumption is reasonable for a significant part of the electorate in ordinary real elections, which means that the impact of coercion can be mitigated significantly with our technique for such elections.
Result. We now state the deniable vote updating theorem of DeVoS. We present and prove the formal result in Appendix D.
Theorem 5.3 (Deniable vote updating (informal)). Under the assumptions (DVU1) to (DVU6) specified above, DeVoS offers deniable vote updating.

Strong participation privacy
We state the strong participation privacy result of DeVoS.
Definition. An e-voting protocol provides participation privacy if all data published during an election hide which honest voters participated in the election and which ones did not. This notion can be formalized directly with the vote privacy definition of [40] (see above) by considering "abstention" as a valid choice.
We say that an e-voting protocol achieves strong participation privacy if it provides participation privacy and also guarantees that voters cannot prove convincingly that they abstained. We formalize the latter property as a specific instance of the generic definition of coercion-resistance proposed in [40] as follows.
Recall from above that the coercion-resistance definition of [40] requires that each coerced voter has the option of executing a counter-strategy that (1) allows the voter to achieve her own goal, but (2) is indistinguishable from the program the voter would execute if she obeyed the coercer. In the case of strong participation privacy, we consider voters whose goal is to participate in the election, while the coercer wants them to abstain from voting. The concrete counter-strategy that these voters can use in DeVoS is trivial: they simply submit a ballot.
Assumptions. For strong participation privacy of DeVoS, the assumptions (PP1) to (PP6) are the same as (DVU1) to (DVU6) that we made for deniable vote updating (see above). Again, we note that these assumptions are essentially standard in such voting protocols.
Result. We now state the strong participation privacy theorem of DeVoS. We present and prove the formal result in Appendix D.
Theorem 5.4 (Strong participation privacy (informal)). Under the assumptions (PP1) to (PP6) specified above, DeVoS offers strong participation privacy.
Essentially, DeVoS offers strong participation privacy due to PT's dummy ballot cover, as well as the voters' NIZKP π_Enc, which ensures validity of the votes and thus protects against coercers who demand that voters submit unique invalid messages.

INSTANTIATIONS
We propose two concrete instantiations of the abstract cryptographic primitives of DeVoS (Sec. 4.1). Our first instantiation is compatible with ElGamal-based Internet voting protocols, such as Belenios [12], the most widely used secure Internet voting system in practice. Our second instantiation, the PPAT instantiation of DeVoS, is compatible with Perfectly Private Audit Trail (PPAT) [15], the state-of-the-art verifiable e-voting framework with practical everlasting privacy.
Due to space limitations, we focus on the PPAT instantiation in this section and elaborate on the instantiation with conditional public privacy in Appendix E. In Sec. 7, we will show that both instantiations are practically efficient.
Idea. At a high level, in the PPAT instantiation, we separate the data shared on the bulletin board as follows. The public part of the bulletin board, called PBB, contains all the information needed to verify the correctness of the tallying procedure and thus the final result. This part of the bulletin board is accessible to all parties in order to maintain public verifiability. The secret part of the bulletin board, called SBB, contains all the information needed to tally the voters' ballots. This part can only be accessed by the authorities/trustees EA, PT, and T, but not by the voters or external observers. Now, the secret bulletin board SBB in the PPAT instantiation contains the same information as the public bulletin board PBB in DeVoS, while its PBB contains only unconditionally hiding information about the voters' choices. In particular, just like PPAT, our instantiation uses commitment-consistent encryption (CCE) [15], a notion of PKE that allows anyone to deterministically derive, from a ciphertext, an unconditionally hiding commitment to the same message, without knowledge of the secret key. Unlike the ciphertexts on SBB, which could eventually be decrypted by powerful adversaries, the commitments on PBB reveal no information even to computationally unbounded adversaries. Analogously, from the NIZKPs (with unconditional ZK) that prove relations between the corresponding ciphertexts on SBB, it is possible to deterministically derive NIZKPs with unconditional ZK that prove the same relations between the commitments on PBB. We illustrate this concept in Fig. 3.
The different phases of DeVoS can now easily be updated: whenever a party in the original DeVoS protocol sends a message to the bulletin board, the bulletin board posts this message to the secret section SBB, uses the CCE property to derive the corresponding (unconditionally secret) message, and posts the latter to PBB.
Cryptographic realization. We realize the abstract CCE scheme with an original instantiation from [15], which can be seen as an extended version of ElGamal PKE from which Pedersen commitments can be derived. In the remainder of this section, we recall this concrete primitive and explain how to build the non-standard NIZKP π_Enc on top of it.
The NIZKP of correct key generation π_KeyGen is (again) trivial. For the NIZKP of shuffle π_Shuffle, we refer to [26], which is a machine-checked variant of [52]. For the NIZKP of correct decryption, we refer to [15].
The public verifiability proof of PPAT (see Appendix B) carries over to our PPAT instantiation, since it is implied by the individual NIZKPs on PBB, whose respective relations remain equivalent. The practical everlasting privacy of PPAT follows from the fact that all information (commitments and NIZKPs) on PBB is unconditionally secret (hiding or zero-knowledge, respectively).
Commitment-consistent encryption. A commitment-consistent encryption scheme E‡ is a tuple of ppt algorithms (Gen‡, KeyGen‡, Enc‡, Dec‡, DeriveCom‡, Open‡, Verify‡), where Gen‡ outputs public parameters on input a security parameter, (KeyGen‡, Enc‡, Dec‡) is a PKE scheme, and (DeriveCom‡, Open‡, Verify‡) are defined as follows:
• DeriveCom‡(pk, c): takes a ciphertext c as input and outputs a commitment d using pk.
• Open‡(sk, c): takes a ciphertext c as input and outputs an auxiliary value a that can be considered part of an opening for the commitment d.
The following correctness property should be satisfied. For any parameters and key pair output by Gen‡ and KeyGen‡, any message m from the message space, and any c = Enc‡(pk, m), it holds with overwhelming probability in the security parameter that Dec‡(sk, c) = m and that Verify‡ accepts the derived commitment DeriveCom‡(pk, c) as a commitment to m with auxiliary opening value Open‡(sk, c). Similarly to the CPA-secure encryption scheme used in DeVoS, we assume that it is possible to efficiently re-randomize a ciphertext c = Enc‡(pk, m; r) into c′ = Enc‡(pk, m; r′) using pk and some "fresh" randomness. Moreover, we assume that any commitment d = DeriveCom‡(pk, Enc‡(pk, m; r)) can be efficiently re-randomized into d′ = DeriveCom‡(pk, Enc‡(pk, m; r′)) using pk and some "fresh" randomness.
We now describe the exponential CCE encryption scheme instantiated from bilinear groups, which is a slight variant of [15]. Let Λ = (q, G_1, G_2, G_T, ê, g, h) be a description of bilinear groups, where g is a generator of G_1, h is a generator of G_2, and ê is an efficient and non-degenerate bilinear map ê : G_1 × G_2 → G_T. We assume the DDH problem is hard in both G_1 and G_2. The parameter generation chooses an additional element h_1 and sets the public parameters to (Λ, h_1); the public parameters are implicitly given as input to the rest of the algorithms, which follow [15].
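For intuition only, the following pairing-free toy mimics the CCE interface in a tiny Schnorr group: a Pedersen-style commitment g^m · t^s travels inside the ciphertext, DeriveCom reads it off deterministically, and Open recovers the blinder t^s as the auxiliary opening value. The actual scheme of [15] uses bilinear groups precisely to bind these components together, which this sketch does not attempt:

```python
# Toy stand-in for commitment-consistent encryption (NOT the bilinear scheme).
import secrets

q = 1019
p = 2 * q + 1   # toy Schnorr group, far too small to be secure
g, t = 4, 9     # two subgroup generators, treated as independent (assumption)

def keygen():
    sk = secrets.randbelow(q - 1) + 1
    return pow(g, sk, p), sk

def enc_cc(pk, m):
    r = secrets.randbelow(q - 1) + 1
    s = secrets.randbelow(q - 1) + 1
    blinder = pow(t, s, p)                           # t^s hides the commitment
    a = pow(g, r, p)
    b = (blinder * pow(pk, r, p)) % p                # ElGamal encryption of t^s
    d = (pow(g, m, p) * blinder) % p                 # Pedersen commitment to m
    return (a, b, d)

def derive_com(e):
    # DeriveCom: deterministic, no secret key; d is perfectly hiding.
    return e[2]

def open_cc(sk, e):
    # Open: the key holder recovers the blinder t^s as the auxiliary opening.
    a, b, _ = e
    return (b * pow(a, q - sk, p)) % p               # b / a^sk = t^s

def dec_cc(sk, e):
    gm = (e[2] * pow(open_cc(sk, e), p - 2, p)) % p  # d / t^s = g^m
    for m in range(q):                               # toy-size discrete log
        if pow(g, m, p) == gm:
            return m

def verify_open(d, m, aux):
    # Verify: does d open to m with auxiliary value aux = t^s?
    return d == (pow(g, m, p) * aux) % p
```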
We refer to App.F for details on the NIZKP of well-formedness that we constructed.

EFFICIENCY
We study the practical efficiency of DeVoS and different optimization strategies. First, we show how the abstract cryptographic primitives employed in DeVoS (Section 6) can be instantiated and that the resulting system can be implemented efficiently. Afterwards, we analyze different approaches to significantly reduce the workload of the posting trustee while maintaining a reasonable level of deniable vote updating and strong participation privacy.

Implementation
We implemented a proof-of-concept prototype of the NIZKPoK of correct encryption π_Enc, because this primitive is the only non-standard primitive in our instantiation of DeVoS, and because the posting trustee PT needs to compute this proof numerous times during the submission phase.
We use the relic C-library [1] as a backend, relicwrapper [3] for binding relic with C++/python, and the boost library.The relic library is configured to use the gmp library for arithmetic operations and makes use of several optimization techniques by default, such as Comba multiplication, Montgomery modular reduction, and sliding window modular exponentiation.
Experiments. All computations are done locally in a single thread on an eight-core Intel Core i7-8565U 1.80 GHz Linux machine with 16 GB RAM and a 1 TB SSD. We start by measuring the running time of the proof and verification algorithms of the π_Enc proof system in DeVoS. For the m ∈ C part of the proof system in shuffle-based tallying, we first instantiate classical range proofs for the statement m ∈ [0, 2^k − 1], which boils down to proving k times that an ElGamal ciphertext encrypts either 0 or 1 [6, 13]. If the votes are tallied homomorphically, k ElGamal ciphertexts encrypting either 0 or 1 allow us to encode |C| = k − 1 choices. To simplify the benchmarks, in both tallying methods we measure the runtime of the encryption and re-randomisation algorithms for arbitrary k bits; the π_Enc part is not affected by this simplification. We refer to Tab. 3 (the left part) for the running time of the proof system π_Enc in DeVoS.
We can instantiate m ∈ C more efficiently, e.g., using log-proofs [17]. We adapt the reference implementation [17] for log-proofs from the numerical setting to bilinear maps and measure the running time of solely the m ∈ C part. We compare the log-proofs with the classical approach used in the first part of the experiment. The results are averaged over 10 runs. We refer to Tab. 3 (the right part) for the respective micro-benchmark numbers.
Analysis. The proof system uses an OR-proof over the re-randomisation branch (dummy ballots) and the fresh-encryption branch (new votes by the voters), together with the proof of correct choice, which depends on the number of vote choices. We note that adding the well-formedness component (m ∈ C) to the NIZK proofs does not significantly affect the overall performance. If shuffle-based tallying is used, instantiating NIZK proofs of well-formedness using range proofs incurs only a logarithmic overhead, whereas this component is essentially free if the votes are tallied homomorphically. For a small parameter such as k = 2, which corresponds to four choices in the shuffle-based version and one choice in the homomorphic version, computing a proof for a pair of ciphertexts and verifying the proof each take less than 12 ms.
When running in a single thread, the posting trustee can generate 10,000 votes in about 2 minutes. Note that our system is highly parallelizable, as the ballots of different voters are not related to each other. Together with the optimization, a single posting trustee can take care of 20,000 voters if each micro submission phase lasts 2 minutes and dummy ballots are added for only half of the voters (see Section 5); this number increases to 80,000 when running on (perfectly parallel) 4 threads. The ciphertext size is 2 group elements; the size of the encryption proof depends on the concrete instantiation of the range proofs.
We observe that, since the cost of the proof is dominated by the |C| term, we can upgrade DeVoS to its PPAT instantiation at little extra cost.

Optimizations
We study different ways to improve the efficiency of DeVoS by reducing the size of the dummy cover. Specifically, we analyze the level of deniable vote updating under the following four strategies:
(1) PT generates dummy ballots only in the last u of the k micro submission phases.
(2) The voters are divided into n′ groups, and in the j-th submission phase PT adds dummy ballots only to those voters who are in group j (mod n′) and do not submit a ballot during the current submission phase.
(3) PT adds dummy ballots with probability p < 1 for those voters who do not vote during the current submission phase.
(4) PT adds dummy ballots to some (randomly selected) voters who do not submit a ballot during the current submission phase, so that there is a ballot (real or dummy) for exactly a fraction p′ < 1 of the voters.
Reducing the scope of the deniability guarantee. Strategies (1) and (2) reduce the posting trustee's workload by restricting the dummy cover to certain time windows. Hence, any vote update outside of these windows will not be deniable. Strategy (1) reduces the workload to a u/k fraction, but provides only a single window for deniable vote updates. In contrast, due to its periodic nature, strategy (2) provides k/n′ distinct opportunities for deniable vote updates while reducing the cost to a 1/n′ fraction.
Relaxing the strength of the deniability guarantee. Another way to reduce the (computational and storage) burden on PT is to relax the deniable vote updating guarantee rather than to reduce its range. This way, the same (but relaxed) guarantee is provided throughout the entire submission phase. This is the goal of strategy (3), which reduces the expected workload of PT to a fraction p. Strategy (4) improves on this probabilistic efficiency gain by fixing the number of ballots (dummy or legitimate) within a micro submission phase. In the following, we analyze these strategies using differential privacy.
Differential Privacy Relaxations. We assume the reader is familiar with differential privacy (DP) [19]; a short introduction is presented in App. G. In a nutshell, by adding noise, DP ensures that an arbitrary change to a single record in the database has a small (bounded) effect on the result. DP can be adapted to DeVoS
quite naturally: the data set can be the state of the bulletin board, so the records are the votes (i.e., DP protects votes instead of voters).
If only a limited number of dummy ballots are created, this 'positive' noise makes the attacker uncertain about the validity of a recorded ballot. Without 'negative' noise (i.e., removing real votes rather than dummy ones), the protection is inherently asymmetric, since the attacker can be sure that a missing ballot means that no vote was cast. Thus, there is a trade-off in such optimizations: higher efficiency comes at the cost of weaker privacy protection.
Indeed, in DeVoS, PT can only add dummy votes (and hide the presence of a ballot), but cannot remove valid ballots (to hide the absence of a ballot), as this would directly contradict the correctness of the final result. This asymmetry can be built directly into the definition, as in [37, 51]. In this paper, we use one-sided DP (OSDP) [37], which only protects the sensitive states defined by a policy. In our case, the two states are the sensitive 'presence' and the non-sensitive 'absence'. Note that OSDP does not solve the privacy leakage of non-existent votes; our goal is to show which privacy guarantee DeVoS still achieves when optimized, i.e., what the price (in privacy) of the increased efficiency is.
DP analysis. First, we focus on a single submission period and show in Theorem 7.1 that strategy (3) satisfies one-sided DP with parameter ε = log(1/p). The proof of this theorem and the result for strategy (4) are given in Appendix G.
Theorem 7.1. If PT generates each dummy ballot for each voter according to the IID Bernoulli(p) distribution with p < 1, then strategy (3) satisfies log(1/p)-OSDP within a submission phase, where the presence of a ballot is protected.
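The bound of Theorem 7.1 can be sanity-checked empirically: a ballot slot is occupied with probability 1 if the voter cast a ballot and with probability p if she abstained, so the worst-case likelihood ratio over the protected 'presence' state is 1/p. A small Monte Carlo sketch (illustrative only, with an arbitrary seed):

```python
# Empirical check of the log(1/p) bound for strategy (3).
import math
import random

def simulate(p, trials, rng):
    # Fraction of occupied ballot slots when the voter cast a real ballot
    # (always posted) vs. when she abstained and may only receive a
    # Bernoulli(p) dummy from the posting trustee.
    present_voted = trials  # a cast ballot is always posted
    present_abstained = sum(rng.random() < p for _ in range(trials))
    return present_voted / trials, present_abstained / trials

rng = random.Random(42)
pv, pa = simulate(0.5, 200_000, rng)
empirical_eps = math.log(pv / pa)  # should approach log(1/p) = log 2
```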
Extending the analysis to multiple rounds is straightforward thanks to the composition property of DP [16]: for instance, if strategy (3) yields log(1/p)-OSDP in a single micro submission phase, then it yields k · log(1/p)-OSDP for k micro submission phases. Since this grows linearly with k, the DP guarantee deteriorates rapidly. In fact, DP is a worst-case guarantee, i.e., it covers cases where voters update their votes in every submission phase. Without protecting against such outliers, a tighter DP guarantee is possible, as Theorem 7.2 shows for strategy (3). We present our proof in Appendix G.

Theorem 7.2. If PT generates each dummy ballot for each voter according to the IID Bernoulli(p) distribution with p < 1 (i.e., via strategy (3)), and 'after a coercion' the voters update their votes at most c times, then regardless of the number of remaining micro submission phases, their votes enjoy c · log(1/p)-OSDP.
Recall from Section 3 that the main goal of DeVoS is to offer deniable vote updating and strong participation privacy. Therefore, assuming that each voter updates her vote at most once, the privacy parameter for OSDP that strategy (3) provides for the entire protocol run is log(1/p). Note that this is independent of the remaining rounds, i.e., of the micro submission phase in which the coercion took place. As a best practice, the rule of thumb is to set the privacy parameter of any DP mechanism below 1.0; otherwise, the privacy guarantee provided may be too weak. As a consequence, strategy (3) with p > 1/e ≈ 0.37 can still yield more than a 60% efficiency improvement. Finally, despite this strong result, it is important to note that DP provides a probabilistic guarantee. Thus, it is still possible (to an extent determined by the privacy parameter) to use statistical tests (similar to [50]) to determine whether a vote has been updated.
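The ε < 1 rule of thumb and the resulting efficiency gain can be worked out numerically. A sketch under the single-update assumption above (function names are ours):

```python
import math

def composed_epsilon(p, c):
    """OSDP parameter after at most c vote updates under strategy (3):
    c * log(1/p), by DP composition."""
    return c * math.log(1.0 / p)

def min_p_for_budget(eps_budget, c):
    """Smallest dummy probability p that keeps c updates within the
    budget: c * log(1/p) <= eps  <=>  p >= exp(-eps/c)."""
    return math.exp(-eps_budget / c)

# One update, budget eps = 1: p must be at least 1/e ~ 0.37, so PT's
# expected dummy workload can shrink by up to ~63%.
p_min = min_p_for_budget(1.0, 1)
assert abs(p_min - 1 / math.e) < 1e-12
assert composed_epsilon(p_min, 1) <= 1.0 + 1e-12
```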

FUTURE WORK
We plan to extend our initial contribution in three aspects. First, we plan to analyze the usability of DeVoS and related Internet voting protocols in order to learn which techniques are effective in practice; we conjecture that the voter ceremony in DeVoS is sufficiently intuitive to offer its strong privacy features not only in theory but also in real-world elections. Second, we intend to study how trust in the voters' voting devices in DeVoS can be mitigated, for example by augmenting the protocol with ballot sheets as in BeleniosVS. Third, we plan to complement our pen-and-paper cryptographic security analysis of DeVoS with computer-aided verification tools.

ACKNOWLEDGMENTS
This work was supported in part by the joint project EIZ: Energy Innovation Center (project numbers 85056897 and 03SF0693A) with funds from the Structural Development Act (Strukturstärkungsgesetz) for coal-mining regions.

A COMPUTATIONAL MODEL
We formally model DeVoS in a general computational framework that we can use to analyze both verifiability and privacy properties.
Background. Our underlying computational model (see, e.g., [11] for details) introduces the notion of a process, which can be used to model protocols. Essentially, a process πP modeling some protocol P is a set of interacting ppt (probabilistic polynomial-time) Turing machines which capture the honest behavior of the protocol participants. The protocol P runs alongside an adversary A, modeled via another process πA, which controls the network and may corrupt protocol participants; here we assume static corruption. We write π = (πP ∥ πA) for the combined process.
Modeling of DeVoS. The DeVoS voting protocol can be modeled in a straightforward way as a protocol P_DeVoS(n, C, μ) in the above sense, as detailed next. By n we denote the number of voters V_1, . . . , V_n. By μ we denote a probability distribution on the set of possible choices C. An honest voter makes her choice according to this distribution. This choice is called the actual choice of the voter.
In our model of DeVoS, the voting authority EA is part of an additional agent, the scheduler S. Besides playing the role of the authority, S schedules all other agents in a run according to the protocol phases. We assume that S and the public bulletin board PBB are honest, i.e., they are never corrupted. While S is only a virtual entity, PBB should in reality be implemented in a distributed way (see, e.g., [14, 33, 35]).

B VERIFIABILITY
We analyze verifiability of DeVoS. For this purpose, we use the generic verifiability definition proposed in [39]. We recall this definition in what follows. Afterwards, we precisely state under which assumptions DeVoS is verifiable according to [39].
Framework. The verifiability definition [39] is centered around a "virtual" entity, called the judge J. In reality, the program of the judge can be run by any party, including external observers and even voters themselves. In a given protocol run, the judge J takes as input solely public information (e.g., the zero-knowledge proofs in DeVoS published on the bulletin board) and then performs certain checks. If all checks pass, the judge accepts the protocol run, and rejects it otherwise.
In the context of e-voting, for verifiability to hold, the judge should only accept a run if "the announced election result corresponds to the actual choices of the voters". This statement is formalized by the notion of a goal γ of a protocol P. A goal γ is simply a set of protocol runs for which the above statement is true, where the description of a run includes the description of the protocol, the adversary with which the protocol is run, and the random coins used by these entities.
According to [39], a goal γ is verifiable by the judge J in a protocol P if and only if J accepts a run r of P in which the goal γ is violated (i.e., r ∉ γ) with at most negligible probability (in the security parameter). In order to capture this notion formally, we denote by Pr[(πP ∥ πA)(ℓ) ↦ ¬γ, (J : accept)] the probability that a run of the protocol together with an adversary πA (and security parameter ℓ) produces a run which is not in γ, but in which J (still) returns accept. This probability should be negligible.

Definition B.1 (Verifiability [39]). We say that a goal γ is verifiable by the judge J in a protocol P if for all adversaries πA, the probability Pr[(πP ∥ πA)(ℓ) ↦ ¬γ, (J : accept)] is negligible as a function of ℓ.
For our subsequent verifiability analysis of DeVoS, we instantiate the verifiability definition with the goal γ(φ) proposed in [11]. This goal captures the intuition of γ given earlier. The parameter φ is a Boolean formula describing which protocol participants are assumed to be honest. The goal γ(φ) is defined formally as described next.
Definition B.2 (Goal γ(φ) [11]). Let P be a voting protocol. Let V_h and V_d denote the set of honest and dishonest voters, respectively, in a given protocol run. Then, γ(φ) consists of all those runs of the voting protocol P where either
• φ is false (e.g., the adversary corrupted a voter that is assumed to be honest), or
• φ holds true and there exist (valid) dishonest choices (c_i)_{i ∈ V_d} such that the election result equals (c_i)_{i ∈ V_h ∪ V_d} (modulo permutation), where (c_i)_{i ∈ V_h} are the honest voters' choices.
(V5) The messages from EA to any V_i remain private.
The verification procedure J of DeVoS essentially consists of checking the NIZKPs published on the bulletin board PBB and whether a voter complained that her submitted ballot was not appended: if one of these checks fails, the protocol run and hence the result are rejected. Now, the following theorem states that the probability that, in a run of DeVoS, an honest voter's vote has been dropped or manipulated while φ holds true (i.e., γ(φ) is broken) but the protocol run is nevertheless accepted by J is negligible.

Theorem B.3 (Verifiability). Under the assumptions (V1) to (V5) stated above, the goal γ(φ) is verifiable in the protocol P_DeVoS(n, C, μ) by the judge J.
Proof. Assume that assumptions (V1) to (V5) hold true. In order to prove Theorem B.3, we need to show the following implication: if the judge J outputs accept in a given protocol run of DeVoS (in which (V1) to (V4) are satisfied), then there exist (valid) dishonest choices (c_i)_{i ∈ V_d} such that the election result equals (c_{π(i)})_{i ∈ V_h ∪ V_d}, where (c_i)_{i ∈ V_h} are the honest voters' choices and π is some permutation.
Assume that we are in a run r of DeVoS in which J outputs accept. Then, due to the specification of J, each NIZKP published on PBB is valid.
Let V_i be an arbitrary honest voter who chose c_i and thus submitted the ballot (e′, π_Enc) where e′ ∈ Enc(pk, c_i). Let e″ be the ciphertext appended by the posting trustee PT to V_i's vector e⃗_i right after e′ (if any). Since each NIZKP is valid, due to the soundness of π_Enc (assumption (V3)), it follows that (1) e″ ∈ Enc(pk, c*) for some c*, or (2) e″ ∈ ReRand(pk, e′). In case (1), due to the knowledge soundness property of π_Enc and the fact that PT posted a valid proof for e″, it follows that PT knows a witness (ssk_i, c*, r) for the relation vk_i = f(ssk_i) ∧ e″ = Enc(pk, c*; r) ∧ c* ∈ C. However, since V_i is honest, there are only two possibilities for PT to learn ssk_i: extracting ssk_i from vk_i = f(ssk_i) or extracting ssk_i from V_i's NIZKP π_Enc for e′. Due to the one-way property of f (assumption (V2)), the ZK property of π_Enc (assumption (V3)), and the fact that the channel from EA to V_i, over which the credential ssk_i is sent, is private (assumption (V5)), we can deduce that case (1) can occur in at most a negligible set of protocol runs of DeVoS. Hence, with overwhelming probability in the security parameter ℓ, case (2) occurs.
By the re-randomisation property of the encryption scheme E (assumption (V1)), it therefore follows by induction that the last ciphertext e_i in e⃗_i, which is V_i's input to the subsequent tallying phase, encrypts V_i's choice, i.e., e_i ∈ Enc(pk, c_i). Now, let V_j be an arbitrary dishonest voter and let e_j be the last ciphertext in e⃗_j. Due to the soundness of π_Enc (assumption (V3)), there are two possible cases: (1) e_j is a re-randomisation of the previous ciphertext, or (2) e_j is a fresh encryption of some c_j ∈ C.
In case (1), we can again distinguish between the same two cases for the previous ciphertext, and so on. Due to the re-randomisation property of E (assumption (V1)), it therefore follows that e_j ∈ Enc(pk, c_j) for some c_j ∈ C.
From what we have shown above, we can deduce that (with overwhelming probability) the input to the tallying phase consists of ciphertexts (e_i)_{i ∈ V_h ∪ V_d}, where for each i ∈ V_h the respective ciphertext e_i encrypts V_i's intended choice c_i, and for each j ∈ V_d the respective ciphertext e_j encrypts some c_j.
Since the judge accepted the protocol run, the tallier's NIZKP π_T proving the correctness of the tallying phase is valid. Due to the soundness of π_T (assumption (V3)), Theorem B.3 follows. □

C VOTE PRIVACY
We analyze vote privacy of DeVoS. We show that the privacy level of DeVoS is ideal under minimal assumptions. To this end, we use the privacy definition for e-voting protocols proposed in [40]. In what follows, we first recall this definition and then state the privacy result of DeVoS for the shuffling-based version (because we showed that this mode is practically advantageous over the homomorphic mode, see Section 7).
Framework. The definition proposed in [40] formalizes privacy of an e-voting protocol as the inability of an adversary to distinguish whether some voter V_obs (the voter under observation), who runs her honest program, voted for choice c_0 or choice c_1.
To define this notion formally, we first introduce the following notation for an arbitrary e-voting protocol P. Given a voter V_obs and c ∈ C, we consider instances of P of the form (πV_obs(c) ∥ π* ∥ πA), where πV_obs(c) is the honest program of the voter V_obs under observation who takes c as her choice, π* is the composition of the programs of the remaining parties in P, and πA is the program of the adversary. In the case of DeVoS, π* includes the scheduler S, the public bulletin board PBB, a set of uncorrupted voters, the mix server M, and the decryption trustee T.
Let Pr[(πV_obs(c) ∥ π* ∥ πA)(ℓ) ↦ 1] denote the probability that the adversary writes the output 1 on some dedicated tape in a run of (πV_obs(c) ∥ π* ∥ πA) with security parameter ℓ and some c ∈ C, where the probability is taken over the random coins used by the parties in (πV_obs(c) ∥ π* ∥ πA). Now, vote privacy is defined as follows.
Definition C.1 (Privacy). Let P be a voting protocol, V_obs be the voter under observation, and δ ∈ [0, 1]. Then, P achieves δ-privacy if, for all choices c_0, c_1 ∈ C and all adversaries πA, the difference Pr[(πV_obs(c_0) ∥ π* ∥ πA)(ℓ) ↦ 1] − Pr[(πV_obs(c_1) ∥ π* ∥ πA)(ℓ) ↦ 1] is δ-bounded as a function of the security parameter ℓ.
In other words, the level δ is an upper bound on an arbitrary adversary's advantage in "breaking" vote privacy. Therefore, δ should be as small as possible. Note, however, that even for an ideal e-voting protocol with a completely passive adversary, δ might not be 0: for example, there might be a non-negligible chance that all honest voters, including the voter under observation, voted for the same candidate, in which case the adversary can easily derive from the final election result how the voter under observation voted.
Result. We now state that DeVoS provides ideal vote privacy, essentially under the assumption that the mix server M and the decryption trustee T are honest.
More specifically, the formal privacy result for DeVoS is formulated w.r.t. an ideal voting protocol I_voting(n, C, μ, c) (see Fig. 4). In this protocol, all n voters pick their candidates according to the distribution μ, and the voter under observation votes for c. The ideal protocol outputs these choices permuted uniformly at random. The privacy level δ_ideal(n, C, μ) that this ideal protocol achieves depending on the given parameters was derived in [40].
To prove that the privacy level of DeVoS is ideal, we make the following assumptions about the primitives we use and the protocol parties involved (see Section 4). Now, the following privacy theorem says that the privacy level of DeVoS is ideal under these assumptions.
Theorem C.2 (Privacy). Let C be the set of valid choices, excluding abstention. Then, under the assumptions (VP1) to (VP5) stated above, the voting protocol P_DeVoS(n, C, μ) achieves a privacy level of δ_ideal(n_h, C, μ).
Proof. In this section, we prove Theorem C.2, which establishes the privacy level of DeVoS. This privacy level can be expressed using the privacy level δ_ideal(n, C, μ) of the voting protocol I_voting(n, C, μ) with ideal privacy (see Fig. 4).
Overview of the proof. Recall that, in order to prove the theorem for the protocol DeVoS with n voters, choice space C, voting distribution μ, and voter under observation V_obs, we have to show that Pr[(πV_obs(c_0) ∥ π*) ↦ 1] − Pr[(πV_obs(c_1) ∥ π*) ↦ 1] is δ_ideal(n_h, C, μ)-bounded as a function of the security parameter ℓ, for all c_0, c_1 ∈ C and all programs π* of the remaining parties such that the scheduler S, the public bulletin board PBB, the mix server M, the decryption trustee T, and at least n_h voters are honest in π* (excluding the voter under observation V_obs).
We can split the composition π* into its honest and its (potentially) dishonest part. Let HV be the set of all honest voters (without the voter under observation) and πHV be the composition of their honest programs. The honest part, which we denote by πH = πS ∥ πPBB ∥ πM ∥ πT ∥ πHV, consists of the honest programs πS, πPBB, πM, πT, πHV of the scheduler S, the public bulletin board PBB, the mix server M, the decryption trustee T, and the honest voters HV, respectively. By πH(c) we denote the composition of all honest programs including the program of the voter under observation V_obs, i.e., πH(c) = πH ∥ πV_obs(c). All remaining parties are subsumed by the adversarial process πA. This means that we can write πV_obs(c) ∥ π* as πH(c) ∥ πA.
In order to prove the result, we use a sequence of games. We fix c ∈ C and start with Game 0, which is simply the process πH(c) ∥ πA.
Proof. This follows from the fact that the simulator uses the ideal voting functionality I_voting(n_h, C, μ, c) to compute the honest voters' choices.

D STRONG PARTICIPATION PRIVACY AND DENIABLE VOTE UPDATING
We analyze strong participation privacy and deniable vote updating of DeVoS. As in our analysis of vote privacy (App. C), we restrict our attention to the mix-net version of DeVoS.
Frameworks. We use the coercion-resistance definition of [40], which assumes that a coerced voter has a certain goal γ that she would try to achieve in the absence of coercion. Formally, γ is a property of the voting protocol P. In the case of strong participation privacy, γ expresses that the coerced voter wants to vote for an arbitrary candidate, so γ contains all runs in which the coerced voter voted for an arbitrary candidate and this vote is in fact counted. In the case of deniable vote updating, γ expresses that the coerced voter wants to vote for a specific candidate, say c.
In this definition of coercion-resistance, the coercer wants the coerced voter to run a certain strategy, the dummy strategy dum, instead of the program an honest voter would run. In the case of strong participation privacy, if the coerced voter runs dum, then she abstains from voting. In the case of deniable vote updating, if the coerced voter runs dum, then she votes for the candidate that the coercer told her to vote for. Now, for a protocol to be coercion-resistant, this definition requires that there exists a counter-strategy ṽ that the coerced voter can run instead of dum such that (i) the coerced voter achieves her own goal γ, with overwhelming probability, by running ṽ, and (ii) the coercer is not able to distinguish whether the coerced voter runs dum or ṽ. Similarly to the vote privacy definition (see Appendix C), this definition measures the ability of the coercer to distinguish between these two cases. Hence, ṽ has to simulate dum while at the same time making sure that γ is achieved. In the case of strong participation privacy, ṽ is the prescribed program that an honest voter runs in DeVoS if she votes for an arbitrary candidate. In the case of deniable vote updating, ṽ is the prescribed program that an honest voter runs in DeVoS if she votes for her favorite candidate.

Definition D.1 (Coercion-resistance). Let P be a voting protocol, γ be a property of P, and δ ∈ [0, 1]. Then, P is δ-coercion-resistant w.r.t. γ if there exists a counter-strategy ṽ such that conditions (i) and (ii) above hold for all adversaries πA.

Results. We refer to Section 5.4 for the assumptions about the primitives we use and the protocol parties involved (see Section 4).
The following theorem states that the coercion-resistance level of DeVoS for voters who decide to vote (for an arbitrary candidate) is ideal under these assumptions.

Theorem D.2 (Strong participation privacy). Let γ contain all runs in which a voter votes for an arbitrary candidate and this vote is in fact counted. Then, under the assumptions (PP1) to (PP6) stated above, the voting protocol P_DeVoS(n, C, μ) achieves a coercion-resistance level of δ_ideal(n_h, C, μ).
The following theorem states that the coercion-resistance level of DeVoS for voters who decide to vote for their favorite candidate is ideal under these assumptions.

Theorem D.3 (Deniable vote updating). Let γ contain all runs in which a voter votes for her favorite candidate and this vote is in fact counted. Then, under the assumptions (PP1) to (PP6) stated above, the voting protocol P_DeVoS(n, C, μ) achieves a coercion-resistance level of δ_ideal(n_h, C, μ).
Proofs (sketches). Both proofs are based on the vote privacy proof presented in Appendix C. First, we note that the vote privacy game and the coercion-resistance game coincide in the sense that, in both cases, the observer/coercer needs to distinguish between the case that the voter under observation/the coerced voter selects a (valid) choice c_0 or c_1. The only difference is that the coercer, unlike the passive observer, can give instructions to the coerced voter. Now, to prove strong participation privacy and deniable vote updating, we extend the sequence of games in the vote privacy proof as follows. We add a new Game 4' between Game 4 and Game 5, in which the posting trustee PT exploits the ZK property to simulate its NIZKP π_Enc. Moreover, we also specify that "abstention" is a choice that the ideal functionality can output. Then, under the additional assumptions that PT is honest (PP4) and that the adversary cannot monitor the channels between honest voters and PT (PP5), it follows that Game 8 and Game 9 are computationally indistinguishable.
It follows that a coercer who demands that a voter abstain from voting or vote for a particular candidate can only deduce information about whether the coerced voter obeyed from the final election result. Since the choice which denotes abstention is the same for all voters, the election result in DeVoS does not reveal more information about the coerced voter's decision (i.e., abstention or not, or vote for the coerced candidate or not) than what can be derived from the output of the ideal functionality. This argument establishes Theorem D.2 and Theorem D.3.

E INSTANTIATION OF DEVOS WITH CONDITIONAL PRIVACY
We describe our ElGamal-based instantiation of DeVoS.
Hardness assumption. The decisional Diffie-Hellman (DDH) assumption in G = ⟨g⟩ states the hardness, for ppt adversaries, of the following problem: on input (g, g^a, g^b, z) ∈ G^4, decide whether z = g^{ab} or z is a random element of G.
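The encryption scheme whose security rests on DDH can be illustrated with a toy exponential ElGamal supporting the re-randomisation that DeVoS's dummy ballots rely on. This is a sketch with deliberately insecure toy parameters and no NIZKPs; all function names are ours, not the paper's.

```python
import random

# Toy Schnorr group: P = 2Q + 1 with Q prime; g = 4 generates the
# order-Q subgroup of quadratic residues mod 23. NOT secure sizes.
P, Q, G = 23, 11, 4

def keygen():
    sk = random.randrange(1, Q)
    return sk, pow(G, sk, P)

def enc(pk, m, r=None):
    """Exponential ElGamal: (g^r, g^m * pk^r), additively homomorphic in m."""
    r = random.randrange(1, Q) if r is None else r
    return (pow(G, r, P), pow(G, m, P) * pow(pk, r, P) % P)

def rerand(pk, ct):
    """Multiply by a fresh encryption of 0: same plaintext, new randomness."""
    s = random.randrange(1, Q)
    a, b = ct
    return (a * pow(G, s, P) % P, b * pow(pk, s, P) % P)

def dec(sk, ct):
    """Recover g^m, then take a small discrete log (m must be small)."""
    a, b = ct
    gm = b * pow(a, Q - sk, P) % P  # a^(Q-sk) = a^(-sk) in the order-Q subgroup
    return next(m for m in range(Q) if pow(G, m, P) == gm)

sk, pk = keygen()
ct = enc(pk, 3)
assert dec(sk, rerand(pk, ct)) == 3  # re-randomisation preserves the vote
```

In DeVoS, PT's dummy ballots are exactly such re-randomisations of a voter's last ciphertext, which is why they are indistinguishable from real updates.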
Public-key encryption scheme. We instantiate this central primitive with exponential ElGamal PKE [23].

Proof of Theorem G.2. Following the proof of Theorem 7.1, because DP is symmetric (in contrast to OSDP), it is sufficient to show that Eq. 1 is violated for a dataset with a single voter. Similarly to Eq. 4, if D_1 = {⊥}, D_2 = {v}, and S = {no ballot was recorded}, then on the left side of Eq. 7 the probability is (1 − p) while on the right it is zero, which is a contradiction if p ≠ 1. Thus, DeVoS only satisfies DP when there is a ballot (fabricated dummy or legitimate) for every voter. Considering the relaxed (ε, δ)-DP, the additive term δ could cover the above case when 1 − p ≤ δ. However, in practice, δ should be extremely small, as it should only capture sporadic cases. Indeed, the best practice is to set δ to the inverse of the size of the dataset. This could only be achieved by setting p very close to 1, which does not offer any meaningful improvement over the original DeVoS protocol. □

G.3 Multi-submission phases
Proof of Theorem 7.2. The guarantee OSDP provides is Eq. 1, i.e., the probability distribution on the right side (when no vote is cast) must be scaled up sufficiently (by e^ε) to cover the probability distribution on the left side (when c votes are cast). The probabilities of the numbers of recorded ballots are illustrated in Fig. 7 after ten submissions when c = 1 and p ≈ e^{−1}, where the blue and red distributions correspond to the left and right side, respectively.
More generally, when c votes are cast during k submission phases, the probabilities for each output are shown in Table 4. The first column is the number of recorded ballots, and the second and third columns are the events' probabilities when 0 and c votes were cast, respectively.
Focusing on the probability parts in one row, we can see that to scale the second column (i.e., blue in Fig. 7) above the third one (i.e., red in Fig. 7), one must multiply the former by p^{−c} (= e^{εc}) element-wise.
Focusing on the binomial parts in one row, we can see in Eq. 8 that the expression in the second column is greater than (or equal to) the one in the third when j ∈ {0, 1, . . . , k − c}. Moreover, the difference is largest, by a factor of C(k, c), when j = 0, and smallest (equal) when j = k − c.

(n · (q′ − q) + 1)-OSDP within a submission phase, where the presence of a ballot is protected and q is the potential maximum fraction of active voters in a submission phase.
Proof. The proof is analogous to the proof presented for strategy 3. Due to the independence of votes, it is sufficient to focus on a single voter with vote v and ballot b. However, contrary to strategy 3, the probability of generating a dummy ballot for this particular voter is not independent of the other voters: when x voters have voted, Pr[b ≠ ⊥ | v = ⊥] = (n · q′ − x)/(n − x). Substituting this formula into Eq. 5, we get the left side of Eq. 9. As this should hold even in the worst case (i.e., when the maximum number of votes is cast, including v), we can replace x with n · q − 1, so that the lower bound is highest. After some arithmetic, we can see that the right side of Eq. 9 is the same as the desired formula in the theorem.

□
We leave extending this result to multi-submission phases to future work.
Table 4: Two output distributions of the number of ballots: the first column is the number of recorded ballots, while the second and third columns are the probabilities of that event when 0 and c votes were cast, respectively.

Figure 1 :
Figure 1: The submission phase of DeVoS exemplified. NIZKPs are omitted. Note that the ciphertexts (e_1^3, e_2^3, e_3^3), including two encryptions for a candidate and one encryption of 0 for abstention, are the input to the tallying phase.

Figure 4 :
Figure 4: Ideal privacy functionality for a voting protocol.

Figure 7 :
Figure 7: Two output distributions of the number of ballots when p = 0.37 after ten submission phases. Blue and red correspond to one vote and no vote 'after coercion', respectively.

C(k, c + j) / C(k − c, j) = k(k − 1) · · · (k − c + 1) / ((c + j)(c + j − 1) · · · (j + 1))   (8)

Thus, due to the last row, where the binomial parts are equal, one must set e^ε = p^{−c}, i.e., ε = log(1/p^c) = c · log(1/p), to satisfy Eq. 1. □

G.4 Strategy 4

Theorem G.3. If we assume that a fraction q of the voters vote in a submission phase and PT generates dummy ballots for uniformly randomly selected n · (q′ − q) voters who do not vote in the current submission phase, then strategy 4 satisfies log((n · (1 − q) + 1)/(n · (q′ − q) + 1))-OSDP within a submission phase, where the presence of a ballot is protected.
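A quick numeric sanity check of the binomial comparison behind Eq. 8 and Table 4, assuming (our reading) that the two probability columns carry the binomial coefficients C(k − c, j) and C(k, c + j):

```python
from math import comb

def ratio(k, c, j):
    """C(k, c+j) / C(k-c, j); equals k(k-1)...(k-c+1) / ((c+j)...(j+1))."""
    return comb(k, c + j) / comb(k - c, j)

k, c = 10, 3
ratios = [ratio(k, c, j) for j in range(k - c + 1)]
assert all(r >= 1 for r in ratios)                      # dominance on the support
assert ratios[0] == comb(k, c)                          # largest gap at j = 0
assert ratios[-1] == 1                                  # equality at j = k - c
assert all(a >= b for a, b in zip(ratios, ratios[1:]))  # gap shrinks with j
```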

For each voter V_i there exists a ballot vector e⃗_i on the public bulletin board PBB, to which new ballots are appended. The submission phase in DeVoS is divided into time intervals, each of which is identified by an (increasing) integer t. Now, if V_i submits a ballot b_i^t to PT in interval t, then PT appends that ballot to e⃗_i at the end of phase t. After the interval, V_i can individually verify whether her ballot was appended.

Table 2 :
Threat scenario of DeVoS. The general assumptions on trusted parties (EA ∧ PBB (∧ SBB)) and private channels (EA → V_i) are protocol-independent and thus implicitly assumed in the following columns. X → Y: the adversary learns when X sends a message to Y, but does not learn these messages. X ⇒ Y: the adversary learns neither when X sends messages to Y nor these messages. X ↔ Y: iff X → Y and Y → X.
(1) Pick favorite choice c ∈ C and encrypt it under pk to obtain e′ ← Enc(pk, c; r).
(2) Read the latest status of the vector e⃗_i, including the currently last ciphertext e in e⃗_i, from the bulletin board PBB.
(3) Use ssk_i, the vote c, and the randomness r to create a NIZKPoK π_Enc for the tuple (pk, vk_i, e, e′, C).
(4) Send (e′, π_Enc) to the posting trustee PT.
Then, analogous to the basic voting protocol (see Sec. 3.1), the voter V_i can individually verify whether her ballot (e′, π_Enc) is contained in e⃗_i as published by PT on PBB. Notably, the voter can perform this check immediately after the current micro submission phase ends and the voter exits the virtual booth. If e⃗_i does not contain (e′, π_Enc), then the voter can post a complaint to PBB.
Posting trustee. The posting trustee PT runs the following program (in a given micro submission period τ_j):
(1) If voter V_i sends a ballot (e′, π_Enc), then:
(a) Ignore the ballot if π_Enc is not valid w.r.t. pk, vk_i, e, where e is the latest ciphertext in e⃗_i, or if V_i has already sent a valid ballot in τ_j.
(b) Otherwise, append (e′, π_Enc) to e⃗_i on PBB.
(2) If voter V_i has not sent a valid ballot during τ_j, then:
(a) Compute e′ ← ReRand(pk, e; r), where e is the latest ciphertext in e⃗_i.
(b) Use r to compute a NIZKPoK π_Enc for the tuple (pk, vk_i, e, e′, C).
(c) Append (e′, π_Enc) to e⃗_i on PBB.
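The posting trustee's per-phase behavior described above can be condensed into a few lines. This is a sketch with hypothetical interfaces: verify_nizk and rerand stand in for the NIZKPoK check and ReRand, and the dummy's own proof generation is elided.

```python
def run_micro_phase(board, submissions, pk, verify_nizk, rerand):
    """One micro submission phase of PT. `board` maps a voter id to her
    ballot vector: a list of (ciphertext, proof) pairs on PBB."""
    for voter, vec in board.items():
        last_ct, _ = vec[-1]
        ballot = submissions.get(voter)
        if ballot is not None and verify_nizk(pk, last_ct, ballot):
            vec.append(ballot)               # real (possibly updated) vote
        else:
            vec.append(rerand(pk, last_ct))  # indistinguishable dummy cover
```

Because every vector grows by exactly one entry per phase, an observer cannot tell updated votes from dummies, which is the heart of deniable vote updating.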
• Verify‡(m, c, pk, d): takes a message m, a commitment c w.r.t. the public key pk, and an auxiliary value d as inputs and outputs a bit. The algorithm checks whether the opening (m, d) is valid w.r.t. c and pk.

Table 3 :
Experimental results for proving R_Enc in DeVoS depending on k, in milliseconds. For the mix tallying, |C| = 2^k; for the homomorphic tallying, |C| = k − 1. On the left part, we report the running times of computing the new ciphertext, proving the relation, and verifying the relation. On the right part, we only compare the cost of the range proofs, instantiated classically or via log-proofs.