Time-Deniable Signatures

In this work we propose time-deniable signatures (TDS), a new primitive that facilitates deniable authentication in protocols such as DKIM-signed email. As with traditional signatures, TDS provide strong authenticity for message content, at least for a sender-chosen period of time. Once this time period has elapsed, however, time-deniable signatures can be forged by any party who obtains a signature. This forgery property ensures that signatures serve a useful authentication purpose for a bounded time period, while also allowing signers to plausibly disavow the creation of older signed content. Most critically, and unlike many past proposals for deniable authentication, TDS do not require interaction with the receiver or the deployment of any persistent cryptographic infrastructure or services beyond the signing process (e.g., APIs to publish secrets or author timestamp certificates). We first investigate the security definitions for time-deniability, demonstrating that past definition attempts are insufficient (and indeed, allow for broken signature schemes). We then propose an efficient construction of TDS based on well-studied assumptions.


INTRODUCTION
Many communication systems use cryptographic signatures to verify the authenticity of data sent from one party to another over untrusted networks. While cryptographic authentication is standard in end-to-end encrypted messaging systems, it is also increasingly being deployed within traditionally non-encrypted protocols such as SMTP email. Specifically, in the email setting, protocols such as DKIM, DMARC and ARC [12] are routinely used to add non-repudiable digital signatures to email in transit between Mail Transfer Agents (MTAs): these signatures allow recipient spam filtering software to verify that a message originates from the claimed sender.
While cryptographic authenticity is valuable for preventing spam and spoofing of email traffic, DKIM signatures have been re-purposed for goals that may not have been anticipated by the designers of these protocols. For example, news organizations routinely verify the authenticity of leaked or stolen email collections using DKIM signatures [31,38,41]: this is possible because DKIM signing keys are long-lived, and the protocol's non-repudiable signatures can be verified long after an email has been received and processed. Organizations such as the Associated Press and Wikileaks even publish detailed instructions and tools for verifying the authenticity of DKIM signatures in leaked and stolen email corpora. Since email signing is implemented by commercial mail providers rather than end-users, users of popular services cannot opt out. These developments have ignited a technical debate around the desirability of long-term non-repudiability guarantees in widely-used protocols such as email [23], and raised questions around the value of adding cryptographic deniability to these systems.

The need for deniability. Cryptographic deniability is a property that allows communication participants to disavow authorship of messages, e.g., in the event that they have been leaked or stolen. This feature has frequently been incorporated in interactive messaging protocols [1,8,42], which historically realize deniability through the use of interactive key exchange protocols and symmetric authentication primitives such as MACs. Achieving deniable authentication in email authentication protocols such as SMTP/DKIM is more challenging since these protocols support non-interactive and asynchronous delivery via multiple intermediate recipients. Thus interactive protocols are ruled out, and even designated-verifier solutions can be more challenging due to the presence of intermediaries.
Despite these challenges, the problem of incorporating deniability for the email setting has recently received some attention. For example, in Usenix Security 2021, Specter et al. proposed two technical replacements for DKIM signing that are designed to facilitate deniability. Both protocols ensure that messages are digitally signed to enable sender-authenticity verification but feature a process wherein senders, recipients, and even third parties can create deliberate forgeries after the necessary anti-spam and spoofing checks have been completed. The two protocols employ different techniques: the first relies on the sender to author forgeries on request and/or publish expired secret keys, while the second employs a trusted time server that publishes cryptographic timestamp certificates that allow forgery of signatures after some period of time has elapsed. Others have made even simpler proposals wherein DKIM providers simply rotate and publish existing DKIM signing keys on a periodic basis [11,23]. Each proposal seeks to build signatures that are unforgeable for a period of time necessary to support short-term transport checks, but become forgeable after this period.
The major limitation of the proposals above is that forgery requires the active cooperation of signers, or else depends on the continuous operation of new trusted infrastructure such as "time servers" that publish keys or timestamp certificates on a periodic basis [40]. The challenge in email systems is that the end-users affected by non-repudiable authentication (e.g., Gmail customers) rely on third-party providers to deploy these infrastructure services and make them available for the often-controversial purpose of forging past email. If this infrastructure is not deployed, then even the Internet-wide adoption of a deniable signature standard will not provide deniability in practice. What is needed is a signature scheme that can be used in place of a normal signature scheme within protocols; provides strong authenticity for a period of time; and then subsequently becomes plausibly forgeable by any party who simply obtains such a signature, with only the requirement that parties have an (approximately) shared view of time. We refer to such signatures as time-deniable signatures.

Properties of time-deniable signatures. Time-deniable signatures operate much like a normal signature scheme, but with some important differences. Like standard digital signatures, time-deniable signatures are designed to be secure and non-repudiable for at least some time period following signing. The duration of this time period is strictly limited, however: any party who obtains a signature on some message m can use it as input to a new forging algorithm called AltSign that, after enforcing some approximate time delay, will output a forgery on a new chosen message m′. A key requirement of these schemes is that neither signing nor forging should require the cooperation of any other party or infrastructure.
This time delay is therefore enforced using a specific computational assumption: the AltSign algorithm requires the forger to perform a pre-specified number of sequential operations, where the minimum time required for this calculation is roughly as long as the desired length of the unforgeable phase.
Of course, the ability to forge signatures has no bearing on deniability if the resulting forgeries are easily distinguishable from authentic signatures. To achieve plausible deniability, we therefore require that forgeries are indistinguishable from signatures produced using the ordinary signing algorithm, and in fact that even linking forgeries to the specific signatures that were used to create them should be challenging. This indistinguishability property is a fundamentally novel property of this work, that is not present in previous attempts to solve this problem [3,24]. It also has important follow-on implications: since forgeries are indistinguishable from true signatures, this implies that any forgery must be useful to create still further forgeries.
Finally, we wish time-deniable signatures to be useful in practice. Given only the description above, time-deniable signatures would be of limited usefulness: the revelation of a single signature would allow for an unlimited number of forgeries, rendering the signing key useless for authenticating further messages. To remove this limitation, we slightly relax our forgery and unlinkability requirements. Our constructions allow for renewability via an additional timestamp field that is specified in the signing algorithm and carried with the signature. Forgers can produce a new signature on a message m′ provided the new signature carries a timestamp t′ ≤ t. For example, in a practical deployment, the timestamp can be set to correspond to some real-world time counter, and recipients can choose to accept as authentic any signature with a timestamp greater than T − Δ, where T is the current time and Δ is the minimal expected time needed to compute a forgery. This approach requires only that honest senders and receivers possess loosely synchronized clocks.
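As a concrete illustration of this acceptance rule, a recipient's freshness check might look as follows. This is a minimal sketch of our own: the function name and the 15-minute window are illustrative choices, not values fixed by the scheme.

```python
DELTA = 15 * 60  # illustrative unforgeability window in seconds (our choice, not prescribed)

def accept_as_authentic(sig_timestamp: float, now: float, delta: float = DELTA) -> bool:
    """Accept a cryptographically valid signature as authentic only if its
    timestamp is recent enough that no forger could have produced it yet."""
    return sig_timestamp > now - delta

# A signature stamped five minutes ago is inside the unforgeability window:
assert accept_as_authentic(sig_timestamp=999_700.0, now=1_000_000.0)
# One stamped an hour ago could already be a forgery, so it is rejected:
assert not accept_as_authentic(sig_timestamp=996_400.0, now=1_000_000.0)
```

Note that the check tolerates loosely synchronized clocks: small skew only shifts the effective window, it does not break verification.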
Our contributions. In this work we investigate the problem of building time-deniable signatures. We first develop formal definitions for this new primitive, then present a construction based on several efficient components. Finally, we implement our approach and show that it is practical enough to deploy today. Concretely, we provide the following contributions:

Defining time-deniable signatures (TDS). We formalize the concept of time-deniable signatures and propose strong security definitions for this new primitive. Defining security for time-deniable signatures is surprisingly difficult: while developing our definitions, we found that previous efforts to formalize the security of deniable authentication schemes fall short. For example, we show that the security definitions for some related primitives [24] contain subtle weaknesses that admit practically-insecure constructions. To provide evidence for the robustness of our definitions, we prove that our definitions are strictly stronger than these earlier ones.

Efficient constructions. To demonstrate that the TDS primitive is practical, we propose an efficient construction of time-deniable signatures based on well-studied cryptographic assumptions. Our constructions improve on previous work [24] in that they do not require any a priori bound on the number of time epochs that the scheme can handle. We also show that TDS can be realized using standard assumptions in pairing-based cryptography and sequential puzzles based on the repeated-squaring assumption [36], without the need for zkSNARKs or other heavyweight constructions.

Implementation and performance experiments. To further motivate the usefulness of TDS in systems applications, we implement our TDS constructions and show that the scheme has practical runtime and bandwidth performance for the applications we consider.
In particular, we show that our scheme has a fast key setup time, which is particularly important for a scheme with an unbounded number of time epochs.

Defining Time-Deniable Signatures
We study signature schemes where signatures remain valid for a short period of time after creation. Specifically, we consider the notion of an unforgeability period that starts when a signer generates a signature for a message m using its signing key sk and the signing algorithm Sign. Once the unforgeability period elapses, however, any participant in the system can compute a "fake signature" (i.e., a forgery). To allow computation of forgeries, we consider an alternate signing algorithm AltSign that does not require the signing key sk. Intuitively, as long as the signatures generated by Sign and AltSign appear indistinguishable, such a notion provides deniability after the unforgeability period, since a signer can claim that a signature attributed to them could have been generated by anyone.
Key Challenges in the Definition. There are several key considerations for formalizing the above intuition and defining timedeniable signatures.
Challenge I: Preventing pre-computation of forgeries. Recall that any party can compute a forgery (via the algorithm AltSign) after the unforgeability period expires. But how do we ensure that a party cannot execute AltSign in advance, thereby gaining the ability to sign any message within the unforgeability period? One natural approach is to bind signatures to some unpredictable cryptographic beacon, perhaps generated at regular intervals by a centralized server or a blockchain [17,34]. For example, when signing a message m (via Sign or AltSign) one might actually sign the pair (m, b), where b is a beacon released at a time known to the receiver. This value can then be used as the "seed" to allow forgery using AltSign, and verifiers can use the known publication time of b to determine whether the signature is still within the unforgeability period. Such models have been considered in prior works, including the TimeForge scheme of Specter et al. [40] and a recent proposal by Bonneau et al. [3].
In this work, we seek to avoid the use of unpredictable timestamps or centralized servers. In our notion, the Sign and AltSign algorithms do indeed take as input a timestamp t. Assuming that receivers possess loosely synchronized clocks, these timestamps can be used to verify that a received signature was authored within the unforgeability period. Crucially, however, these timestamps are simply the output of a predictable clock operated by the signer, which means that we require no security properties of this input, nor do we require unpredictable beacons or new infrastructure to produce them. To prevent pre-computation, we instead model AltSign such that it requires a valid signature on some pair (m, t) as input. This ensures that forgers do not have the necessary input(s) to pre-compute forgeries until they obtain a signature. (Indeed, we show that the need for AltSign to use an existing signature, or a portion thereof, to produce a forgery is seemingly inherent if we do not want to use secure infrastructure; we elaborate on this point in Appendix J.)

Challenge II: Selecting forged timestamps. In the proposal above, AltSign requires a valid signature on some time t (and any message) in order to compute a forgery. Naturally, the resulting forgery will also need to contain its own timestamp t′. The selection of t′ is crucial, however: if this forged timestamp can be chosen arbitrarily by the forger, then an attacker may be able to forge new signatures that appear (to an honest receiver) to be within the unforgeability window, even when the original signature was not. One obvious solution to this problem is to restrict the forged timestamp to t′ = t. Unfortunately, this restriction weakens the deniability properties of the signature scheme: a signer can deny having signed a particular message at time t, but it cannot deny having signed some message at time t.
To achieve stronger deniability, where a signer can also deny having signed any message at time t, we further strengthen the AltSign algorithm. Namely, we require that on input a signature with timestamp t, AltSign can compute forgeries for any message m′ and any timestamp t′ ≤ t.

Challenge III: Avoiding strong clock synchronization. The closely related prior work of epochal signatures by Hülsing and Weber [24] considers a security notion that crucially relies on the various participants having synchronized clocks. Roughly, in an epochal signature scheme, (real) time is divided into discrete epochs, where a new key is generated at the start of every epoch. Signatures are associated with the epoch they were generated in, and the unforgeability requirement states that no adversary can forge signatures for an epoch e during epoch e. As we show in §3, the security definitions for epochal signatures are fragile: there exist epochal signature schemes that are secure under the given definitions and yet become completely insecure when clocks are even slightly out of sync. This problem stems from the fact that the unforgeability notion proposed for the primitive puts strict time limits on the adversary while it queries a signing oracle. We show that if enforcement of these query restrictions is violated (even slightly) by a real-world signing oracle at epoch e, an epochal signature scheme can become catastrophically insecure for all future epochs.
Unfortunately, avoiding such outcomes is not easy, and in this work we seek to strengthen our security definitions to avoid such issues. We do this in two ways. First, unlike [24], our definitions model the unforgeability period computationally, through the widely-adopted technique of bounding the number of sequential computation steps the adversary may compute [6,16,35,36,43]. While this still requires conversion to real time when used in the real world, it does not embed the conversion into the security definition. Much more importantly, our definition allows the adversary to participate in a "pre-processing" phase to ensure the robustness of our notion in scenarios where there may be clock synchronization issues. During this phase, the adversary is given free rein (within only a polynomial time bound) to query the signing oracle and forge signatures. This phase significantly loosens the restrictions on the adversary, allowing them to query for signatures and run the AltSign algorithm (or any other process) as many times as they wish. Once the pre-processing phase is complete, the adversary enters a second forgery phase in which their runtime is more strictly bounded. Our sole restriction is that the forgery produced in the second phase must be computed on a timestamp t* that is greater than any timestamp queried during the pre-processing phase.
Our Definition. We are now ready to provide an (informal) definition of time-deniable signatures. We refer the reader to the technical sections for more details.
The protocol is parameterized by Δ, the duration of the unforgeability period, and described by the algorithms KeyGen, Sign, AltSign and Verify. The KeyGen and Verify algorithms are the same as in standard signature schemes, while the Sign algorithm, also similar to the standard notion, takes as input a message m and timestamp t to generate a signature σ on (m, t). The main new component is the algorithm AltSign, which takes as input a message m′, a timestamp t′, and a signature σ on some (m, t) such that t′ ≤ t, and uses the verification key to generate a signature σ′ on (m′, t′). For correctness, we require that AltSign generates a verifying signature as long as it is given as input the output of the Sign algorithm, or (repeated applications of) the AltSign algorithm. We now provide an overview of the two key security properties required by our notion.
Unforgeability. This property captures the notion that no adversary capable of computing fewer than Δ sequential steps can generate a forgery. Specifically, we allow an initial pre-processing stage for the adversary in which it is not bounded in the number of sequential steps, gathering as much information as it can. At the end of this stage, say at timestamp t*, it passes any information along to the next stage, where an adversary that runs in at most Δ sequential steps must produce a signature for a message with a timestamp t > t*.
Deniability. This property asks an adversary to distinguish between a "fresh" signature generated using Sign and a signature generated using AltSign. We formalize this by defining two experiments, where the adversary is allowed to specify a tuple (m1, t1, σ1 = Sign(m1, t1, sk), m2, t2) with t2 ≤ t1. In the first world, the output is simply the signature σ2 = Sign(m2, t2, sk), whereas in the second world, the output is σ2 = AltSign(m2, t2, σ1, vk). We say a TDS is deniable if no computationally bounded adversary can distinguish the two worlds with significant probability.
We refer to the above description of deniability as "1-hop deniability", i.e., a signature generated via Sign is indistinguishable from one generated via a single application of AltSign. In the technical sections, we extend this notion to "k-hop deniability", which intuitively corresponds to indistinguishability between a signature generated via Sign and one generated via k applications of AltSign.

Construction
Time-Deniable Signatures from Delegatable Functional Signatures. Our construction centers around the following natural idea: with each signature produced by the signer, we leak a restricted signing oracle that can be used to forge later signatures. A signing oracle, as the name suggests, allows a party with access to the aforementioned oracle to sign any message of its choice. For instance, the signing key can be viewed as an oracle since it allows one to sign any message of their choice. A restricted signing oracle limits the messages that can be signed. Thus, continuing with our analogy of signing keys corresponding to an oracle, a restricted signing oracle corresponds to a signing key that is restricted in a fine-grained manner.
When the Sign algorithm generates a signature on a message m and timestamp t, it also reveals a restricted signing key sk_t that can be used to sign any message m′ with timestamp t′ ≤ t. Such a key can then be used by the AltSign algorithm to create forgeries. Revealing the restricted key with the signature, however, allows anyone in possession of the signature to create forgeries during the unforgeability period. To prevent this, we need to hide this restricted signing key until after the unforgeability period, and we do so using time-lock puzzles [36]. Intuitively, a time-lock puzzle allows one to "lock" a secret for a predetermined amount of time (its time parameter). Thus, the output of the Sign algorithm will consist of the signature σ_{t∥m} along with a time-lock puzzle containing the secret sk_t, computed with time parameter Δ. We note that a similar approach has been considered in constructing notions such as epochal signatures [24], and we refer the reader to Section 3 for a more detailed comparison.
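The sequential delay behind a time-lock puzzle can be sketched with a toy version of the Rivest-Shamir-Wagner repeated-squaring construction [36]. The parameters below are far too small to be secure and the XOR masking is purely illustrative; the point is the asymmetry: the puzzle creator shortcuts the computation using φ(n), while a solver without the factorization must perform T sequential squarings.

```python
import secrets

# Toy RSW-style time-lock puzzle. NOT secure: demo primes and tiny T.
p, q = 1000003, 1000033
n, phi = p * q, (p - 1) * (q - 1)
T = 1000  # "difficulty": number of sequential squarings the solver must do

def lock(secret: int, a: int = 2) -> int:
    e = pow(2, T, phi)            # fast path: reduce the exponent mod phi(n)
    return secret ^ pow(a, e, n)  # mask the secret with a^(2^T) mod n

def unlock(puzzle: int, a: int = 2) -> int:
    x = a % n
    for _ in range(T):            # forced sequential work (no known shortcut)
        x = x * x % n             # x becomes a^(2^T) mod n
    return puzzle ^ x

key = secrets.randbelow(1 << 30)
assert unlock(lock(key)) == key
```

In the construction, the locked value would be the restricted key sk_t and T would be calibrated so the squaring loop takes roughly Δ of wall-clock time on available hardware.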
To implement restricted signing keys, we turn to the notion of functional signatures (FS) [4,5,9]. Functional signatures are equipped with functional keys sk_f (instead of "regular" signing keys), such that a key sk_f allows one to sign f(m) for any message m. We consider the following specific function for our application: f_t takes as input a message m and a timestamp t′, and outputs t′∥m if t′ ≤ t (and ⊥ otherwise). We call such functions prefix functions (the function prepends the timestamp to the message). It is evident from the above description that with a functional key sk_{f_t} one can generate a signature for any message and any timestamp t′ as long as t′ ≤ t. For our TDS construction, we leverage specific properties of the functional signature scheme. We provide a more general (and detailed) definition in the technical sections, but for the purposes of this overview, we discuss the relevant properties of functional signatures for the specific function described above: (i) delegatability: given a key sk_{f_t} for function f_t, using only public parameters, one can derive a key sk_{f_{t′}} for a function f_{t′} if t′ ≤ t; (ii) key indistinguishability: it should be computationally infeasible to differentiate between a fresh key sk_{f_t} and a derived key; and (iii) unforgeability: it should be computationally infeasible to generate a signature σ_{t∥m} unless one holds a key sk_{f_{t′}} where t′ ≥ t. While delegatability has previously been studied for functional signatures, the notion of key indistinguishability is new to our work. The latter is crucial to achieving deniability.
Putting things together, we have:

Sign. On input a message m and timestamp t, the Sign algorithm generates the key sk_{f_t} (using the master secret key; see the technical section for details), and uses it to compute the signature σ_{t∥m}. It then encrypts the key sk_{f_t} within a time-lock puzzle with time parameter Δ.

AltSign. On input a message m′, a timestamp t′, and a signature σ_{t∥m}∥TimeLock(sk_{f_t}), the AltSign algorithm first solves the time-lock puzzle to obtain sk_{f_t}. It then uses the delegation functionality to derive a key sk_{f_{t′}} from sk_{f_t}, and follows the description of the Sign algorithm.

A potentially useful property of the above approach is that the sequential part of the computation performed by AltSign, namely solving the time-lock puzzle, can be reused to compute many forgeries in parallel: once the restricted signing key is obtained (a one-time cost), it can be used to compute signatures in parallel.
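The flow above can be sketched end to end with stand-ins of our own choosing, which are emphatically not the real construction: an HMAC under a per-timestamp key plays the role of the functional signature, a hash chain provides downward delegation, and the time-lock puzzle is elided (the restricted key is attached directly, with a comment marking where the puzzle would go). The sketch is purely structural, and it loses public verifiability; what it does show is the key point that a forgery produced via the AltSign path is byte-identical to a fresh signature.

```python
import hashlib, hmac

MAX_T = 16  # toy bound on timestamps, needed only by the hash-chain stand-in

def restricted_key(msk: bytes, t: int) -> bytes:
    # Lower timestamps lie deeper in the hash chain, so k_{t'} is
    # derivable from k_t exactly when t' <= t.
    k = msk
    for _ in range(MAX_T - t):
        k = hashlib.sha256(k).digest()
    return k

def delegate(k_t: bytes, t: int, t_new: int) -> bytes:
    assert t_new <= t, "can only delegate to earlier timestamps"
    for _ in range(t - t_new):
        k_t = hashlib.sha256(k_t).digest()
    return k_t

def sign(msk: bytes, m: bytes, t: int):
    k_t = restricted_key(msk, t)
    sig = hmac.new(k_t, t.to_bytes(2, "big") + m, "sha256").digest()
    # A real scheme would attach TimeLock(k_t, delta) here, not k_t itself.
    return (m, t, sig, k_t)

def altsign(old, m_new: bytes, t_new: int):
    _, t, _, k_t = old  # in the real scheme, k_t is recovered by solving the puzzle
    k_new = delegate(k_t, t, t_new)
    sig = hmac.new(k_new, t_new.to_bytes(2, "big") + m_new, "sha256").digest()
    return (m_new, t_new, sig, k_new)

msk = b"master-secret"
forged = altsign(sign(msk, b"hello", t=10), b"goodbye", t_new=9)
# Indistinguishability in the strongest sense: identical to a fresh signature.
assert forged == sign(msk, b"goodbye", t=9)
```

Because forgeries carry their own restricted key, the same code also demonstrates the multi-hop property: the output of altsign can itself be fed back into altsign.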
Intuitively, we prove unforgeability by leveraging the unforgeability of the functional signature scheme and the security of time-lock puzzles, while deniability follows from the key indistinguishability property of the functional signature scheme.

Prefix Function FS from Hierarchical Identity Based Encryption. We construct functional signatures for prefix functions from Hierarchical Identity Based Encryption (HIBE). At a high level, HIBE is an encryption scheme that allows one to encrypt to identities (treated as bit strings in this work) such that only someone in possession of the secret key corresponding to the identity can decrypt messages. The hierarchical nature of the scheme allows for the delegation of keys, i.e., if one is in possession of a key for an identity I which is a prefix of an identity I′, one can derive the key for I′ from the key for I. The identities in our setting will correspond to the nodes of a binary tree, with nodes labeled by binary strings corresponding to their path from the root (left is 0, right is 1).
HIBE schemes can be used generically to construct a signature scheme [7]: to sign a message m, use the HIBE scheme to generate a key for the "identity" m, with that key serving as the signature. Verification is performed by encrypting a random message to the identity m and using the signature as a decryption key to check whether decryption recovers the random message.
In our setting, the identities will be the bit strings corresponding to t∥m. Structuring identities this way has the following benefit: if one is in possession of a HIBE key for a timestamp t, then one can derive keys for t∥m for any message m, since t∥m is "lower" in the hierarchy than t. Therefore, to sign a message at timestamp exactly t it suffices to possess the key for t, which serves as the signing key. But recall from the description of f_t in the prior section that the signing key corresponding to t should allow one to sign messages for any timestamp smaller than t. A naive way to generate the signing key for t would be to concatenate the HIBE keys for all t′ ≤ t, but this approach is clearly infeasible since the signing key would grow linearly with the total number of possible timestamps.
To overcome this efficiency barrier, we leverage the tree structure of the HIBE scheme with the following insight: it suffices to hold a small number of keys as long as we are able to derive keys for any t′ ≤ t. At a high level, the signing key sk_t will consist of keys for all identities that are the left siblings of the nodes along the path from leaf t + 1 to the root, resulting in at most log(T) keys, where T is the total number of timestamps. A detailed description is provided in the technical sections, but here we provide an illustrative example.
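The left-sibling rule can be made concrete in a few lines of code. This is our own illustration: labels are fixed-width bit strings, and we assume t is not the last leaf so that leaf t + 1 exists.

```python
def cover_keys(t_bits: str) -> list[str]:
    """Identities whose HIBE keys make up the restricted signing key sk_t:
    the left siblings of the nodes on the path from the root to leaf t+1.
    Their subtrees cover exactly the leaves 0..t, so at most log(T) keys
    suffice to reach any earlier timestamp."""
    d = len(t_bits)
    succ = bin(int(t_bits, 2) + 1)[2:].zfill(d)  # leaf t+1 (assumed to exist)
    return [succ[:i] + "0" for i, bit in enumerate(succ) if bit == "1"]

def can_derive(t_bits: str, target: str) -> bool:
    # HIBE delegation works iff some held identity is a prefix of the target.
    return any(target.startswith(k) for k in cover_keys(t_bits))

# Matches the Figure 1 example: sk_10 = (sk_0, sk_10).
assert cover_keys("10") == ["0", "10"]
assert can_derive("10", "00") and can_derive("10", "10")
assert not can_derive("10", "11")  # timestamps above t stay out of reach
```

The set has at most one entry per tree level, which is where the log(T) bound on the key size comes from.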
In the HIBE identity tree of Figure 1, the key corresponding to timestamp 10 consists of sk_0 and sk_10. To derive a key for identity 00, one executes the HIBE delegation algorithm on sk_0 to create the key sk_00. In fact, to derive a key sk_{t′} from sk_t for any t′ ≤ t, one can simply use the HIBE delegation algorithm, i.e., there is no need to run the key generation algorithm afresh.
Looking ahead, since we want to allow the adversary to choose the message on which it computes a forgery after it has seen other signatures, we require the HIBE scheme to be adaptively secure, i.e., the adversary can choose the identity it wants to break after seeing keys for other identities. (One can also view the left-sibling key set described above as the nodes on the stack during a depth-first traversal of the identity binary tree when node t + 1 is visited.) HIBE schemes satisfying the necessary requirements can be instantiated, e.g., assuming the Decisional Linear (DLIN) assumption on bilinear groups of prime order [26].

Figure 1: Each node in the tree represents a HIBE secret key sk_id for the identity id. Trace(id = ∅, 10) constitutes the nodes which represent the secret key for timestamp 10, i.e., sk_10 = (sk_0, sk_10). Using this set, all messages with prefixes in the green nodes can be signed.

RELATED WORK
Concurrent work. A concurrent and independent work of Arun et al. [3] also studies a notion similar to time-deniable signatures. Like our work, they make use of sequentially-ordered computation as a means to enforce a time delay during which signatures are unforgeable but after which they become forgeable. However, their work considers a different model than ours. Specifically, their system relies on the use of unpredictable beacons that are presumably released periodically by some trusted outside source. In contrast, we do not rely on any randomness beacons or time servers. Unlike our work, they also explore time-based deniability in proof systems. Our work also has many similarities to that of [40]. In particular, our construction is similar to one of theirs in its use of a HIBE for creating signatures. However, their setting is more limited: they assume that a central server provides key material for forgery, so that if the server is knocked offline, deniability does not necessarily hold. For us, the ability to forge depends solely on seeing the signature itself. This change in model comes with new, subtle challenges in the indistinguishability of forgeries and signatures and in formulating security definitions that account for an adversary who has access to a polynomial-time forgery algorithm.
Epochal Signatures. Our work is closely related to the prior work on epochal signatures [24]. At a very high level, epochal signatures aim to achieve deniability in a manner similar to ours: by leaking a constrained key. In epochal signatures, (real) time is partitioned into discrete epochs with a key update mechanism at the start of every epoch. Any signature generated during epoch e additionally includes the keys for prior epochs, allowing for forgery of signatures from any epoch e′ < e (but not epoch e itself). The epochal signature construction in [24] leaks only a single key, with the property that from the key k_e of epoch e, one can retrieve the key k_{e−i} of epoch e − i with i applications of a "key retrieval" function, but security requires that it is impossible to retrieve keys for epochs e′ > e from k_e.
In the following, we describe some key differences between the two works.
Bounded vs Unbounded Use. Unlike ours, the system proposed in [24] is bounded-use, where bounded means polynomially bounded: the number of epochs that their system can support must be fixed ahead of time as some polynomial in the security parameter. This is an outcome of the run-time of their system setup, which is linear in the number of epochs.
In practice, the granularity of each epoch and the number of epochs must be fixed before the system is initialized, and once the total number of epochs is surpassed, the entire system needs to be reset from scratch. If a system must be reset too often, and resetting is costly (i.e., involves running an expensive key generation algorithm), this may limit the usability of the system. The broad question of bounded vs unbounded use is not new and has been studied in various contexts in cryptography, such as bounded vs unbounded-query chosen-ciphertext secure encryption [14], depth-bounded vs depth-unbounded hierarchical identity-based encryption [27] and homomorphic encryption [19], bounded vs unbounded collusion in functional encryption [22,37], and more. In all of these cases, there are significant challenges and overheads (in terms of assumptions, efficiency, etc.) in going from a bounded system to an unbounded one. As such, we view our construction as a significant asymptotic improvement over [24] that may translate to concrete practical savings for some large parameter sets.

Need for Clock Synchronization. As discussed earlier, the unforgeability notion in [24] requires the participants to have perfectly synchronized clocks. We now demonstrate that if this requirement is not met, the consequences can be catastrophic and result in a compromise of security for all future epochs. Specifically, we construct a secure epochal signature scheme whose unforgeability property can be broken when the clocks are slightly out of sync. We also show that the same scheme, translated to the setting of time-deniable signatures, is not secure under our definition, thus demonstrating that the latter is a strictly stronger notion. In the following, we give an over-simplified presentation of our counterexample to convey the general idea. The full counterexample is more involved (due to technical reasons) and is presented in Appendix K.1.
Intuitively, we exploit the restricted signing oracle in the unforgeability definition of epochal signatures, which prevents an adversary from receiving signatures in any epoch outside of a fixed real-time window of size Δ. Our epochal signature scheme makes use of a special trigger message m*_e that differs per epoch e. If the adversary queries for a signature on message m*_e in epoch e, then they receive some "secret information" from the signing oracle which can be used to recover the signing key. If the message space is large enough and m*_e is chosen uniformly at random, this modification does not make the scheme insecure, as an adversary has only a negligible chance of guessing m*_e. We therefore modify the signing oracle so that, in addition to handing out signatures on messages for epoch e, it time-lock-puzzle encrypts m*_e with a difficulty parameter Δ′ that is larger than Δ but smaller than Δ plus the epoch length. This choice of difficulty parameter ensures that the puzzle cannot be decrypted within the epoch in which it is generated, but can be decrypted just after the epoch concludes. Thus, if there is a clock synchronization issue where the challenger's (the entity generating the signatures) clock is slightly slower, an adversary can decrypt the puzzle to obtain m*_e and query the signing oracle on m*_e to obtain the "secret information". In our actual counterexample, this secret information cannot directly equal the secret key, because the ES scheme must be perfectly deniable even to someone who holds the original signing key. To deal with this, we instead encrypt the signing key with a one-time pad that is 2-out-of-2 additively secret-shared; querying on different trigger messages reveals different shares of the key. Further details can be found in Appendix K.1.
To argue that this scheme is a secure epochal signature scheme when the clocks are synchronized, we note that in an epochal signature scheme, at the start of epoch e+1 two things happen: (i) the key evolution procedure is applied to the secret signing key to generate the signing key for the next epoch; and (ii) public information pinfo is broadcast. Here, pinfo allows anyone to produce signatures for epochs ≤ e without the signing key, such that they are indistinguishable from signatures produced with the real signing key (akin to our definition of deniability). In the above scheme, while secret key material is used to encrypt the signing key, this material is not revealed as part of pinfo, and it does not need to be in order to create indistinguishable signatures (every field of the signature is simulatable). Thus, simply having pinfo does not allow recovery of sk.
We now argue that the above scheme is not a secure time-deniable scheme. Briefly, this is due to the pre-processing phase we allow in the unforgeability definition. In this phase, the adversary can query the same timestamp multiple times (here, timestamps roughly correspond to epochs), and can therefore perform the attack described above: decrypting the time-lock puzzles, making the relevant queries, and using the results to obtain the signing key. The key is then passed on to the "online adversary", who uses it to produce a forged signature. We remark again that the above description is oversimplified; the full counterexample is presented in Appendix K.1.
We briefly summarize some of the different properties of proposed constructions for expiring signature schemes in Table 1.

PRELIMINARIES
We define the depth depth(C) of a circuit C as the length of the longest path in the circuit from an input wire to an output wire. The size size(C) of a circuit corresponds to its number of gates.
Sequential time. In this work, sequential time refers to the non-parallelizable time it would take any circuit to compute a particular function. A function f has sequential time T, or takes T sequential steps, if among all circuits C that correctly compute f, the smallest circuits have depth(C) = T. This notion attempts to capture inherent limitations in computing a function that cannot be overcome by access to more cores or processors.
Time-lock Puzzles. The concept of a time-lock puzzle, or time-lock encryption, was first introduced by Rivest, Shamir, and Wagner [36]. We now briefly give a formal description of a time-lock puzzle.
Table 1: Comparison of a subset of existing constructions that provide a notion of deniability for signatures. tVDF stands for a trapdoor VDF, while RB is a randomness beacon.
Gen(1^λ, Δ, s) → Z: On input a time/difficulty parameter Δ and a solution s ∈ {0,1}^λ, output a puzzle Z.
Sol(Δ, Z) → s: This is a deterministic algorithm that, when given a puzzle Z and the difficulty parameter Δ, produces a solution s.
Correctness. Correctness requires that for all solutions s ∈ {0,1}^λ and difficulty parameters Δ the following holds: Pr[Sol(Δ, Gen(1^λ, Δ, s)) = s] = 1.
Security. This is a variation of the time-lock puzzle definition of [15], where we define security to hold for adversaries of polynomial size instead of super-polynomial size.
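For intuition, here is a minimal sketch of the RSW-style construction behind this definition: Gen embeds the solution quickly using the φ(N) trapdoor, while Sol must perform Δ sequential squarings. The parameters are toy values of our choosing; a real puzzle uses a large random RSA modulus and a secure encoding of s.

```python
import math, random

P, Q = 1000003, 1000033              # toy primes; real ones are ~1024 bits
N, PHI = P * Q, (P - 1) * (Q - 1)

def gen(delta, s):
    """Gen(1^λ, Δ, s) -> Z: fast, using the φ(N) trapdoor shortcut."""
    x = random.randrange(2, N)
    while math.gcd(x, N) != 1:       # ensure x is a unit mod N
        x = random.randrange(2, N)
    # shortcut: reduce the exponent 2^Δ modulo φ(N) before exponentiating
    mask = pow(x, pow(2, delta, PHI), N)
    return (x, s ^ mask)             # the puzzle hides s under the mask

def sol(delta, Z):
    """Sol(Δ, Z) -> s: Δ sequential squarings, no known shortcut."""
    x, blinded = Z
    for _ in range(delta):
        x = x * x % N
    return blinded ^ x

assert sol(1000, gen(1000, 12345)) == 12345
```

Anyone knowing φ(N) can generate (and also solve) puzzles cheaply; everyone else is bound to Δ sequential steps, which is exactly the asymmetry the definition captures.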

Hierarchical Identity Based Encryption
We recall the notion of Hierarchical Identity-Based Encryption (HIBE). A HIBE scheme has the following five algorithms:
Setup(1^λ) → (pp, msk): The setup algorithm generates the master secret key and public parameters.
KeyGen(msk, I) → sk_I: Generates a key for the identity I using the master secret key msk.
Delegate(pp, sk_I, I′) → sk_{I∥I′}: Takes a secret key of some identity I and generates a secret key for the identity I∥I′.
Encrypt(pp, m, I) → ct: The encryption algorithm takes the public parameters, a message, and an identity and outputs the corresponding ciphertext.
Decrypt(sk_{I′}, ct) → m/⊥: The decryption algorithm takes a secret key and a ciphertext and outputs the message if the secret key's hierarchy level allows decryption of the ciphertext.
Remark. Throughout this paper, we will make use of HIBE schemes where Delegate can take in a child identity I′ that is the empty string. In such schemes, sk′_I ← Delegate(pp, sk_I, ε) is a re-randomization of the key for identity I. We note that many HIBE schemes can be modified to have this property [20,26,27].
The notion of security we consider for HIBE is the adaptive variant defined in works by Lewko and Waters [27,29]. For a full description of the security game see Appendix A.

Key-indistinguishability.
We additionally require that all polynomial-time adversaries have at most negligible advantage in distinguishing between keys generated via the KeyGen algorithm and keys generated via the Delegate algorithm, even when given access to the master secret key msk. We define this property using the following HIBE key-indistinguishability game Exp^IND_HIBE.
Setup. The Setup phase is similar to the HIBE security game, except the adversary also gets the master secret key msk. The set S of queried keys, together with their query identifiers, is initialized to be empty. A bit b ←$ {0,1} is sampled uniformly.
Query phase. In this phase the adversary is allowed to adaptively query a key oracle QK(·) and a challenge oracle O_Ch(·,·,·). QK(·) takes as input an identity I, computes sk ← KeyGen(msk, I), selects an identifier id, adds (id, I, sk) to the set S, and responds with (id, sk). O_Ch(·,·,·) takes as input a challenge identifier id and a pair of identities (I_0, I_1) such that I_0 is a parent identity of I_1; here parent identity means that in the hierarchical identity tree (Figure 1), with the master key at the root, I_0 is an intermediate node on the shortest path from I_1 to the root. The oracle checks for id in the set S and checks that id corresponds to a key query on I_0. If no such id is found, the output is ⊥. Otherwise, it computes sk_0 ← KeyGen(msk, I_1) and sk_1 ← Delegate(pp, sk_{I_0}, I_1), and responds with sk_b.
Guess. The adversary outputs its guess b′ for b and wins if b′ = b.
The advantage of the adversary A is defined as Adv^IND_HIBE(A) = |Pr[b′ = b] − 1/2|.

Definition 4.2 (Key-Indistinguishability for HIBE). A HIBE scheme is delegated-key indistinguishable if, for all poly-size adversaries A, the advantage Adv^IND_HIBE(A) is negligible in the security parameter.
The key indistinguishability property can be easily satisfied by many existing HIBE schemes, provided the sub-key components from earlier levels of the HIBE can be re-randomized. Randomization techniques like these have been used to construct anonymous HIBEs in the past [39]. In Appendix G, we show that the prime order variant of the Lewko-Waters HIBE scheme [26] satisfies this property.

TIME-DENIABLE SIGNATURES: DEFINITION
where AltSign^0(vk, (m_0, t_0, σ_0), {}) = σ_0. In words, AltSign^i denotes a signature obtained by applying AltSign i times to a provided signature σ_0 on the message m_0 and timestamp t_0. Then we have the following additional correctness property, which requires that such iterated signatures verify correctly.
Remark. Property 2 assumes that the signatures and "forged" signatures used as input to the AltSign algorithm are computed honestly. One can also consider a stronger notion of correctness, where the correctness of AltSign holds even on input signatures (and "forged" signatures) that may not be honestly computed, but that nevertheless are accepted by the Verify algorithm. We refer to this as robust correctness.
In this work, we focus on the simpler notion and leave the discovery of schemes that satisfy robust correctness to future work.

Security Property: ( , )-Unforgeability
Our unforgeability notion requires that signatures remain unforgeable within a restricted time window. We capture this via the security game below.
Setup. The challenger generates (sk, vk) ← KeyGen(1^λ, T) and gives the verification key vk to the adversary.
Phase 1. The adversary is a tuple of two algorithms, A_0 and A_1.
In this phase, A_0 is allowed to adaptively query a signing oracle O_Sign, defined as follows: on input a message m and a timestamp t, the signing oracle O_Sign(sk, ·, ·) returns the signature σ ← Sign(sk, m, t).
Transfer. The adversary A_0 gives an advice string to the adversary A_1.
Phase 2. The adversary A_1 must respond to the challenger with a forgery, while also being allowed to adaptively query the oracle O_Sign.
The adversary wins if A_1 produces a valid forgery (m*, t*, σ*) satisfying the constraints of the game. Note that the depth of A_0 is allowed to be polynomial in the security parameter, whereas the depth of A_1 is more strictly bounded.

Deniability
Deniability in our scheme comes from the fact that after T sequential time steps, anyone can forge a valid signature under the verification key of the original signer. Consequently, a time-deniable signature scheme should ensure the indistinguishability of signatures generated via the Sign and the AltSign algorithms; otherwise, the original signer could not deny having signed a message at a particular time. We present below a security game to capture this idea. Our notion would be meaningful even if the adversary did not have access to the signing key, but we provide it as well in order to capture more powerful attackers. We now define the security game Exp^IND_DS.
Setup. The challenger generates (sk, vk) ← KeyGen(1^λ, T) and gives both the verification key and the signing key to the adversary A. They also initialize an empty table and sample b ←$ {0,1}.
Query Phase. In this phase, the adversary is allowed to adaptively query a signing oracle O_Sign(sk, ·, ·) and a challenge oracle O_Ch(·,·,·). O_Sign(sk, ·, ·) takes as input a message m and a timestamp t and produces σ ← Sign(sk, m, t). It randomly chooses a new identifier id (not equal to any previously defined identifier), records (id, m, t, σ) in the table, and returns (id, σ). O_Ch(·,·,·) takes as input a tuple (id, m, t) of identifier, challenge message, and timestamp. It checks the table for id; if it is not present, the output is ⊥. Let m′, t′, σ′ be the values associated with id. If t > t′, the output is also ⊥. When the challenge is iterated over a sequence of timestamps, the oracle additionally checks that the sequence is non-increasing and ends no earlier than the challenge timestamp; if any of these checks fails, the output is ⊥. Depending on the bit b, the oracle then returns either a fresh signature computed with Sign on (m, t) or a signature produced by applying AltSign to the recorded signature.
Guess. The adversary outputs its guess b′ for b and wins if b′ = b.

DELEGATABLE FUNCTIONAL SIGNATURES
In this section, we define and construct delegatable functional signatures and define an additional key indistinguishability property for this primitive.
Functional Signatures. We start by recalling the notion of functional signatures as defined by Boyle, Goldwasser, and Ivan [10].
Correctness and Security. Security for a functional signature scheme is the traditional notion of unforgeability, where the adversary is given access to the verification key vk. For completeness, correctness and the full security definition are included in Appendix B.

Key Delegation
In order to create signing keys even without the master signing key, we define an additional PPT algorithm called Delegate. This algorithm takes as input a function f, a corresponding secret key sk_f, and a restriction f′ of f. The output is a secret key sk_{f′} or ⊥. We say that a function f′ is a restriction of another function f if the following holds: let f : X → Y ∪ {⊥}; then f′ has the same domain and codomain as f, and for all x ∈ X either f′(x) = f(x) or f′(x) = ⊥. This captures the ability to create a signing key that can sign some subset of the messages signable with the original key.
FS.Delegate(vk, f, sk_f, f′) → sk_{f′}/⊥: given the verification key vk, a function f, a signing key sk_f, and another function f′, output sk_{f′} if f′ is a restriction of f, else ⊥.
For a delegatable functional signature scheme, the following additional correctness property must hold for all functions f : X → Y ∪ {⊥} supported by FS, for all restrictions f′ of f, and for all x ∈ X where f′(x) ≠ ⊥: signing with a delegated key for f′ must produce signatures that verify just as signatures produced under a freshly generated key for f′ do. The relevance of the delegation property will be demonstrated in our construction. Furthermore, our construction will require yet another property of these delegatable functional signatures.
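For intuition, over a small finite domain the restriction relation can be checked exhaustively. The following toy check (all names are ours) mirrors the definition above, using prefix-style functions in the spirit of Section 6 as the example:

```python
BOT = None  # stand-in for the special output symbol ⊥

def is_restriction(f_prime, f, domain):
    """f' is a restriction of f if, on every input, f' either agrees
    with f or outputs ⊥ (modeled here as None)."""
    return all(f_prime(x) == f(x) or f_prime(x) is BOT for x in domain)

def make_prefix_key_fn(t):
    """Toy analogue of the prefix functions of Section 6: the function
    for threshold t 'signs' any timestamp <= t and refuses later ones."""
    return lambda ts: f"signed:{ts}" if ts <= t else BOT

f5, f3 = make_prefix_key_fn(5), make_prefix_key_fn(3)
assert is_restriction(f3, f5, range(8))       # weaker key: a restriction
assert not is_restriction(f5, f3, range(8))   # stronger key: is not
```

The asymmetry in the last two lines is exactly what FS.Delegate enforces: delegation can only shrink the set of signable messages, never grow it.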
Key Indistinguishability. We would like keys generated via KeyGen and Delegate to appear the same to any adversary, even one that has access to the master signing key and can make adaptive queries. To capture this notion we define the key-indistinguishability game Exp^IND_FS for delegatable functional signatures.
Setup. The challenger runs (msk, vk) ← Setup(1^λ) and gives both the master verification key and the master signing key to the adversary. Let T be a table kept by the challenger, initialized to be empty. The challenger also samples b ←$ {0,1} and keeps this value to itself.
Query Phase. In this phase, the adversary gets to query two different oracles. (1) Key creation oracle O_Key(·), which can be queried on some specific function f. On input a function f, the key creation oracle checks T for keys on the function f. Let i be the largest index associated with a row containing f. It runs sk_f ← FS.KeyGen(msk, f) and records (i+1, f, sk_f).
(2) Challenge oracle O_Ch(·,·,·), where the first input is an identifier and the subsequent inputs are functions f_0, f_1 such that f_1 is a restriction of f_0. The challenger checks T for a row (id, f_0, ·) holding a secret key sk_{f_0}. If no such key exists, the output is ⊥. Otherwise, the oracle computes sk_0 = FS.Delegate(vk, f_0, sk_{f_0}, f_1) and sk_1 = FS.KeyGen(msk, f_1), and returns sk_b.
Guess. The adversary outputs its guess b′ for b and wins if b′ = b.
The advantage of the adversary A is defined as Adv^IND_FS(A) = |Pr[b′ = b] − 1/2|.

Construction for Prefix Functions
We now describe how to create delegatable functional signatures for prefixing functions from hierarchical identity-based encryption. We will be concerned with signatures on functions that concatenate their arguments, i.e., functions of the form f : {0,1}^ℓ × {0,1}^n → {0,1}^{ℓ+n}. For the sake of readability, in the following construction we abuse notation and write t in place of f_t; i.e., FS.Delegate(vk, f_t, sk_t, f_{t′}) is written as FS.Delegate(vk, t, sk_t, t′). We also define the notion of a stack trace, which will be useful in the formal description of the protocol.
Definition 6.3. The stack trace of t, Trace(r, t), is defined as the set of nodes on the stack when executing a depth-first search to find the leaf node t+1 in a binary tree with some root r.
The stack trace can be found efficiently and, as described in the technical overview, gives us the set of at most ℓ identity-key nodes required to derive all keys corresponding to timestamps up to t.
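A minimal sketch of computing Trace(r, t) literally from the definition, with nodes written as root-to-node bit paths (an encoding we choose for illustration; the construction uses the analogous HIBE identities). The DFS below explores right subtrees first, so the nodes remaining on the stack when leaf t+1 is found are exactly the left-child subtrees covering leaves 0..t:

```python
def trace(depth, t):
    """Return Trace(root, t): the nodes still on the DFS stack at the
    moment leaf t+1 is found, in a complete binary tree of the given
    depth whose leaves are numbered 0 .. 2^depth - 1."""
    target = format(t + 1, f'0{depth}b')  # bit path of leaf t+1
    stack = ['']                          # '' denotes the root
    while stack:
        node = stack.pop()
        if len(node) == depth:            # reached a leaf
            if node == target:
                return stack              # <= depth cover nodes remain
            continue
        stack.append(node + '0')          # push left child first ...
        stack.append(node + '1')          # ... so the right child pops first
    return []

# e.g. depth 3, t = 4: two subtrees ('0' covers leaves 0-3, '100' covers
# leaf 4) together cover exactly the timestamps up to t
assert trace(3, 4) == ['0', '100']
```

This full traversal is only for illustration; as the text notes, the same node set can be computed directly (and efficiently) from the bits of t+1.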
We now give a description of how to build the delegatable FS for the prefixing functions described previously. Given a HIBE scheme, we consider identities of the form {0,1}^k for k ≤ D. These identities can be described by a binary tree of depth D with 2^D leaves, where every HIBE identity is either an interior node or a leaf. The identity corresponding to a given node v is defined by the path P_v from the root to v: i.e., let v_0, ..., v_{k−1}, v_k = v be the nodes along the path to v (including v); then the identity is b_1...b_k, where b_i is 0 if v_i is the left child of v_{i−1} and 1 otherwise. Signing a message m for time t corresponds to extracting a key for the identity t∥m from the leaf node t. To generate a prefix key for t, where t < 2^D, for every node v on P_t besides t itself we extract a signing key for the left child of v. These keys, along with an extracted key for t itself, make up the functional key. For correctness, we note that to sign for any time t′ < t we can derive the leaf node t′ if we have a key for an ancestor of t′ in the tree, and every t′ < t has an ancestor that is a left child of some node along P_t. Delegation works for a similar reason. We note that, for the purposes of key indistinguishability, if the intersection of P_t and P_{t′} contains some node v, the key associated with v's left child in sk_t must be re-randomized in sk_{t′}. Verification of a message simply checks the included HIBE key for the identity t∥m by attempting to encrypt and decrypt a random message, as suggested in [7]. The construction is presented in pseudocode in Figure 2.
Theorem 6.4. If HIBE is adaptively secure then the functional signature scheme for prefix functions constructed in Figure 2 is unforgeable.
For a proof of Theorem 6.4 see Appendix E.
Theorem 6.5. If HIBE is key-indistinguishable then the scheme in Figure 2 satisfies the functional signature key-indistinguishability property.
For a proof of Theorem 6.5 see Appendix E.2.

CONSTRUCTION OF TIME-DENIABLE SIGNATURES
This section describes our construction of time-deniable signatures from a key-indistinguishable, delegatable FS for prefix functions and time-lock encryption. To sign a message m at timestamp t, we first use the master signing key to construct a signing key for the function f_t. This key is then used to sign the message, and it is time-lock encrypted to produce a ciphertext that is sent along with the signature. The alternate signing algorithm decrypts the ciphertext, uses the delegate algorithm to produce an appropriate signing key for f_{t′} with t′ ≤ t, and then signs the message and time-lock encrypts the delegated signing key. For the security of the scheme to hold, the parameter Δ for the time-lock puzzle cannot be precisely the same as the forgery time T. The intuitive reason for the difference is that forging involves not just breaking the time lock but also executing other algorithms. Let |A.B| denote the depth of the circuit that computes algorithm B of cryptographic primitive A, and let c(λ) denote the sum |FS.Verify| + |FS.Sign| + |FS.KeyGen| + |TimeLock.Gen| plus a small constant accounting for glue logic. Our construction is described in Figure 3 and uses c(λ) to define Δ. For proofs of unforgeability and deniability see Appendix D.
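The data flow can be sketched with toy stand-ins (these are ours, not the paper's: ToyFS is not a real functional signature scheme, since it verifies with the master secret, and toy_timelock does no sequential work; both exist only to show how Sign and AltSign fit together):

```python
import hashlib, hmac, pickle

class ToyFS:
    """Toy delegatable 'prefix' signer: a key for threshold t may sign
    any (m, t') with t' <= t. An insecure stand-in for the real FS."""
    def __init__(self, msk=b'toy-master-secret'):
        self.msk = msk
    def keygen(self, t):
        return ('fsk', t)                   # key bound to threshold t
    def delegate(self, fsk, t_new):
        assert t_new <= fsk[1], 'can only delegate to earlier timestamps'
        return ('fsk', t_new)
    def sign(self, fsk, m, t):
        assert t <= fsk[1]
        return hmac.new(self.msk, f'{t}|{m}'.encode(), hashlib.sha256).digest()
    def verify(self, m, t, tag):
        good = hmac.new(self.msk, f'{t}|{m}'.encode(), hashlib.sha256).digest()
        return hmac.compare_digest(good, tag)

def toy_timelock(key):       # real scheme: TimeLock.Gen with difficulty Δ
    return pickle.dumps(key)

def toy_unlock(puzzle):      # real scheme: TimeLock.Sol (slow, sequential)
    return pickle.loads(puzzle)

def tds_sign(fs, m, t):
    fsk = fs.keygen(t)                   # key for the prefix function f_t
    return fs.sign(fsk, m, t), toy_timelock(fsk)

def tds_altsign(fs, old_sig, m_new, t_new):
    _, puzzle = old_sig
    fsk = toy_unlock(puzzle)             # a forger pays sequential time here
    fsk2 = fs.delegate(fsk, t_new)       # only timestamps t_new <= t allowed
    return fs.sign(fsk2, m_new, t_new), toy_timelock(fsk2)
```

Note that tds_altsign only ever moves backwards in time: delegation refuses any t_new greater than the threshold recovered from the puzzle, mirroring the restriction t′ ≤ t in the real construction.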

SYSTEM INTEGRATION
We now give a high-level description of a system that could utilize time-deniable signatures: electronic mail. We first define the two main actors in any signature scheme.
Signer: The signer is a party that publishes messages that can later be authenticated. In the setting of email, this is usually a domain owner that sets up a DKIM record to sign outgoing mail, e.g., Google. Instead of using a regular signature scheme, they would run the TDS.Sign algorithm using the mail as the message, the current time as the timestamp, and a signing key produced by the TDS.KeyGen algorithm. Verifier: The verifier is a party receiving the message and looking to verify its authenticity. In our setting, this would be a mail server accepting inbound mail and attempting to verify that the message is from the claimed domain. Using DNS, the mail server pulls the relevant key for verification. In this case, the key is a TDS verification key, and it is used to run the TDS.Verify algorithm. The server would also check when the message was signed, using the time parameter Δ to determine if the message is too stale to check for authenticity.
Based on this they would decide whether to forward mail to users or not.
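As a sketch, the staleness check on the receiving mail server might look like the following (DELTA's value and the function name are our own illustrative choices, not part of any standard):

```python
import time

DELTA = 3600  # assumed forgery window in seconds; a deployment parameter

def authenticating(sign_time, now=None):
    """Treat a signature as evidence of authenticity only while fewer
    than DELTA seconds have elapsed since signing: older signatures are
    forgeable by design and prove nothing, and future-dated timestamps
    are rejected outright."""
    now = time.time() if now is None else now
    return 0 <= now - sign_time <= DELTA
```

A server might still deliver mail that fails this check (the signature is merely stale, not necessarily forged), but it should not treat the signature as proof of origin.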
Note that the algorithm TDS.AltSign never needs to be run by any party: the mere existence of the algorithm is enough to cast doubt on any message signed more than Δ time in the past. We now consider two different adversaries that are common in such a setting. Forger: The forger is a party who sees multiple messages and attempts to construct one that verifies without having access to a signing key. In our setting, a forger gets to see old keys, which would allow them to sign messages "from the past"; but such messages have already expired authenticity and cannot be verified. Therefore, we consider forgers who are trying to sign messages for the current signing window or into the future. In the email setting, this could be a small-scale adversary such as a random hacker spear-phishing someone, or a more well-equipped adversary like a nation-state-funded attacker. Detective: The detective is a party who tries to discover whether some message from the past was sent by the signer or by a different party. The message is guaranteed to have been signed sufficiently far in the past that Δ has already elapsed. In the email scenario, this is equivalent to a reporter who discovers emails, perhaps through a leak, and tries to verify whether they came from the claimed domain. The detective is not a one-shot adversary and may get access to multiple signed messages over time.
We now make some remarks on the forger and the detective. Both may induce the signer to sign messages of their choosing. The forger may do some pre-computation before attempting to attack the scheme, but once the forger decides to attack, they must find a verifying signature before the time window expires. The detective is a long-lived adversary who may even recover the entire signing key in the future. Even in this scenario, it should not be possible for the detective to distinguish a true signature from a forged one. Our one requirement is that the detective cannot see the message before its time period has expired, or gain access to a proof that the message existed at some earlier time period. This limitation appears to be inherent for all schemes aiming to achieve properties similar to time-deniability.

IMPLEMENTATION AND EVALUATION
Implementation. To demonstrate the efficiency of our scheme, we implemented it in Python. For our time-lock puzzle, we modified an existing, open-source implementation of an RSW time-lock puzzle [25]. Our timestamp supports 2^16 different values, which is approximately equivalent to what is supported by [40]. This is reasonable for at least some motivating applications (i.e., email), where keys are rotated frequently for domains and coarse granularity may be acceptable.
Construction of FS. To instantiate our functional signature scheme, we need a HIBE that is both key-indistinguishable and adaptively secure. We consider two different HIBE schemes: one a variant, due to Lewko [26], of the unbounded HIBE of Lewko and Waters [28]; the other a HIBE from Chen et al. [13]. Both schemes are adaptively secure and have tight reductions. We prove they satisfy the key-indistinguishability property of Section 4.1.2 in Appendix G.1. For Lewko's scheme, we tweaked an existing Python implementation [33] and instantiated it with the curve SS512. For the scheme of Chen et al., we modified an existing IBE implementation from the same paper in the Charm library [2] and instantiated it with BN254. Different curves are necessary since Lewko's construction only works with symmetric pairings, whereas Chen et al. use asymmetric ones. We hereafter call these constructions L−SS512 and CLLWW−BN254. The curve SS512 natively offers ≈ 80 bits of security, while BN254 offers ≈ 110 bits.
Setting Parameters. There are two main concerns when implementing time-based cryptographic assumptions: one is capturing the speed-up offered by parallelism, the other is accurately estimating the fastest real-world time to perform the computational task the assumption is based on. On the first point, to the best of our knowledge there are no known improvements from bounded parallelism against the RSW assumption.
For the second, recent results [32,44] suggest that an FPGA implementation can achieve ≈ 2^24 squarings per second and an ASIC ≈ 2^28 squarings per second. For our implementation below, we benchmarked the cost of computing squarings modulo a 2048-bit composite on our machine. This corresponded to roughly 5,883,206 squarings per second, which is roughly a factor of 3 less than the FPGA figure reported above.
Experimental Evaluation. Experiments were run on an Intel Xeon E5 with 500GB of memory, running Ubuntu. Our implementation uses neither multi-threading nor multiprocessing. Estimates were obtained by running each algorithm 500 times and taking the median. For the rest of this section, let a denote the arity of the tree and d its depth. The timestamp value t in our experiments is chosen uniformly at random for each run, as signing time and signature size differ significantly depending on the value of t.
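A benchmark of this kind takes only a few lines of Python (the parameters are ours; a random odd 2048-bit modulus stands in for an RSA modulus, since the cost of squaring does not depend on knowing the factors):

```python
import random, time

def squarings_per_second(modulus_bits=2048, trials=100_000):
    """Estimate sequential modular squarings per second, the unit in
    which RSW time-lock difficulty is calibrated."""
    # random odd modulus with the top bit set (stand-in for an RSA N)
    n = random.getrandbits(modulus_bits) | (1 << (modulus_bits - 1)) | 1
    x = random.getrandbits(modulus_bits) % n
    start = time.perf_counter()
    for _ in range(trials):
        x = x * x % n       # one sequential squaring step
    return trials / (time.perf_counter() - start)
```

Dividing a target wall-clock delay by the attacker's best estimated rate (ASIC, not commodity hardware) gives the number of squarings to use as the difficulty parameter.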
We begin the analysis by examining the effect of varying a. It is an important parameter for our scheme because a and d must satisfy a^d ≥ 2^16 and together determine the efficiency of signing and verifying. To be explicit, signing consists of extracting at most a·(d−1) keys from the HIBE tree, where each key is O(d) group elements long and requires O(d) work to generate. Thus O(a·d^2) work must be done in signing, where d = ⌈log_a(2^16)⌉. This quantity is minimized when a is close to 7, meaning that signing time and signature size are optimal when a = 7, as can be seen in Figure 4 and Appendix H respectively. Although we do not depict it, larger values of a always result in a decrease in verification time, since verification depends only on d. Our microbenchmarks are presented in Table 2 for a = 7. The superiority of the signing algorithm in CLLWW−BN254 over L−SS512 can be attributed to the use of asymmetric rather than symmetric pairings, and to the fact that in L−SS512 each level of the HIBE adds ten group elements to the HIBE key whereas in CLLWW−BN254 it adds only four. Because signing mostly consists of creating these keys, this heavily impacts performance and the size of the signature itself.
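The choice a = 7 can be sanity-checked with a continuous model of the O(a·d^2) signing cost (our own back-of-the-envelope script): minimizing a·(log_a T)^2 in the continuous relaxation gives a = e^2 ≈ 7.39, and among integer arities the minimum lands at 7.

```python
import math

def signing_cost(a, T=2**16):
    """Continuous model of the O(a * d^2) signing work, with the tree
    depth d = log_a(T) left unrounded."""
    d = math.log(T, a)
    return a * d * d

best_arity = min(range(2, 17), key=signing_cost)
assert best_arity == 7   # integer arity closest to the optimum e^2
```

With the ceiling applied to d, nearby arities come close (a = 5 and a = 8 are within a few percent), which matches the flat region around the optimum visible in Figure 4.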

CONCLUSION
In this work we introduced a new notion of deniable signatures that provides strong unforgeability and deniability guarantees without requiring the signer to periodically publish secret key material. We show how to realize our primitive using time-lock puzzles.

A HIBE SECURITY
We consider HIBE security similarly to the work of Lewko and Waters [27,29] using the following security game played by a challenger and an adversary.
-Setup The challenger runs (pp, msk) ← Setup(1^λ) and gives the public parameters pp to the adversary. Let S be the set of private keys that the challenger creates; at the beginning, S = ∅.
-Phase 1 In this phase, the adversary gets to make three types of queries. (1) Create queries QC(I), which are made on some specific identity I. The challenger adds the keys for this identity to the set S. Note that the adversary does not get these keys. (2) Delegate queries QD(I, I′), which are made on some identity I such that the corresponding keys are in the set S, together with a child identity I′. The challenger creates the keys corresponding to the delegated identity I∥I′ and adds them to the set S.
(3) Reveal queries QR(I), which are also made on some identity I such that the corresponding keys are in the set S. In response, the challenger gives the corresponding keys to the adversary and removes them from the set S. -Challenge The adversary gives the challenger messages m_0 and m_1 and a challenge identity I*. The challenger samples a random b ∈ {0,1}, encrypts m_b under I*, and sends the ciphertext to the adversary.
-Phase 2 The adversary gets to query the challenger as in Phase 1. -Guess The adversary outputs its guess b′ for b and wins if the following conditions are satisfied: (1) b′ = b; and (2) the challenge identity I* satisfies the property that no revealed keys, in either of the query phases, belong to an identity that is a parent of I*, and I*'s own keys have not been revealed.

B FUNCTIONAL SIGNATURES
Correctness. Correctness requires that any signature output by the Sign algorithm under a valid functional key and message verifies correctly: for all supported functions f and all messages m, a signature (f(m), σ) ← Sign(f, sk_f, m) satisfies Verify(vk, f(m), σ) = 1.
Security. For completeness, the unforgeability security game Exp^UNF_FS between a challenger and an adversary A for functional signatures is provided below.
Setup. The challenger generates (msk, vk) ← Setup(1^λ). They also initialize an empty table T. The verification key vk is given to the adversary A. Query Phase. In this phase A gets access to a key oracle Ō_Key and a signing oracle Ō_Sign.
(1) Ō_Key(msk, ·, ·) takes as input a function description f and an identifier i. The challenger checks if there is a row in T corresponding to (f, i, ·). If such a row exists, it returns the corresponding secret key sk_f. Otherwise, it generates sk_f ← KeyGen(msk, f), records (f, i, sk_f) in T, and returns sk_f.
(2) Ō_Sign(msk, ·, ·, ·) takes as input a function description f, an identifier i, and a message m. If a row in T corresponds to (f, i, ·), the challenger uses the secret key specified in that row. Otherwise, it generates sk_f ← KeyGen(msk, f) and records (f, i, sk_f) in T. Let (f(m), σ) be the output of Sign(f, sk_f, m). Return (f(m), σ) to A.

C ON THE NECESSITY OF TIME-LOCK PUZZLES
Our construction of time-deniable signatures makes use of time-lock puzzles to achieve short-term unforgeability. We show that the use of such a primitive is, to an extent, inherent: assuming extractable witness encryption [18,21], we show that time-deniable signatures imply time-lock puzzles.
We demonstrate this implication in Appendix I. We remark that while extractable witness encryption is a strong tool, it alone is not known to imply time-lock puzzles.

D PROOFS FOR TIME DENIABLE SIGNATURES
Theorem D.1. The time-deniable signature scheme presented in Figure 3 is unforgeable.
In the discussion that follows, let the output of a hybrid game H be the output of the challenger. We prove the theorem statement using a hybrid argument, where H_0 represents the original unforgeability game. Where details are omitted in the description of hybrid H_i, they are assumed to be the same as in H_{i−1}.
Note that the win condition is checked whenever the challenger "correctly guesses" how many queries will be made by the adversary in the second phase. Let q be the number of queries made by A_1, where 0 ≤ q ≤ poly(λ); this must hold since the adversary cannot make more queries than its size dictates.
Description of D on input from H_{i−1} or H_i: Notice that D's advantage in distinguishing depends on Adv^{H_i}_A(λ) − Adv^{H_{i−1}}_A(λ), and that the depth of D is 2. We will now use D and A to construct an adversary B′.
Description of the TimeLock adversary B′:
• Honestly run the FS.Setup algorithm and answer all queries from A_0 honestly.
• For the j-th query from A_1: send the challenger the pair of candidate solutions (the real key material and the all-zero string), receive the resulting puzzle, and embed it in the j-th response.
• Obtain the output from A and give it to D. Get b′ from D, and output b′ to the challenger.
We now argue that the advantage of any adversary in the final hybrid can be translated into an equivalent advantage against the unforgeability of the underlying functional signature scheme.
Analysis. We now show that if A is successful, then C must be as well. Say A returns a forgery (m*, t*, σ*) with σ* = (σ, Z). In order for A to be admissible, it must be the case that A never received a signature with timestamp t ≥ t* during the first phase, and that during the second phase there was never a query on (m*, t*) specifically. The first point implies that C never queries for a secret key for a function f_t with t ≥ t*, so σ is a valid forgery to give back to the functional signature challenger. The second point means that A is not giving C a signature that C obtained from the FS challenger with some mauled Z′, where Z′ is an incorrectly structured puzzle or one that does not hide the right secret key. Therefore, if A returns a valid forgery, then C returns a valid forgery, and the claim follows.
Theorem D.2. If the underlying delegatable functional signature scheme is key-indistinguishable, then the constructed time-deniable signature scheme satisfies the deniability property.

E PROOFS FOR DELEGATABLE FUNCTIONAL SIGNATURES
Theorem E.1. If HIBE is adaptively secure then the functional signature scheme for prefix functions constructed in Figure 2 is unforgeable.
We prove this by showing how to use an adversary A who succeeds with non-negligible advantage in Exp_FS^UNF to construct an adversary B which succeeds with non-negligible advantage in the HIBE unforgeability game.
Description of B:
• In the setup phase, initialize an empty table T, receive the public parameters from the HIBE challenger, and forward them to A.
• On a key query for a time that already has a row in T, let K be the list of keys associated with that row, and let sk′ be the key in K associated with an identity id′ that is a prefix of the target identity id∥r. Compute sk_{id∥r} ← HIBE.Delegate(pp, sk′, suffix(id′, id∥r)), where suffix omits the prefix id′ from id∥r, and return sk_{id∥r}.
• Otherwise, use the algorithm FS.KeyGen in Figure 2, replacing each call HIBE.KeyGen(msk, id) with a create query QC(id). Finally, query QD(id∥r) and do a subsequent reveal query QR(id∥r) to get sk_{id∥r}. Return sk_{id∥r} to A.

Analysis
In order to be an admissible adversary, A must return a signature σ* and a message m* that verify while A holds no functional key that has m* in its range. The keys that have m* in their range are those of the form sk_{f_t} with t ≥ t*. In other words, these functional keys are the ones that contain HIBE secret keys for prefixes of the challenge identity, and by the design of the construction no other functional key contains such prefix HIBE keys. Therefore, if A is admissible, id* = t*∥m* will be a valid identity to challenge on.
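To make the prefix structure of these functional keys concrete, the following sketch (our own illustration, not code from the paper; the function name and tree parameters are hypothetical) computes the minimal set of binary-tree prefixes whose subtrees exactly cover the timestamps {0, ..., t}. Under this reading, a functional key for time t consists of one HIBE key per prefix in the cover, and no key in the cover is a prefix of any identity with timestamp greater than t:

```python
def cover_upto(t: int, n: int) -> list[str]:
    """Minimal set of n-bit-string prefixes whose subtrees cover {0, ..., t}.

    Each prefix stands in for one HIBE secret key; together the keys let the
    holder derive identities for every timestamp <= t and for no others.
    """
    out = []

    def rec(prefix: str) -> None:
        depth = len(prefix)
        base = (int(prefix, 2) << (n - depth)) if prefix else 0
        top = base + (1 << (n - depth)) - 1
        if top <= t:            # whole subtree lies inside the range: keep its root
            out.append(prefix)
        elif base <= t:         # subtree straddles the boundary: recurse
            rec(prefix + "0")
            rec(prefix + "1")

    rec("")
    return out

# Example: with 3-bit timestamps, {0,...,5} is covered by "0" (0-3) and "10" (4-5).
```

The cover has at most n elements, so key size stays logarithmic in the number of timestamps.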
If A is successful, then σ* passed verification, meaning that for a random message it acted as a secret key for the identity t*∥m*. This implies with high confidence that it is in fact the secret key for this identity. Decrypting the challenge ciphertext with the secret key for identity t*∥m* will succeed with overwhelming probability, and therefore most of the time when A succeeds, B succeeds as well.
In any other circumstance, when A is either not admissible or does not return a valid forgery, B detects this and responds with a uniform bit. Thus the theorem statement follows.
Theorem E.2. If HIBE is key-indistinguishable, then the scheme in Figure 2 satisfies the functional signature key-indistinguishability property.
We prove this by showing how to use an adversary A who wins the delegatable functional signature key-indistinguishability game Exp_FS^IND to construct an adversary B which wins the HIBE key-indistinguishability game Exp_HIBE^IND.
• B receives the keys (mpk, msk) from its challenger and forwards them as-is to A.
• After this, in the query phase, when A makes a Q_Key(·) query for time t, adversary B computes path ← Trace(t, ℓ), where ℓ is the position of t in the HIBE hierarchy tree. This gives B the list of nodes on which it queries the key oracle QK(·). Each QK(·) query response has an identifier id along with the key for a node. B maintains a table with entries of the form (t, id′, key), where id′ represents the counter value corresponding to this particular query from A. B sends the resulting key set to A. This is the response A expects, as B simulates the FS.KeyGen(msk, t) algorithm with its queries to the HIBE key oracle.
• When A makes an FS challenge oracle O_Ch(·, ·, ·) query with a tuple of the form (id, t_0, t_1), B performs the following operations:
- Check that t_0 > t_1 and that there is a row starting with (id, t_0) in its table; otherwise return ⊥. This guarantees that on the shortest path from the leaf node t_1 to the root in the HIBE hierarchy tree (Figure 1), there exists an element whose corresponding HIBE key is present in the set K_0 representing the FS key for t_0.
- Compute (p, ℓ) ← findPrefix(t_0, t_1) and path′ ← Trace(t_0^ℓ, t_1), which is the trace of leaf node t_1 in a tree whose root is t_0^ℓ, the first ℓ bits of t_0. Rerandomize the key by computing sk′ ← HIBE.Delegate(pp, sk, ε); similarly rerandomize all the keys in the set K_0 up to the ℓ-th position.
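On bit-string identities, the Trace and findPrefix helpers used above reduce to the root-to-leaf prefix path and the longest common prefix. The sketch below is our own hedged interpretation of those helpers; the paper's exact signatures may differ:

```python
def trace(leaf: str) -> list[str]:
    """Path of identities from the root down to `leaf`: every prefix of its bits."""
    return [leaf[:i] for i in range(len(leaf) + 1)]

def find_prefix(t0: str, t1: str) -> tuple[str, int]:
    """Longest common prefix of two identities, together with its length."""
    i = 0
    while i < min(len(t0), len(t1)) and t0[i] == t1[i]:
        i += 1
    return t0[:i], i
```

A reduction that holds a key for some node on trace(t_1) can delegate down to t_1; find_prefix locates the deepest such shared node between the two challenge times.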

F PROOF OF K-HOP DENIABILITY
Theorem F.1. Any time-deniable signature scheme satisfying the deniability property as defined in definition 5.4 also satisfies the k-hop deniability property as defined in definition 5.5.
We prove this by a hybrid argument, starting with the security game in which the challenger chooses b = 1 and ending with one in which they choose b = 0. Because the advantage of A changes only negligibly between hybrids, the difference between the output of the adversary when b = 0 and when b = 1 is negligible, which is equivalent (within a factor of 2) to the adversarial advantage being negligible. Where details are omitted in the description of hybrid H_i, they are assumed to be the same as in H_{i-1}.
Let H_0 be the k-hop security game with b = 1, and consider the sequence of hybrids in which σ ← Sign(sk, m, t).
In the discussion that follows, let Adv_A^{H_i}(λ) be the advantage of A in hybrid H_i. Analysis. We now analyze B's success probability. In the discussion that follows, let b be the bit chosen by the challenger.

G LEWKO'S PRIME-ORDER HIBE SCHEME
This is a description of Lewko's [26] prime-order translation of an unbounded HIBE scheme. The scheme performs some operations over vectors of n-dimensional space; as in Lewko's work, we describe the scheme for n = 10.

Time-Lock Encryption from Time-Deniable Signatures.
To construct a time-lock encryption scheme from deniable signatures and extractable witness encryption, consider (pk, sk) ← DS.KeyGen(1^λ, t(·)) and σ ← DS.Sign(sk, m, τ). Witness encryption schemes allow encrypting a message to an instance x of an NP language L, and allow decryption using any valid witness w such that (x, w) ∈ R_L. For a detailed discussion of witness encryption and extractable security, we refer readers to the work of Goldwasser et al. [21]. Now consider a witness encryption scheme which encrypts to statements of the form x = (pk, m, τ, t) for a relation R where, for witnesses of the form w = (σ*, m*, τ*), we have (x, w) ∈ R if DS.Verify(pk, σ*, m*, τ*) = 1. We also provide the intuition behind why this scheme is secure. The decryption algorithm proceeds as follows. TL.Decrypt(ct, σ): since it takes time t to create σ* from σ, after time t a valid witness is available to run the WE decryption algorithm with witness (σ*, m*, τ*). Output m′ ← WE.Decrypt(ct, w).
The intuition behind the security argument is that no admissible adversary should be able to distinguish an encryption of 0 from an encryption of 1, as such an adversary is depth-bounded. Otherwise, the adversary computes a witness w ∈ R, i.e., a different signature σ* on some message-timestamp pair (m*, τ*), by performing significantly fewer operations than the number required; that is, it solves the time-lock puzzle in sequential time less than t. Given such a distinguishing adversary, we can leverage the extractor for witness encryption to break the unforgeability property of deniable signatures.
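The wiring of this construction can be sketched as follows. Everything here is a toy mock of our own: `we_encrypt`/`we_decrypt` only model the witness-encryption interface (the payload is merely gated behind the relation check, not cryptographically hidden), `ds_sign`/`ds_verify` stand in for the DS algorithms using a keyed hash, and the statement is collapsed to (pk, t) for brevity:

```python
import hashlib
from dataclasses import dataclass

def ds_sign(sk: bytes, m: bytes, tau: int) -> bytes:
    # Mock "deniable signature": a keyed hash. A real DS scheme goes here;
    # note this mock is symmetric, so sk doubles as the verification key.
    return hashlib.sha256(sk + m + tau.to_bytes(8, "big")).digest()

def ds_verify(pk: bytes, sig: bytes, m: bytes, tau: int) -> bool:
    return sig == ds_sign(pk, m, tau)

@dataclass
class WECiphertext:
    statement: tuple  # x = (pk, t): any valid (sig*, m*, tau*) unlocks it
    payload: bytes    # NOTE: a real WE scheme hides this; the mock only gates it

def we_encrypt(statement, message: bytes) -> WECiphertext:
    return WECiphertext(statement, message)

def we_decrypt(ct: WECiphertext, witness, relation):
    # Decryption succeeds only with a valid witness for ct.statement.
    return ct.payload if relation(ct.statement, witness) else None

def tds_relation(statement, witness) -> bool:
    pk, _t = statement
    sig_star, m_star, tau_star = witness
    return ds_verify(pk, sig_star, m_star, tau_star)

def tl_decrypt(ct: WECiphertext, sig: bytes, m: bytes, tau: int):
    # After time t, anyone holding a signature can assemble the witness.
    return we_decrypt(ct, (sig, m, tau), tds_relation)
```

In the real scheme the only way to obtain the witness without the signing key is to forge via the depth-t AltSign path, which is what enforces the time lock.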

J ON THE NECESSITY OF SECURE TIMESTAMPS
Recall that in our definition, the AltSign algorithm takes as input a previously computed valid signature (or forgery). In particular, our notion does not rely upon the use of cryptographic timestamps. An alternative notion discussed in Section 2 is one where AltSign does not require a previously computed signature as input; instead it only uses a timestamp issued by an external server to create a forgery. We argue that in the latter case, the timestamps issued by the server must be cryptographic (and in particular, unpredictable or unforgeable, depending on the implementation).
Suppose this is not the case. Then we can devise a simple attack using the AltSign algorithm to break the unforgeability of the signature scheme. Consider a (non-uniform) adversary A = (A_0, A_1) that wants to generate a forged signature for any message m* and any timestamp t*. Since we allow for arbitrary polynomial-time pre-processing in the unforgeability game, A_0 runs AltSign on input m* and T(t*) to compute a forged signature, where T(·) computes the output of the time server for time t* (this also captures the scenario in which the timestamp is entirely ignored by Sign/AltSign). Since there is no security property associated with the timestamps issued by the server, T(·) is a function that can be computed efficiently, so A_0 runs in polynomial time.
Let σ* be the forged signature computed by A_0, who passes it along to A_1 to output as its forgery. Since the above strategy works for any (m*, t*), and A_1 needs only a single computational step (to output the forged signature received from A_0), this attack constitutes a valid forgery against the time-deniable signature scheme.
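The attack can be phrased in a few lines. The helper names below (`time_server`, `alt_sign`) are hypothetical stand-ins of our own: the point is only that when the server's output is an efficiently computable function of the time, the pre-processing stage can run AltSign by itself.

```python
def time_server(t: int) -> bytes:
    # An insecure time server: its output is a deterministic, public function
    # of t, so anyone can compute it without contacting the server.
    return f"timestamp:{t}".encode()

def adversary_a0(alt_sign, m_star: bytes, t_star: int):
    # Pre-processing phase: compute the forgery offline using AltSign and the
    # predicted timestamp. No signing oracle and no prior signature is needed.
    return alt_sign(m_star, time_server(t_star))

def adversary_a1(forged_sig):
    # Online phase: a single step, just output the precomputed forgery.
    return forged_sig
```

If the server's timestamps were unpredictable or unforgeable, A_0 could not evaluate `time_server` ahead of time and the pre-computation would fail.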

K EPOCHAL SIGNATURES
We recall the unforgeability game and definitions from the work of Hülsing and Weber [24]. An epochal signature scheme ES has the following four algorithms: ES.KeyGen(1^λ, Δ, ·, ·), ES.Evolve, ES.Sign, and ES.Verify. The advantage of the adversary A is defined as Adv_A(1^λ, Δ) = Pr[Exp_ES^UNF(1^λ, Δ, ·, ·) = 1]. Remark: to the best of our knowledge, the authors do not define the function now(). We assume that it returns the current time value, which implies the existence of a wall clock.

K.1 Faulty Epochal Signature Construction
Given a secure epochal signature construction Σ, we use it to construct another secure epochal signature scheme Σ′ which has the undesirable properties discussed in Section 3. However, due to certain properties implicit in the definition of an epochal signature scheme, the scheme we present as the counterexample is fairly intricate. We begin with an informal description of the counterexample, which suffices for a relaxed notion of epochal signatures. We build upon this to present our final scheme Σ′, and we formally argue that Σ′ (i) is a secure epochal signature scheme; and (ii) is not a time-deniable signature scheme.
We first consider a counterexample that satisfies a weaker but still reasonable notion of deniability, in which the judge never gets access to any secret key material. In this setting, our counterexample is fairly simple: each epoch e has a special trigger message m*_e associated with it. If a signature ever needs to be made on m*_e in epoch e, then the signature contains the master secret key of the scheme. Included with every signature is a time-lock puzzle that holds the special trigger message m*_e, where the difficulty parameter of the puzzle is slightly more than Δ. It is straightforward to see that this scheme is still secure under the ES unforgeability game: the epoch will always have expired by the time the adversary could attempt to use m*_e after solving the time-lock puzzle. However, in the unforgeability game of time-deniable signatures, such a scheme is trivially defeated by an adversary in the pre-processing phase, since the time-lock puzzle can be solved during that phase.
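For intuition, such a time-lock puzzle can be instantiated in the style of Rivest, Shamir, and Wagner: the creator hides a value behind T sequential modular squarings, but can take a shortcut using the factorization of the modulus. This toy sketch (tiny parameters, our own illustration, not the paper's instantiation) shows why the solver needs sequential time while the creator does not:

```python
def create_puzzle(secret: int, a: int, p: int, q: int, T: int):
    """Hide `secret` (an integer < p*q) behind T sequential squarings mod n = p*q.

    Knowing phi(n), the creator reduces the exponent 2^T mod phi(n) and
    computes the mask in O(log n) multiplications instead of T sequential ones.
    """
    n = p * q
    phi = (p - 1) * (q - 1)
    mask = pow(a, pow(2, T, phi), n)   # a^(2^T) mod n, via the creator's shortcut
    return (n, a, T, (secret + mask) % n)

def solve_puzzle(puzzle):
    """Without the factorization, recovering the mask takes T sequential squarings."""
    n, a, T, c = puzzle
    mask = a
    for _ in range(T):                 # inherently sequential work
        mask = mask * mask % n
    return (c - mask) % n
```

In the counterexample, T would be calibrated so that solving takes slightly more than Δ of wall-clock time, which is exactly why the trigger is useless within its own epoch.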
The main problem with this counterexample is that the construction does not satisfy perfect deniability. Perfect deniability requires that one can simulate a single signature perfectly without revealing whether or not the signature was a forgery. Specifically, in our counterexample we must ensure that a single reply can never give away the master secret key msk. To accomplish this, we encrypt msk under a key that is (2,2)-secret-shared. In order to recover the key, the adversary must make two queries, which is explicitly disallowed in the deniability definition of epochal signatures. This ensures that only one share of the key is ever recovered, and thus we can simulate this share correctly without knowledge of msk. The final construction is presented in Figure 6. We would like to emphasize that this counterexample is meant to show weaknesses in the unforgeability definition, and that almost all of the complexity is added because of the deniability definition.
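The (2,2) secret sharing can be the standard XOR scheme sketched below (a generic construction of our own; the paper's Figure 6 may differ in details). Either share alone is a uniformly random string, which is what makes a single simulated reply possible without knowledge of msk:

```python
import os

def share_2_of_2(secret: bytes) -> tuple[bytes, bytes]:
    """Split `secret` into two XOR shares; each share alone is uniformly random."""
    s1 = os.urandom(len(secret))
    s2 = bytes(a ^ b for a, b in zip(secret, s1))
    return s1, s2

def reconstruct(s1: bytes, s2: bytes) -> bytes:
    """XOR the two shares back together to recover the secret."""
    return bytes(a ^ b for a, b in zip(s1, s2))
```

A simulator that must answer only a single query can hand out a fresh uniform string in place of a real share; the two distributions are identical.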
Theorem K.2. The epochal signature construction in Figure 6 is unforgeable under the security game of definition K.1. Proof. We prove this by a standard hybrid argument starting with the real unforgeability game H_0. For any hybrid H_i, the output of H_i is considered to be the challenger's output bit, i.e., an indicator of the adversary's success. This is related but not necessarily equal to Adv_A^{H_i}. Therefore, the statement H_i ≈ H_{i+1} means that the difference in the probability of A succeeding between the two hybrid games is negligible. For security, we want Adv^{H_0}(λ) ≤ negl(λ), where negl(λ) is a negligible function in the security parameter λ. Where details are omitted from the description of hybrid H_i, it is assumed that they are the same as in H_{i-1}.
H_1: Guess the challenge epoch e* of A by uniformly choosing an epoch in {1, . . . , E}, where E is the maximum number of epochs. If the guess is incorrect, the challenger's output is 0. Analysis: Let r_0, r_1, r_2, r_3 be the ephemeral randomness used in an epoch e. Consider the following hybrid. Suppose the claim is not true and there exists a distinguisher D with non-negligible advantage that distinguishes between A's success probability in H_{2,j-1,1} and H_{2,j,0}, outputting 0 when it thinks A's success is from H_{2,j-1,1} and 1 otherwise. Consider the following TimeLock adversary B which uses A and D to break the security of the TimeLock.
• For the j-th query in epoch e, sample r uniformly from R subject to the constraint that r is not equal to r_0. Send the TimeLock challenger (r_0, r) and receive Z. Construct c, c′, d, d′ as normal and send A the signature (Z, c, c′, d, d′).
• Let b′ be the output of A. Send b = b′ to D. If D outputs b̂, respond with b̂.
Analysis: The probability that B wins is equivalent to the probability of D distinguishing correctly. Since by assumption D has non-negligible advantage, so does B, in contradiction to the security of the TimeLock. Proof sketch. The argument here is equivalent to the previous one, except that instead of B challenging on (r_0, r), the challenge is (r_1, r). □ In order to be a successful adversary, A must run in polynomial time, which means that the number of queries that A can make in any particular epoch is also bounded by some polynomial. Therefore, for every epoch e the sequence (H_{2,e,0,0}, H_{2,e,0,1}, . . .) must also be bounded in length by poly(λ). Since e* ≤ E ∈ poly(λ) is also polynomially bounded in the security parameter, and the last hybrid in sequence e − 1 is actually the first hybrid in sequence e, there is a negligible difference between H_{2,1,0,0} and the last hybrid in the sequence beginning with H_{2,e*,0,1}. The probability that A asks for either appropriate trigger in any of the epochs e = 1, . . . , e* is negligible. Recall that in H_3 we know that A will not ask for a query on the trigger message m*_e during the challenge epoch e* or any earlier epoch. We can now argue security by reducing to the security of the underlying epochal signature scheme. Let B be an adversary for the ES unforgeability game, constructed from A in H_3 as described below.
Description of ES adversary B:
• Receive pk from the challenger and pass it on to A.
• For any evolve query asked by A before or during epoch e*, make the same query to the challenger and receive pk_e. Re-sample new randomness r_0, r_1, r_2, r_3 and send the result to A.
• For any sign query for an epoch e ≤ e*, make the same query to the challenger and receive σ. Forward σ to A along with correctly structured values c, c′, d, d′.
• When A gives forgery (m*, σ*), parse out the first component σ̄ and give the forgery (m*, σ̄) to the challenger.
First, we argue that A is an admissible adversary in the time-deniable signature unforgeability game. Let A_0 denote the interactions of A with B before epoch e* begins. Because size(B) ∈ poly(λ) for an ES scheme, and since A is also doing poly(λ) work while interacting with B, we have that size(A_0) ∈ poly(λ). If we let the output of A and B after this interaction be an advice string, we can then split off the rest of A and B's interaction as A_1. For B to be a successful adversary, it must be able to produce (m*, σ*) before wall-clock time Δ has passed since the start of epoch e*. This gives an upper bound on the circuit depth of B from the start of e* until termination. Since A just forwards queries between the challenger and B, the overhead it adds is minimal (on the order of the number of queries made by B) and can be ignored for the sake of this proof sketch; depth(A_1) is therefore bounded appropriately as well. We now argue that if B is successful in its forgery, then so is A. As stated earlier, in order for B to succeed it must produce a valid pair (m*, σ*) before the wall-clock time bound, where validity means that DS.Verify succeeds given that the current timestamp is t* and that B never asked for a signature on m* at time t*. The tuple (m*, t*, σ*) is thus a valid forgery for A as well.
Theorem K.6. The ES scheme presented in Figure 7 is deniable if the time-deniable signature scheme DS is deniable.
Suppose this is not true and there exists a judge J that succeeds with non-negligible advantage in the ES deniability game. Then we will construct an adversary A that succeeds with non-negligible advantage in the DS deniability game.
Description of A: • Receive the keys (pk, sk) from the challenger. Forward them to J.
• Uniformly sample a random message m. When J specifies its challenge (m*, e_0, e_1), query O_Sign with (m, e_0 + e_1) and receive (id, σ).
• Query O_Ch with (id, m*, e_0) to get σ*. Send σ* to J. If J responds with b̂, send b̂ to the challenger.
J expects to see one of two signatures: one creates the signature by evolving the key e_0 times, while the other evolves the key e_0 + e_1 times and uses the key for epoch e_0 + e_1. When the challenger's bit b = 0, A produces the output of ES.Sign in Figure 7, which is simply DS.Sign. When b = 1, the output is produced by the simulator S in the ES deniability game, which is equivalent to DS.AltSign in our construction. Therefore, the distributions J sees are correct, and if J is successful in distinguishing, then so is A.