The PETS 2020 talk recordings are on our YouTube channel.
All times on this page are UTC-3 (São Paulo, Buenos Aires)
Other timezones can be found here: UTC, UTC+1, UTC+2, UTC+3, UTC+4, UTC+5, UTC+6, UTC+7, UTC+8, UTC+9, UTC+10, UTC+11, UTC+12, UTC-1, UTC-2, UTC-3, UTC-4, UTC-5, UTC-6, UTC-7, UTC-8, UTC-9, UTC-10, UTC-11, UTC-12
Monday, July 13
Opening remarks 10:40–10:50 [video]
Session 1 10:50–12:30 (Track B goes to 12:55)
Track A: Anonymous communication
Track B: Differential privacy
Track C: Privacy-preserving machine learning
Break: 12:30 (or 12:55) to 13:30
Session 2 13:30–15:10
Track A: Deanonymization
Track B: Differential privacy applications
Track C: Mobile
Tuesday, July 14
Keynote 11:00–12:30 Michael Kearns [video]
Title: The Ethical Algorithm
Abstract: Many recent mainstream media articles and popular books have raised alarms over anti-social algorithmic behavior, especially regarding machine learning and artificial intelligence. The concerns include leaks of sensitive personal data by predictive models, algorithmic discrimination as a side-effect of machine learning, and inscrutable decisions made by complex models. While standard and legitimate responses to these phenomena include calls for stronger and better laws and regulations, researchers in machine learning, statistics and related areas are also working on designing better-behaved algorithms. An explosion of recent research in areas such as differential privacy, algorithmic fairness and algorithmic game theory is forging a new science of socially aware algorithm design. I will survey these developments and attempt to place them in a broader societal context. This talk is based on the book The Ethical Algorithm, co-authored with Aaron Roth (Oxford University Press).
Bio: Michael Kearns is a professor in the Computer and Information Science department at the University of Pennsylvania, where he holds the National Center Chair and has joint appointments in the Wharton School. He is the founder of Penn’s Networked and Social Systems Engineering (NETS) program and director of Penn’s Warren Center for Network and Data Sciences. His research interests include topics in machine learning, algorithmic game theory, social networks, and computational finance. He has worked and consulted extensively in the technology and finance industries, including a current role as an Amazon Scholar. He is a fellow of the American Academy of Arts and Sciences, the Association for Computing Machinery, and the Association for the Advancement of Artificial Intelligence.
Break: 12:30 to 13:30
Session 3 13:30–15:10
Track A: Cryptography
Track B: Privacy attacks
Track C: Tracking
Wednesday, July 15
Session 4 10:30–11:45
Track A: Differential privacy and secure multi-party computation
Track B: Smart devices
Track C: Systems
Town hall 11:45–12:45
Break: 12:45 to 13:45
Session 5 13:45–15:25
Track A: Secure computation
Track B: Tor
Track C: Social networks
Thursday, July 16
Session 6 10:30–12:10
Track A: Payments
Track B: Users
Track C: Web privacy
Award session 12:10–12:40 [video]
- The Caspar Bowden Award for Outstanding Research in Privacy Enhancing Technologies
- The Andreas Pfitzmann Best Student Paper Award
- Best Reviewer Award
Break: 12:40 to 13:40
Session 7 13:40–14:55
Track A: Censorship
Track B: Usability
Track C: Data protection
Closing remarks 15:05–15:10 [video]
Rump session 15:10–16:10
Register yourself for a talk
Friday, July 17 — HotPETs
Opening 10:30 [video]
Session 1: My Tool is Cool 10:35–11:20
- ML Privacy Meter: Aiding regulatory compliance by quantifying the privacy risks of ML [video]
Sasi Kumar Murakonda and Reza Shokri
For safe and secure use of machine learning models, it is important to have a quantitative assessment of their privacy risks and to make sure that they do not reveal sensitive information about their training data. Article 35 of GDPR requires all organizations to conduct a DPIA (Data Protection Impact Assessment) to systematically analyze, identify, and minimize the data protection risks of a project that uses innovative technologies such as machine learning [1, 2]. In this talk, we will present our tool ML Privacy Meter, which builds on well-established algorithms for measuring the privacy risks of machine learning models through membership inference attacks [3, 4]. We will also discuss how our tool can help practitioners in a DPIA by providing a quantitative assessment of the privacy risk that learning from sensitive data poses to members of the dataset. The tool is public and is available at: https://github.com/privacytrustlab/ml_privacy_meter
We will specifically present the scenarios in which our tool can help and how it can aid practitioners in risk reduction. ML Privacy Meter implements membership inference attacks, and the privacy risk of a model can be evaluated as the accuracy of such attacks against its training data. As the tool can immediately measure the privacy risks for training data, practitioners can take simple actions, such as fine-tuning their regularization techniques or sub-sampling and re-sampling their data, to reduce the risk. The tool can also help in the selection of privacy parameters (epsilon) for differential privacy by quantifying the risk posed at each value of epsilon. We will also discuss some requirements of a DPIA (for example, estimating whether the processing could contribute to loss of control over the use of personal data, loss of confidentiality, or reputational damage) and how our tool can be useful for such assessments. The tool can be used to estimate the aggregate privacy risk, to members of the training dataset, of making a machine learning model public or providing query access to it, as illustrated in the sketch below.
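The talk itself will demo the tool; purely as a self-contained illustration of the underlying idea (this is not the ML Privacy Meter API), the sketch below scores a model's privacy risk as the accuracy of a simple confidence-threshold membership inference attack. The dataset, model, and threshold sweep are arbitrary choices for the example.

```python
# Illustrative sketch only -- not the ML Privacy Meter API.  Privacy risk is
# estimated as the best accuracy of a confidence-threshold membership
# inference attack distinguishing training points from held-out points.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
# "Members" are training points; "non-members" are held-out points.
X_mem, X_non, y_mem, y_non = train_test_split(X, y, test_size=0.5, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_mem, y_mem)

def true_label_confidence(model, X, y):
    # Confidence the model assigns to each example's true label.
    return model.predict_proba(X)[np.arange(len(y)), y]

scores = np.concatenate([true_label_confidence(model, X_mem, y_mem),
                         true_label_confidence(model, X_non, y_non)])
labels = np.concatenate([np.ones(len(X_mem)), np.zeros(len(X_non))])

# Attack: guess "member" when confidence exceeds a threshold; the best
# threshold's accuracy is a crude quantitative risk score (0.5 = no leakage).
best_acc = max(((scores >= t) == labels).mean() for t in np.unique(scores))
print(f"membership inference attack accuracy: {best_acc:.3f}")
```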
- I see a cookie banner – is it even legal? [video]
Nataliia Bielova and Christiana Santos
To comply with the General Data Protection Regulation (GDPR) and the ePrivacy Directive (ePD), website publishers can collect personal data only after they have obtained a user's valid consent. A common method to obtain consent is a cookie banner that pops up when a user visits a website for the first time. EU websites often rely on the IAB Europe Transparency and Consent Framework (TCF), the standardized framework for collecting consent. IAB Europe is the advertising industry’s primary lobbying organization, and many popular EU websites use the IAB TCF, for example the popular news website https://reuters.com and the top cooking website in France, https://www.marmiton.org/. The critical problem is that this framework's consent standard is illegal and widely promotes non-compliant ways to collect consent.
Refresher break 11:20–11:30
HotPETs Keynote 11:30 Karen Levy [video]
Title: Privacy Threats in Intimate Relationships
Abstract: This talk provides an overview of intimate threats: a class of privacy threats that can arise within our families, romantic partnerships, close friendships, and caregiving relationships. Many common assumptions about privacy are upended in the context of these relationships, and many otherwise effective protective measures fail when applied to intimate threats. Those closest to us know the answers to our secret questions, have access to our devices, and can exercise coercive power over us. I survey a range of intimate relationships and describe their common features. Based on these features, I explore implications for both technical privacy design and policy, and offer design recommendations for ameliorating intimate privacy risks. [Joint work with Bruce Schneier]
Bio: Karen Levy is an assistant professor in the Department of Information Science at Cornell University and an associate member of the faculty of Cornell Law School. She researches how law and technology interact to regulate social life, with particular focus on social and organizational aspects of surveillance. Much of Dr. Levy's research analyzes the uses of monitoring for social control in various contexts, from long-haul trucking to intimate relationships. She is also interested in how data collection uniquely impacts, and is contested by, marginalized populations.
Dr. Levy is also a fellow at the Data and Society Research Institute in New York City. She holds a Ph.D. in Sociology from Princeton University and a J.D. from Indiana University Maurer School of Law. Dr. Levy previously served as a law clerk in the United States Federal Courts.
Long break until 13:30
Session 2: There Must Have Been a Mix-Up 13:30–14:15
- Simulation for mixnets [video]
Iness Ben Guirat, Devashish Gosain, Claudia Diaz
How can we design an anonymous communication network (ACN) architecture, such as a mixnet, that satisfies a given set of requirements and threat models? Although many security protocols can be shown to be secure via cryptographic proofs, privacy is a complex subject that cannot easily be reduced to such proofs, as privacy has to deal with context and the flow of information. Previous work on designs such as onion routing and mixnets has produced a large variety of systems, but comparison between these systems is difficult and, due to their complexity and real-world requirements, often cannot be done with purely analytic or theoretical methods. With our general-purpose simulator for mixnets, we can evaluate the entropy for a wide variety of parameters across mixnet designs. Theoretical results have established an anonymity trilemma between strong anonymity, low bandwidth overhead, and low latency, but these questions have never been approached systematically from a practical perspective for mixnets; the simulator allows them to be explored and comprehensively evaluated over large classes of designs. This provides fundamental insights into the design space and trade-offs of mix networking that cannot be obtained without large-scale simulation. Simulation is useful not only for understanding the anonymity and security provided by mixnets, but also for dealing with real-world engineering concerns such as latency, capacity, scalability, and performance.
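As a toy illustration of the kind of measurement such a simulator produces (this is not the authors' simulator, and all parameters are invented), the sketch below simulates a single continuous-time mix with exponential delays and reports the Shannon entropy of the adversary's posterior over which input produced a chosen output. Increasing the mean delay buys entropy at the cost of latency, the trilemma in miniature.

```python
# Toy mix simulation (not the authors' tool): messages arrive as a Poisson
# process and each is delayed by an independent exponential.  For one observed
# output we compute the adversary's posterior over candidate senders and its
# Shannon entropy, the standard anonymity metric for mixnets.
import numpy as np

rng = np.random.default_rng(0)

def sender_entropy(mean_delay, arrival_rate=1.0, n_messages=1000):
    arrivals = np.cumsum(rng.exponential(1.0 / arrival_rate, n_messages))
    departures = arrivals + rng.exponential(mean_delay, n_messages)
    t_out = departures[n_messages // 2]      # the output we attack
    candidates = arrivals[arrivals < t_out]  # inputs that are plausible senders
    # Posterior over candidates is proportional to the delay density.
    mu = 1.0 / mean_delay
    posterior = mu * np.exp(-mu * (t_out - candidates))
    posterior /= posterior.sum()
    p = posterior[posterior > 0]             # drop underflowed terms
    return float(-(p * np.log2(p)).sum())

for d in (0.1, 1.0, 10.0):
    print(f"mean delay {d:5.1f}: sender entropy ~ {sender_entropy(d):.2f} bits")
```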
- CLAPS: Client-Location-Aware Path Selection in Tor [video]
Florentin Rochet, Ryan Wails, Aaron Johnson, Prateek Mittal, Olivier Pereira
Location-aware path selection has been explored as a promising way to complicate the surveillance of Tor users by powerful actors. We propose the CLAPS framework as a novel way to design path selection algorithms that focus on satisfying security constraints while also optimizing relay usage and keeping the network balanced. We apply our framework to several recent proposals (Counter-Raptor and DeNASA) and demonstrate that the CLAPS strategy leads to substantial improvements, in both network performance and security, for natural relay configurations derived from recent states of the Tor network.
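As a rough sketch of what "satisfying security constraints while keeping the network balanced" can look like as an optimization problem (a single-client-location simplification, not the paper's actual formulation), the example below picks guard-selection weights by linear programming: minimize an expected security cost subject to a cap tying each relay's load to its bandwidth share. All numbers are invented.

```python
# Simplified single-location sketch of location-aware path selection as an LP
# (not the CLAPS implementation).  Minimize expected security cost subject to
# weights summing to 1 and a per-relay load cap of alpha * bandwidth share.
import numpy as np
from scipy.optimize import linprog

bandwidth = np.array([100.0, 80.0, 50.0, 20.0, 10.0])  # relay capacities (invented)
cost = np.array([0.9, 0.1, 0.3, 0.05, 0.6])            # per-relay security cost (invented)

bw_share = bandwidth / bandwidth.sum()
alpha = 2.0  # allow each relay up to 2x its bandwidth-proportional load

res = linprog(
    c=cost,                                    # minimize cost . w
    A_eq=np.ones((1, len(cost))), b_eq=[1.0],  # weights form a distribution
    bounds=list(zip(np.zeros(len(cost)), alpha * bw_share)),
)
print("selection weights:", np.round(res.x, 3))
print("expected security cost:", round(res.fun, 3))
```

Vanilla Tor weights purely by bandwidth; the load cap keeps the solution from piling all traffic onto the few "safest" relays, which is the balancing concern the abstract raises.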
Refresher break 14:15–14:25
Session 3: Whodunnit 14:25–15:10
- The current state of denial [video]
Sofía Celi and Iraklis Symeonidis
What is deniability? Although it might sound trivial, this question has sparked a series of debates in the privacy community, ranging from legal to technical perspectives. In the context of private communications, this question is notoriously difficult to approach and analyze. To answer it, one needs to look at the broader picture in which deniability applies. In this paper, we aim to provide a notion of deniability by making more explicit the definitions given in the work of Canetti et al. [1], Unger [2], and Walfish [3]. We provide this definition by studying the system model in which deniability in private messaging occurs. We do this by looking at the key features and types of deniability in peer-to-peer communications, by introducing the notion of an “accuser” as the main adversary, and by considering judges as oracles. Thus, we create an outline of a general model for defining deniability.
Our paper also aims to emphasize the open questions on deniability: for example, whether the current model can be generalized to group messaging, whether metadata can be deniable, and whether both coerced participants can break deniability. Additionally, we will analyze how to examine current private messaging applications. There is limited research that examines deployed private messaging protocols; we will investigate whether existing private messaging applications preserve the definitions of deniability, and how. This paper thus aims to provide the main highlights and directions for these focus points as an introduction to the study currently in progress. Future research will aim at answering the open questions and will examine how private messaging protocols approach deniability.
- Probably private protocols [video]
Ryan Henry
Cryptographers are a curious bunch. In one breath, we insist upon ultra-rigorous threat models and elaborate, if cryptic, formal definitions precision-crafted to thwart implausibly strong attackers hell-bent on expending astronomical resources for the sole purpose of plundering… Alice’s collection of cat memes. In the next breath, we conjure up some bodacious new computational problems and mathemagically “prove” that our constructions are secure in a brave new world (to borrow a phrase from Koblitz and Menezes) where these fledgling problems are presumed—sans compelling evidence—to be wholly intractable.
This talk forges ahead with this peculiar cryptographic tradition by advocating unironically on behalf of the unimpeachably bodacious and indubitably precarious assumption of non-collusion. Specifically, the talk will outline my vision for what I dub the HotPETs probably private protocols paradigm (HP6)†, a moniker I settled on after mistyping “provably” as “probably” for the umpteen-and-a-half’th time‡. Essentially, HP6 protocols vow exceptionally strong privacy if—and only if—not a single member of some (user-selected) ad hoc cohort happens to be in cahoots with any other. In particular, the HP6 design philosophy pursues extreme performance at the expense of making strong assumptions and leaving risky bets unhedged.
† The extra P is for privacy (à la DP5).
‡ After all, the words “provably” and “probably” are virtually synonymic in the brave new world of bodacious assumptions.
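To make the non-collusion bet concrete, here is a minimal sketch of one classic building block in this spirit, two-server XOR-based PIR (my illustration, in the vein of DP5's components, not code from the talk): each server individually sees a uniformly random query vector, so Alice's choice of cat meme stays private exactly as long as the two servers never compare notes.

```python
# Two-server XOR PIR sketch: privacy holds iff the servers do not collude.
import secrets

DB = [b"cat-meme-0", b"cat-meme-1", b"cat-meme-2", b"cat-meme-3"]  # equal-length records
REC_LEN = len(DB[0])

def answer(db, query_bits):
    # Server: XOR together the records selected by the query vector.
    out = bytes(REC_LEN)
    for record, bit in zip(db, query_bits):
        if bit:
            out = bytes(a ^ b for a, b in zip(out, record))
    return out

def retrieve(index):
    # Client: uniformly random vector for server 1; the same vector with the
    # target bit flipped for server 2.  Each query on its own is pure noise.
    q1 = [secrets.randbelow(2) for _ in DB]
    q2 = list(q1)
    q2[index] ^= 1
    a1, a2 = answer(DB, q1), answer(DB, q2)
    # The two answers XOR to exactly the target record.
    return bytes(x ^ y for x, y in zip(a1, a2))

assert retrieve(2) == DB[2]  # neither server alone learns which index was read
```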
Awards & closing 15:10–15:20 [video]
Open-ended virtual ice cream 15:20–…
Location: somewhere in cyberspace (TBA)