Important: Registration is free but mandatory. Registration deadline: Oct 16, 2019, 23:59 (ET).

Oct 18, 2019 (Friday) at JP Morgan AI Research, 277 Park Avenue, New York, NY, 10017.

Program

09:30 – 10:00. Introduction/Coffee
10:00 – 10:50. Kobbi Nissim (Georgetown University)
Legal Theorems of Privacy
11:00 – 11:50. Erica Blum (University of Maryland)
Synchronous Consensus with Optimal Asynchronous Fallback Guarantees
12:00 – 14:00. Lunch (not provided)
14:00 – 14:50. Luowen Qian (Boston University)
Adaptively Secure Garbling Schemes for Parallel Computations
15:00 – 15:50. Vitaly Shmatikov (Cornell Tech)
Overlearning Reveals Sensitive Attributes

Registration (very important)

Registration is free but mandatory.
Registration deadline: Oct 16, 2019, 23:59 (ET).
Only registered participants will be allowed to enter.

Venue

Address: JP Morgan AI Research, 277 Park Avenue, New York, NY, 10017.

17th floor, Conference Center, Room 1705.
Visitors must be escorted to the room, so please plan to arrive early.

[Google Maps]

Organizers

Fabrice Benhamouda (Algorand Foundation)
Nicholas Genise (Rutgers)
Tal Rabin (Algorand Foundation)
with the help and support of Antigoni Polychroniadou (JP Morgan AI Research).

Support

Some travel support for NY CryptoDay is provided by DIMACS/Simons Collaboration in Cryptography through NSF grant #CNS-1523467.

Abstracts

  • Legal Theorems of Privacy / Kobbi Nissim (Georgetown University)

    There are significant gaps between legal and technical thinking around data privacy. Technical standards such as differential privacy and k-anonymity are described in mathematical language, whereas legal standards often resort to concepts such as de-identification and anonymization, which they do not define mathematically. As a result, arguments about the adequacy of technical privacy measures for satisfying legal privacy often lack rigor, and their conclusions are uncertain. The uncertainty is exacerbated by a litany of successful attacks on privacy measures thought to meet legal expectations but later shown to fall short of doing so. We ask whether it is possible to introduce mathematical rigor into such analyses, so as to make formal claims and prove “legal theorems” that technical privacy measures meet legal expectations. To that end, we explore some of the gaps between these two very different approaches and present initial strategies for bridging them, considering examples from US and EU law.
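
    For reference, the differential privacy standard mentioned above does have a precise mathematical form (the standard definition, included here for context, not part of the abstract): a randomized mechanism M is ε-differentially private if, for all datasets D and D′ differing in a single record and for every set S of outputs,

      \Pr[M(D) \in S] \le e^{\varepsilon} \cdot \Pr[M(D') \in S]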

  • Synchronous Consensus with Optimal Asynchronous Fallback Guarantees / Erica Blum (University of Maryland)

    In this talk, we will discuss a Byzantine agreement (BA) protocol designed for use cases in which the network may be either synchronous or asynchronous at the time of execution. Protocols designed for synchronous networks are generally insecure if the network in which they run does not ensure synchrony; protocols designed for asynchronous networks are (of course) secure in a synchronous setting as well, but in that case tolerate a lower fraction of faults than would have been possible if synchrony had been assumed from the start. Fix some number of parties n and thresholds t_a, t_s such that 0 < t_a < n/3 ≤ t_s < n/2. We ask whether it is possible (given a public-key infrastructure) to design a single BA protocol that (1) is resilient to any t_s corruptions when run in a synchronous network and (2) remains resilient to t_a faults even if the network happens to be asynchronous. We show matching feasibility and infeasibility results demonstrating that this is possible if and only if t_a + 2t_s < n.

    Joint work with Jonathan Katz and Julian Loss.
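
    To make the bound concrete, here is a small Python sketch (our illustration, not from the talk) that enumerates the (t_a, t_s) pairs satisfying both the stated threshold constraints and the feasibility condition:

      # Enumerate thresholds satisfying 0 < t_a < n/3 <= t_s < n/2
      # together with the feasibility bound t_a + 2*t_s < n.
      def feasible_pairs(n):
          pairs = []
          for t_s in range(n):
              for t_a in range(1, t_s + 1):
                  if 0 < t_a < n / 3 <= t_s < n / 2 and t_a + 2 * t_s < n:
                      pairs.append((t_a, t_s))
          return pairs

      # For n = 10: n/3 <= t_s < n/2 forces t_s = 4, and t_a + 8 < 10
      # then forces t_a = 1.
      print(feasible_pairs(10))  # [(1, 4)]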

  • Adaptively Secure Garbling Schemes for Parallel Computations / Luowen Qian (Boston University)

    Garbling schemes are one of the fundamental techniques in cryptography, and recent works have constructed several adaptively secure garbling schemes. In this talk, we explore constructing adaptive garbling schemes that are efficient for parallel computations. This notion is especially useful when applying adaptive garbling schemes to delegating the evaluation of one-time parallel programs. We construct the first adaptively secure garbling scheme based on standard public-key assumptions for garbling a circuit that simultaneously achieves near-optimal online complexity and preserves parallel efficiency when evaluating the garbled circuit. We take one step further and construct the first adaptively secure garbling scheme for parallel RAM (PRAM) programs under standard assumptions that preserves parallel efficiency. The previous such constructions we are aware of rely on strong assumptions such as indistinguishability obfuscation (Ananth et al., TCC 2016).

    Joint work with Kai-Min Chung.
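
    For background, the sketch below shows classic Yao-style garbling of a single AND gate with point-and-permute (a minimal illustration of garbling schemes in general, not of the adaptive parallel construction from the talk; all names are ours):

      import os, hashlib

      def H(a, b):
          # Hash two wire labels into a 17-byte pad for one table row.
          return hashlib.sha256(a + b).digest()[:17]

      def xor(x, y):
          return bytes(i ^ j for i, j in zip(x, y))

      def wire():
          # Two random labels per wire; the final byte is a random
          # "select bit" used to index rows without leaking the value.
          s = os.urandom(1)[0] & 1
          return (os.urandom(16) + bytes([s]), os.urandom(16) + bytes([s ^ 1]))

      def garble_and_gate():
          a, b, out = wire(), wire(), wire()
          table = [None] * 4
          for va in (0, 1):
              for vb in (0, 1):
                  row = (a[va][-1] << 1) | b[vb][-1]  # select bits pick the row
                  table[row] = xor(H(a[va], b[vb]), out[va & vb])
          return a, b, out, table

      def evaluate(table, la, lb):
          # The evaluator holds one label per input wire and opens one row.
          row = (la[-1] << 1) | lb[-1]
          return xor(table[row], H(la, lb))

      a, b, out, table = garble_and_gate()
      assert evaluate(table, a[1], b[1]) == out[1]  # AND(1, 1) = 1
      assert evaluate(table, a[0], b[1]) == out[0]  # AND(0, 1) = 0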

  • Overlearning Reveals Sensitive Attributes / Vitaly Shmatikov (Cornell Tech)

    “Overlearning” means that a machine learning model trained for a seemingly simple objective implicitly learns to recognize attributes that are (1) not part of the learning objective, and (2) sensitive from a privacy or bias perspective. For example, a binary gender classifier of facial images also learns to recognize races — even races that are not represented in the training data — and identities.

    We demonstrate overlearning in several vision and NLP models and analyze its harmful consequences. First, inference-time representations of an overlearned model reveal sensitive attributes of the input, breaking privacy protections such as model partitioning. Second, an overlearned model can be “re-purposed” for a different, privacy-violating task even in the absence of the original training data.

    We show that overlearning is intrinsic for some tasks and cannot be prevented by censoring unwanted attributes. Finally, we investigate where, when, and why overlearning happens during model training.
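
    To illustrate the kind of representation-probing attack described above, here is a toy synthetic sketch (our own; the dataset, model, and setup are illustrative and not from the paper):

      # Train a model for one task, then probe its hidden representations
      # for a sensitive attribute the task never referenced.
      import numpy as np
      from sklearn.neural_network import MLPClassifier
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(0)
      n = 4000
      sensitive = rng.integers(0, 2, n)  # hypothetical sensitive attribute
      task = rng.integers(0, 2, n)       # the "official" training label
      # Inputs carry signal about both the task and the sensitive attribute.
      X = np.column_stack([
          task + 0.5 * rng.standard_normal(n),
          sensitive + 0.5 * rng.standard_normal(n),
          sensitive * task + 0.5 * rng.standard_normal(n),
      ])

      model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                            random_state=0)
      model.fit(X, task)  # trained only for the task label

      # "Partition" the model: expose the hidden-layer (ReLU) activations.
      hidden = np.maximum(X @ model.coefs_[0] + model.intercepts_[0], 0)

      # An attacker with a few labeled examples probes the representation.
      probe = LogisticRegression().fit(hidden[:2000], sensitive[:2000])
      print("sensitive-attribute accuracy from representations:",
            probe.score(hidden[2000:], sensitive[2000:]))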
