
Important: Registration is free but mandatory. Registration deadline: Jan 10, 2024, 11:59 PM (ET).

Jan 12, 2024 (Friday) at CUNY Graduate Center Skylight Room, 9th Floor, 365 5th Avenue.

Program

09:30 – 10:00. Introduction/Coffee
10:00 – 10:50. Elette Boyle (NTT)
Efficient Secure Computation via ZK on Distributed Data: The Latest Developments
11:00 – 11:50. Eli Goldin (NYU)
Immunizing Backdoored PRGs
12:00 – 14:00. Lunch
14:00 – 14:50. Keegan Ryan (University of California, San Diego)
Passive SSH Key Compromise via Lattices
15:00 – 15:50. Ben Edelman (Harvard)
Watermarks in the Sand: Impossibility of Strong Watermarking for Generative Models

Registration (very important)

Registration is free but mandatory. Registration deadline: Jan 10, 2024, 23:59 (ET). Only registered participants will be allowed to enter.

Venue

Address: CUNY Graduate Center Skylight Room, 9th Floor, 365 5th Avenue.

Organizers

Fabrice Benhamouda (AWS Cryptography)
Nicholas Genise (Apple)
Tal Rabin (AWS Cryptography)
with the help and support of Rosario Gennaro (CUNY).

Support

NY CryptoDay is sponsored by Google.

Abstracts

  • Efficient Secure Computation via ZK on Distributed Data: The Latest Developments / Elette Boyle (NTT)

    TBA


  • Immunizing Backdoored PRGs / Eli Goldin (NYU)

    A backdoored Pseudorandom Generator (PRG) is a PRG which looks pseudorandom to the outside world, but a saboteur can break PRG security by planting a backdoor into a seemingly honest choice of public parameters, pk, for the system. Backdoored PRGs became increasingly important due to revelations about NIST’s backdoored Dual EC PRG, and later results about its practical exploitability.

    Motivated by this, at Eurocrypt’15 Dodis et al. [21] initiated the question of immunizing backdoored PRGs. A k-immunization scheme repeatedly applies a post-processing function to the output of k backdoored PRGs, to render any (unknown) backdoors provably useless. For k=1, [21] showed that no deterministic immunization is possible, but then constructed a “seeded” 1-immunizer, either in the random oracle model or under strong non-falsifiable assumptions. As our first result, we show that no seeded 1-immunization scheme can be black-box reduced to any efficiently falsifiable assumption.

    This motivates studying k-immunizers for k>=2, which have the additional advantage of being deterministic (i.e., “seedless”). Indeed, prior work at CCS’17 [37] and CRYPTO’18 [7] gave supporting evidence that simple k-immunizers might exist, albeit in slightly different settings. Unfortunately, we show that the simple standard-model proposals of [37, 7] (including the XOR function [7]) provably do not work in our setting. On the positive side, we confirm the intuition of [37] that a (seedless) random oracle is a provably secure 2-immunizer. On the negative side, no (seedless) 2-immunization scheme can be black-box reduced to any efficiently falsifiable assumption, at least for a large class of natural 2-immunizers which includes all “cryptographic hash functions.”

    In summary, our results show that k-immunizers occupy a peculiar place in the cryptographic world. While they likely exist, and can be made practical and efficient, it is unlikely one can reduce their security to a “clean” standard-model assumption.
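For intuition, a 2-immunizer simply post-processes the outputs of two (possibly backdoored) PRGs into one string. A minimal sketch of the two candidates discussed in the abstract, with SHA-256 standing in for the random oracle and fixed byte strings standing in for PRG output blocks (both are illustrative assumptions, not the talk's formal model):

```python
import hashlib

def xor_immunizer(x1: bytes, x2: bytes) -> bytes:
    # The XOR candidate from [7]; the abstract notes this provably
    # fails as a 2-immunizer in the talk's setting.
    return bytes(a ^ b for a, b in zip(x1, x2))

def ro_immunizer(x1: bytes, x2: bytes) -> bytes:
    # Hash the concatenated outputs. With a true random oracle in
    # place of SHA-256, this is the construction the talk proves to
    # be a secure (seedless) 2-immunizer.
    return hashlib.sha256(x1 + x2).digest()

# Toy stand-ins for one output block from each of two PRGs.
x1 = hashlib.sha256(b"prg-1 output").digest()
x2 = hashlib.sha256(b"prg-2 output").digest()

print(xor_immunizer(x1, x2).hex())
print(ro_immunizer(x1, x2).hex())
```

Both candidates are seedless: the post-processing uses no randomness of its own, which is exactly what makes the k>=2 regime attractive relative to seeded 1-immunizers.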


  • Passive SSH Key Compromise via Lattices / Keegan Ryan (University of California, San Diego)

    We demonstrate that a passive network attacker can opportunistically obtain private RSA host keys from an SSH server that experiences a naturally arising fault during signature computation. In prior work, this was not believed to be possible for the SSH protocol because the signature included information like the shared Diffie-Hellman secret that would not be available to a passive network observer. We show that for the signature parameters commonly in use for SSH, there is an efficient lattice attack to recover the private key in case of a signature fault. We provide a security analysis of the SSH, IKEv1, and IKEv2 protocols in this scenario, and use our attack to discover hundreds of compromised keys in the wild from several independently vulnerable implementations.
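For background on why a single faulty signature can be fatal, the classic gcd-based RSA-CRT fault attack (Boneh–DeMillo–Lipton) is easy to demonstrate; the talk's lattice attack extends this idea to the SSH setting, where a passive observer does not know everything the simple gcd check needs. A toy sketch with deliberately tiny parameters (real keys are 2048+ bits):

```python
import math

# Tiny RSA parameters for illustration only.
p, q = 1000003, 1000033
n = p * q
e = 65537
d = pow(e, -1, (p - 1) * (q - 1))

def sign_crt(m, fault=False):
    # RSA-CRT signing: exponentiate mod p and mod q, then recombine.
    sp = pow(m, d % (p - 1), p)
    sq = pow(m, d % (q - 1), q)
    if fault:
        sq = (sq + 1) % q  # simulate a hardware fault in the mod-q half
    h = (pow(q, -1, p) * (sp - sq)) % p  # Garner recombination
    return sq + q * h

m = 123456789
assert pow(sign_crt(m), e, n) == m  # a fault-free signature verifies

s_faulty = sign_crt(m, fault=True)
# A faulty CRT signature is still correct mod p but wrong mod q, so
# s^e - m is divisible by p but not by q: the gcd reveals a factor.
recovered = math.gcd(pow(s_faulty, e, n) - m, n)
print(recovered == p)
```

The passive-SSH scenario is harder because the signed value includes data (like the Diffie-Hellman shared secret) that the eavesdropper never sees, which is where the lattice techniques of the talk come in.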


  • Watermarks in the Sand; Impossibility of Strong Watermarking for Generative Models / Ben Edelman (Harvard)

    Watermarking generative models consists of planting a statistical signal (watermark) in a model’s output so that it can be later verified that the output was generated by the given model. A strong watermarking scheme satisfies the property that a computationally bounded attacker cannot erase the watermark without causing significant quality degradation. In this paper, we study the (im)possibility of strong watermarking schemes. We prove that, under well-specified and natural assumptions, strong watermarking is impossible to achieve. This holds even in the private detection algorithm setting, where the watermark insertion and detection algorithms share a secret key, unknown to the attacker. To prove this result, we introduce a generic efficient watermark attack; the attacker is not required to know the private key of the scheme or even which scheme is used. Our attack is based on two assumptions: (1) The attacker has access to a “quality oracle” that can evaluate whether a candidate output is a high-quality response to a prompt, and (2) The attacker has access to a “perturbation oracle” which can modify an output with a nontrivial probability of maintaining quality, and which induces an efficiently mixing random walk on high-quality outputs. We argue that both assumptions can be satisfied in practice by an attacker with weaker computational capabilities than the watermarked model itself, to which the attacker has only black-box access. Furthermore, our assumptions will likely only be easier to satisfy over time as models grow in capabilities and modalities. We demonstrate the feasibility of our attack by instantiating it to attack three existing watermarking schemes for large language models: Kirchenbauer et al. (2023), Kuditipudi et al. (2023), and Zhao et al. (2023). The same attack successfully removes the watermarks planted by all three schemes, with only minor quality degradation.
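The attack described above can be sketched as a generic loop: repeatedly perturb the output, keeping a perturbation only when the quality oracle still accepts it, so that the walk mixes over high-quality outputs and washes out the watermark. The skeleton below instantiates both oracles with hypothetical toy stand-ins (a fixed-length check and synonym swaps), not the paper's actual instantiation:

```python
import random

def remove_watermark(output, quality_ok, perturb, steps=200, rng=None):
    """Random-walk watermark-removal skeleton: accept a perturbation
    only if the candidate still passes the quality oracle."""
    rng = rng or random.Random(0)
    current = output
    for _ in range(steps):
        candidate = perturb(current, rng)
        if quality_ok(candidate):
            current = candidate
    return current

# Toy instantiation on token lists (both oracles are assumptions).
POOL = ["quick", "fast", "speedy", "rapid"]

def quality_ok(tokens):
    # Stand-in quality oracle: any 5-token output counts as high quality.
    return len(tokens) == 5

def perturb(tokens, rng):
    # Stand-in perturbation oracle: swap one token for a "synonym".
    i = rng.randrange(len(tokens))
    new = tokens[:]
    if new[i] in POOL:
        new[i] = rng.choice(POOL)
    return new

watermarked = ["the", "quick", "fox", "ran", "quick"]
cleaned = remove_watermark(watermarked, quality_ok, perturb)
print(cleaned)
```

Note that the attacker code never inspects a watermark key or even knows which scheme is in use; all scheme-specific knowledge lives behind the two oracles, which is the source of the attack's generality.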