# An Appetizer to CRYPTO 2024

CRYPTO is the flagship cryptography conference in North America, the sister conference of Eurocrypt and Asiacrypt. This year’s 44th edition will be held 18-22 August 2024, as always in Santa Barbara, CA. We are proud that three papers co-authored by SandboxAQ researchers have been accepted and will be presented at CRYPTO 2024. Below we explain the main idea of each paper to whet your appetite until the conference.

## On round elimination for special-sound multi-round identification and the generality of the hypercube for MPCitH

*by Andreas Hülsing, David Joseph, Christian Majenz, and Anand Kumar Narayanan*

This work is a collaboration between Anand, Andreas, and David and Christian Majenz from DTU in Denmark. It concerns the efficiency and provable security of post-quantum signature schemes such as our Hypercube-Syndrome-Decoding-in-the-Head (SDitH) signature scheme, which we blogged about previously. More specifically, it is about signature schemes that are built by making (interactive) identification protocols non-interactive and binding them to a message (you may have heard of the Fiat-Shamir transform), and about optimizations of such schemes.

Identification schemes are cryptographic protocols in which a prover tries to convince a verifier that they know a secret corresponding to some public key. To do this, the verifier sends random challenges that define tasks which can only be solved with knowledge of the secret; a prover without the secret will fail with high probability. To ensure that a secret can be used more than once, a run of the identification scheme must not leak any information about the secret. These protocols are used in many places, including remote car keys and electronic door locks. They can also be used to build digital signatures: the challenges are replaced by the output of a hash function that takes all previously exchanged protocol messages and the message to be signed as input.
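To make this concrete, here is a minimal sketch of a Schnorr-style identification protocol and its Fiat-Shamir transform in Python. The parameters and helper names below are ours and purely illustrative; the group is far too small to be secure, and this is not the protocol underlying SDitH.

```python
import hashlib
import secrets

# Toy Schnorr identification over a small prime-order group.
# Tiny illustrative parameters -- NOT secure.
p = 2039          # safe prime: p = 2q + 1
q = 1019          # prime order of the subgroup of squares mod p
g = 4             # generator of that subgroup (4 = 2^2)

# Key generation: secret x, public key y = g^x mod p.
x = secrets.randbelow(q)
y = pow(g, x, p)

def commit():
    """Prover's first move: a random commitment t = g^r."""
    r = secrets.randbelow(q)
    return r, pow(g, r, p)

def respond(r, c):
    """Prover's answer to challenge c; computable only knowing x."""
    return (r + c * x) % q

def verify(t, c, s):
    """Verifier checks g^s == t * y^c (mod p)."""
    return pow(g, s, p) == (t * pow(y, c, p)) % p

# Interactive run: the verifier picks the challenge at random.
r, t = commit()
c = secrets.randbelow(q)
assert verify(t, c, respond(r, c))

# Fiat-Shamir: replace the verifier's challenge by a hash of the
# transcript and the message, turning the protocol into a signature.
def fs_challenge(t, msg):
    h = hashlib.sha256(str(t).encode() + msg).digest()
    return int.from_bytes(h, "big") % q

msg = b"example message"
r, t = commit()
c = fs_challenge(t, msg)
sig = (t, respond(r, c))
assert verify(sig[0], fs_challenge(sig[0], msg), sig[1])
```

Because the challenge is now derived from the commitment and the message, anyone can recompute it and check the signature, but only the holder of `x` could have produced a valid response.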

In this work, we revisit two optimizations that we previously proposed specifically for SDitH and ask whether they can be applied to other signature schemes. First, we look at the hypercube method mentioned above, which boosts the security of an identification scheme more efficiently than the traditional approach of running several instances in parallel. Second, in a previous work which appeared at Asiacrypt ’23, we observed that we can reduce the number of interactions of the SDitH identification scheme before making it fully non-interactive, in a way that allows for an easy security argument. In addition, this “round elimination” enables the application of a known result for arguing security against quantum adversaries (more specifically, security in the quantum-accessible random oracle model, or QROM). While these two optimizations were known to work for SDitH, it was unclear to what extent they generalize to other signature schemes. In this work, we show that the hypercube method applies to a large class of MPC-in-the-Head (MPCitH) signature schemes, of which there are numerous in the ongoing NIST PQC signature on-ramp. More importantly, we show that the round-elimination technique is applicable to an even bigger class of signatures built from identification schemes. The deciding factor is the soundness structure of the identification scheme, i.e., the way in which the different rounds of interaction contribute to detecting a cheating prover.
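As a baseline for what the hypercube method improves on, here is a small Python sketch of traditional parallel repetition: if a cheating prover passes a single run of an identification scheme with probability at most eps (its "soundness error"), then t independent runs are all passed with probability at most eps^t. The numbers below are generic textbook values, not parameters from the paper.

```python
from math import ceil, log2

def repetitions_needed(eps, sec_bits=128):
    """Smallest t such that eps^t <= 2^(-sec_bits), i.e. the number
    of independent parallel runs needed for sec_bits of soundness."""
    return ceil(sec_bits / -log2(eps))

# Example: a challenge space of 256 values where a cheater can
# prepare for exactly one challenge -- soundness error 1/256 per run.
print(repetitions_needed(1 / 256))  # -> 16 runs for 128-bit soundness
print(repetitions_needed(1 / 2))    # -> 128 runs if eps = 1/2
```

Each repetition costs communication and computation, which is why techniques like the hypercube method, which achieve the target soundness more cheaply than naive repetition, matter for signature sizes and speed.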

## Quantum lattice enumeration in limited depth

*by Nina Bindel, Xavier Bonnetain, Marcel Tiepelt and Fernando Virdia*

This work is joint work by Nina, Xavier Bonnetain (Université de Lorraine, CNRS, Inria), Marcel Tiepelt (KASTEL, Karlsruhe Institute of Technology), and Fernando Virdia (NOVA LINCS, DI, FCT, Universidade NOVA de Lisboa) in the area of lattice cryptanalysis, more concretely on quantum lattice enumeration.

One general goal of cryptanalysis is to determine the computational cost of breaking a hardness assumption (such as the ‘Learning with Errors’ problem). This cost is then used to guide the choice of parameters such as the size of cryptographic keys, digital signatures, ciphertexts, etc. Ideally, the parameters are chosen to be sufficiently secure while being as small as possible, resulting in more efficient cryptographic algorithms. One way to do this is to devise efficient algorithms that solve small instances of the hard problem, and then estimate how these algorithms would perform on instances of the size used in modern cryptographic schemes, such as the key encapsulation mechanism ML-KEM selected for standardization by NIST.

Since hardness assumptions used to construct quantum-secure cryptographic algorithms need to be hard, well, against quantum computers, their concrete hardness should be estimated against both efficient classical and quantum algorithms. The paper just accepted to CRYPTO 2024 is about the concrete cost estimation of such a quantum algorithm: a quantum version of a well-known classical algorithm for solving lattice hardness assumptions called ‘lattice enumeration’. Aono, Nguyen, and Shen proposed a quantum version that yields a faster algorithm when the size of the quantum computer is not restricted in any way. We wanted to know what quantum speedup remains on a more realistic quantum computer, namely by estimating the cost of quantum enumeration under the limitations NIST has defined.

As quantum computers large enough to run quantum enumeration even on very small (yet meaningful) instances do not exist yet, our approach is to provide lower bounds that are generous to the adversary, so that we can soundly claim that quantum enumeration costs ‘at least’ a certain number of quantum gates. Of course, there are lower bounds that are obviously correct but not useful; think of the extreme lower bound that at least one quantum operation has to be performed. As such, the challenge in this research project was to find ‘meaningful’ lower bounds.

As a case study, the paper estimates the potential cost of quantum enumeration against Kyber (ML-KEM) and finds that the two larger parameter sets (Kyber-768 and Kyber-1024) likely do not need readjustment even if quantum enumeration is taken into account. While our estimates suggest that quantum speedups may be plausible for the smallest Kyber parameter set, we emphasize that this does not challenge Kyber’s security claims, given how generous to the attacker our bounds are.

## Formally verifying Kyber Episode V: Machine-checked IND-CCA security and correctness of ML-KEM in EasyCrypt

*by José Bacelar Almeida, Santiago Arranz Olmos, Manuel Barbosa, Gilles Barthe, François Dupressoir, Benjamin Grégoire, Vincent Laporte, Jean-Christophe Léchenet, Cameron Low, Tiago Oliveira, Hugo Pacheco, Miguel Quaresma, Peter Schwabe, and Pierre-Yves Strub*

This work is a massive collaborative effort by Tiago and 13 other people from the Formosa Crypto project, which we contribute to. It is the second publication in a bigger effort to produce a fully formally verified ML-KEM implementation. “Formally verified” is a term that has appeared more and more in the context of cryptography in recent years, but with different meanings. In the context of this work, it means the following: the team convinced a theorem prover (EasyCrypt) that a formal specification of ML-KEM achieves security against active attacks (IND-CCA) under given hardness assumptions, and that two implementations of ML-KEM in the Jasmin programming language correctly implement this specification (often referred to as “functional correctness”). Moreover, the team used features of the Jasmin language to prove that the implementations are cryptographically constant-time (i.e., their running time is independent of any secret value).

In summary, the team proved security, functional correctness, and constant-time behavior for two implementations – a reference implementation and an optimized implementation – and had a machine check these proofs. This massively reduces the chance of critical flaws in the implementations. For now, the result only applies to the two Jasmin implementations, and security is only proven in the classical random oracle model (which limits the abilities of quantum adversaries). These limitations are due to constraints of the tools used. However, in future work, these results can hopefully be extended to further implementations via equivalence proofs, and the security proofs lifted to the quantum-accessible random oracle model.
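To illustrate what “cryptographically constant-time” means, here is a Python sketch contrasting an early-exit comparison, whose running time leaks where two byte strings first differ, with an accumulate-then-check version. (Python itself gives no real timing guarantees; tools like Jasmin enforce and prove this property down at the assembly level.)

```python
import hmac

def leaky_equals(a: bytes, b: bytes) -> bool:
    """Early-exit comparison: returns as soon as a byte differs, so
    the running time reveals how long a matching prefix is -- the
    classic timing side channel."""
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def ct_equals(a: bytes, b: bytes) -> bool:
    """Constant-time-style comparison: always inspects every byte
    and accumulates the differences, so the work done does not
    depend on where (or whether) the inputs differ."""
    if len(a) != len(b):
        return False
    diff = 0
    for x, y in zip(a, b):
        diff |= x ^ y
    return diff == 0

secret = b"supersecretmacvalue"
assert ct_equals(secret, b"supersecretmacvalue")
assert not ct_equals(secret, b"supersecretmacvalu0")
# In practice, use the standard library's vetted version:
assert hmac.compare_digest(secret, secret)
```

The same principle, applied systematically to every secret-dependent branch and memory access of an implementation, is what the Jasmin-level proofs establish.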

If this topic sparked your interest and you want to know more about how this is done, why one would do this, what we can already achieve, and what needs further work: Keep an eye on this space! We are planning a series of posts with a more detailed introduction to the topic.

Come discuss these research results with us at CRYPTO 2024 while eating the traditional chocolate-covered strawberries!

- While all three papers are joint works with the respective lists of authors above, this blog post was written solely by Andreas and Nina and as such does not express the opinions of the other collaborators.