Formally-Verified Post-Quantum Cryptography: An Overview
‘Formally verified’ or ‘machine-checked’ is the new gold standard for cryptographic implementations and security proofs. Indeed, the White House recently recognized the relevance of formal methods as a way to ensure safe code, and NIST is holding a workshop this summer on the use of formal methods in security certification. However, the term ‘formally verified’ is broad, with different meanings depending on the context. In this post, we explain these various meanings and why you should care. This is the first post of a series, giving a high-level introduction; in later posts, we will detail different aspects of formal verification.
Cryptography is at the foundation of our digital society. It secures core processes, working as an enabling technology for eCommerce, eGov applications, online social interaction, eHealth, you name it. Cryptography generally does a good job, which makes it easy to forget how badly things can go wrong when it fails. However, looking at companies or standards bodies that used or are using homebrew cryptography may give you an idea. We picked two recent, impactful examples. It was recently shown that Tetra [https://www.tetraburst.com/], the radio standard widely used by European police and security services, had several vulnerabilities allowing, among other things, message injection and decryption by unauthorized parties. A bit older, but potentially more impactful, is the DST-80 algorithm used in remote car keys (among other products), whose weaknesses allowed attackers, among other things, to steal Tesla cars. These are just two of many cases in which homebrew cryptography turned out to be insecure, showing again and again that standardized cryptographic algorithms and protocols are to be preferred over homebrew ones.
But even when using standard cryptography, the more complicated systems get, the easier it is for things to go wrong. Looking at cryptographic protocols like TLS, IPSec, or SSH, for example, there are many ways to get things wrong. The most pressing risks are:
- Designing an insecure protocol or using an insecure cryptographic algorithm. The community tries to prevent this with proofs showing that breaking the protocol (or one of the cryptographic algorithms it uses) is at least as difficult as solving some well-established mathematical problem, such as finding short vectors in a high-dimensional lattice. Turned around, this says that as long as the problem is computationally hard, the protocol or algorithm is secure. But these proofs are written by humans (like one of the authors), and humans (especially this author) make mistakes (but also fix them).
- Implementing an insecure version of a protocol. Even if a specification is proven secure in the above sense (and the proof is correct), this is of little help if an implementation deviates from that specification. Such deviations can be very difficult to catch, especially in optimized implementations, as there are few experts at the intersection of efficient, secure cryptographic implementation and algorithm design. Specifications typically use schoolbook methods for arithmetic operations so that they are easy to understand. In practice, however, we usually use optimized methods that are more involved, and small mistakes, such as mishandling special cases, can lead to massive security issues. Ensuring functional correctness (the code does what was specified) is therefore challenging in practice.
- Introducing new weaknesses. Finally, even if a protocol is mathematically secure and correctly implemented, its implementation may leak secret information, rendering it insecure. The canonical example, which is also the most relevant for software implementations, is timing leakage. If the runtime of an implementation depends on the secrets it processes, an adversary observing its behavior can recover the secret key; this has been demonstrated for TLS, ML-KEM, and lattice-based signatures, amongst others. Similarly, if an implementation has a memory-safety bug, such as an out-of-bounds read, it may even be possible to attack other applications through it. Memory safety was recognized in the White House report as one of the major problems in cybersecurity.
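To make the functional-correctness point concrete, here is a small illustrative sketch (not taken from any standard's reference code): a Barrett reduction modulo q = 3329, the ML-KEM modulus, of the kind optimized implementations use in place of a schoolbook `%` operation, checked against the schoolbook result on edge cases.

```python
# Illustrative sketch: Barrett reduction modulo the ML-KEM modulus q = 3329.
# Optimized implementations replace the schoolbook `a % q` with a
# multiply-and-shift sequence like this one; a small error in the precomputed
# constant or the final conditional subtraction breaks only rare inputs --
# exactly the kind of bug functional-correctness verification rules out.

Q = 3329              # ML-KEM modulus
M = (1 << 32) // Q    # precomputed floor(2^32 / q)

def barrett_reduce(a: int) -> int:
    """Reduce an unsigned 32-bit integer modulo Q without a division."""
    assert 0 <= a < 1 << 32
    qhat = (a * M) >> 32   # quotient estimate, off by at most one
    r = a - qhat * Q       # now 0 <= r < 2*Q
    if r >= Q:             # a single conditional subtraction suffices
        r -= Q
    return r

# Check agreement with the schoolbook operation on edge cases and a sweep.
for a in [0, 1, Q - 1, Q, 2 * Q - 1, 2 * Q, (1 << 32) - 1]:
    assert barrett_reduce(a) == a % Q
for a in range(1 << 16):
    assert barrett_reduce(a) == a % Q
```

Here an exhaustive check over all 2^32 inputs would even be feasible; formal verification tools instead prove such equivalences symbolically for all inputs, which also scales to cases where exhaustive testing does not.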
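As a minimal illustration of the timing-leakage point (a sketch, not production code), consider comparing a received authentication tag against the expected one. An early-exit comparison leaks, through its runtime, how many leading bytes an attacker guessed correctly; accumulating all differences first removes that dependency.

```python
import hmac

# Early-exit comparison: returns as soon as a byte differs, so the runtime
# reveals the length of the longest matching prefix -- enough to recover a
# secret tag byte by byte.
def leaky_eq(a: bytes, b: bytes) -> bool:
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

# Comparison with secret-independent control flow: XOR-accumulate all byte
# differences and test once at the end, so the loop always runs to completion.
def ct_eq(a: bytes, b: bytes) -> bool:
    if len(a) != len(b):
        return False
    diff = 0
    for x, y in zip(a, b):
        diff |= x ^ y
    return diff == 0

# In real Python code, use the standard library's constant-time helper:
assert hmac.compare_digest(b"secret-tag", b"secret-tag")
```

Note that even the second version is not truly constant-time in an interpreted language; the tools discussed below verify this property at the level of compiled code, where such guarantees are meaningful.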
These three risks have been difficult to mitigate even for the cryptography deployed today, but given that these algorithms have been around for a long time, the community has managed to get things mostly right by now. However, this was a process of painful lessons, some with considerable impact. One can easily fill a full lecture with the ways TLS, and its predecessor SSL, have failed (and one of the authors does exactly that in their applied crypto course). With the switch to post-quantum cryptography, we are starting from square one again. While we can learn from past experience, many aspects have changed. If we approach this the same way we did before, chances are we will face many painful lessons again.
The goal of formal verification efforts in cryptography is to address these challenges by using computers to check the correctness of human designs and implementations. Different tools allow us to machine-check cryptographic proofs and verify the security of specifications of cryptographic protocols, to verify the functional correctness of cryptographic implementations with respect to a formal specification, and/or to verify or enforce the memory safety and secret-independent runtime (commonly referred to as constant-time behavior) of implementations. There exists a plethora of tools, all with different strengths and weaknesses. Most focus on one of the above tasks, but many are integrated into suites that aim to cover all aspects. At SandboxAQ, we have started to evaluate different formal verification tools to ensure the security of the algorithms and protocols that we develop. SandboxAQ is part of Formosa Crypto, an effort that maintains a toolchain covering all of the above, with practical applications to post-quantum cryptography. We also recently teamed up with Cryspen, which maintains a toolchain that likewise addresses these tasks and, in particular, can analyze specifications written in Rust.
There are many success stories of formal verification efforts involving SandboxAQ and Cryspen team members. For example, formal methods were used extensively by Cryspen team members and others in the design of TLS 1.3 to ensure the security of the specification. SandboxAQ researchers were involved in a full-fledged verification (covering all of the above steps) of the new post-quantum public-key encryption standard, ML-KEM (previously known as Kyber), as well as in the verification of the security proofs of the new post-quantum digital signature standards SLH-DSA (previously known as SPHINCS+) and XMSS. Cryspen also demonstrated the use of their HAX toolchain to verify the latter two aspects (functional correctness and constant-time behavior) of another ML-KEM implementation. In the next posts, we will go into more detail, discussing how such projects are approached, what the side conditions are, and what guarantees they provide: first for security proofs, then for functional correctness and implementation security.