Today, LayerZero announced their new chain, Zero, which features several technical advances — including a new approach to zero knowledge proofs that decouples transaction execution from verification. It does so with the help of “Jolt inside”.
What is Jolt? Jolt is an open-source, RISC-V zkVM (a zero knowledge virtual machine, or rather, a “succinct” virtual machine) that is fast, secure, and easy to use. It represents a new, state-of-the-art approach to SNARK design that’s based on three years of research & development at a16zcrypto, which we open sourced for anyone to use or develop further. But Jolt is really a story that’s decades, and even millennia, in the making.
Why do zkVMs and SNARK design matter?
Before going into the evolution of SNARK design, it’s worth first talking more about what zkVMs are.
These kinds of virtual machines are colloquially called “zk”VMs, but the property more often at play here is succinctness. While “zero knowledge” is important for privacy preservation, “succinct” means that the proofs are short and fast to verify — they’re two useful but different properties often conflated under one label. (Jolt already has the property of succinctness, and will soon be zero-knowledge too.)
But why do zkVMs matter? zkVMs — and, more broadly, SNARKs, or Succinct Non-interactive Arguments of Knowledge — are an important building block for blockchain scalability, privacy, security, and more. There are countless applications for such proofs, arguments, and zero knowledge (collectively referred to as verifiable computing technologies) in both the crypto industry and beyond.
The industry has taken a complicated approach to building zkVMs until now, due to legacy design architectures and other reasons; more on that below. With Jolt, however, we’ve focused at the outset on a very different approach to SNARK design that allows greater efficiency, usability, and performance.
Simply put, zkVMs are a way to prove you correctly ran a computer program. The advantage of zkVMs over other SNARKs is that they’re developer friendly. By piggybacking on existing computing infrastructure (like the open source LLVM compiler ecosystem), developers can unlock the power of SNARKs while still writing in their programming language of choice, rather than in a domain-specific language (DSL).
This is not unlike much of modern cryptography today — where we have standard, built-out libraries for encryption and digital signatures — which regular developers use daily without having to understand their inner workings. Jolt grants this same level of abstraction to developers: Take existing programs and prove them, without needing to worry about the interaction between the two. This is a necessary condition for any new cryptography to become commodity.
Developers can just do things. With Jolt, developers can take the computer code they’ve already written — without any special expertise around SNARKs — hit a button, and out comes a Jolt proof.
But even with all the advances in Jolt, proving anything moderately complex — like one second’s worth of execution of a single standard CPU core — requires significant computing power. You’d need several GPUs to get a complex proof out in any reasonable amount of time. With Zero, LayerZero ported the Jolt prover to CUDA: Bringing together the highly parallelizable algorithms underlying Jolt, and the parallel hardware of GPUs, to unlock new orders of scaling. LayerZero’s work pushing Jolt to production-grade GPU proving — including collaborating with us to come up with GPU-friendly versions of Jolt’s algorithms — is significant in making zkVMs and proving more scalable.
Open source R&D
Jolt itself is open source, so anyone can use or build upon its novel techniques. Open source is the ultimate multiplier: Sharing work in public allows even more people in the ecosystem to use, re-use, pressure-test, audit, fix, improve, and further innovate with it.
It may seem unusual for a venture capital firm to invest in open source, but the structure of modern research and engineering means that most development happens either inside companies — like the corporate labs of yore or foundation labs today — or inside academia. Our purpose in establishing a16zcrypto research was to build an industrial research lab and engineering team that bridges both worlds: academic theory and industry practice. And as a VC, we’re able to also fund the work no one else can… especially when it’s a contrarian bet.
Supporting a contrarian approach to SNARK design was especially important when it came to Jolt, because it represented a major “paradigm shift” away from previous design approaches. This design evolution was many years in the making.
The story of innovation is often a story about architectural design shifts
To understand the big shift underlying the Jolt approach to SNARK design, we have to begin over 2000 years ago: with the development of formal mathematical proof systems pioneered by the ancient Greeks and later expanded by scholars across the Middle East, Asia, and beyond.
These early proofs — logical deductions written out step-by-step — were written down in formal language or formulas so that anyone could verify them. For example, a mathematician could write a proof down in a “book”, and then another mathematician would read the book one word at a time to verify it. This traditional notion of static, written proofs is captured by the famous complexity class NP of “P vs. NP” fame.
Notably, this traditional approach to proofs was sequential and required turn-taking: It was static, not interactive.
But then fast forward to 1985*, when Shafi Goldwasser, Silvio Micali, and Charles Rackoff introduced the notion of interactive proofs (“IP”). [*It was actually a few years earlier, but the paper was rejected several times before being accepted.] The insight behind this interactive approach to proofs was that if, say, two mathematicians are talking to each other, they don’t need to wait for one to write their proof down and then wait to convince the other that it’s true. Instead, they could ask each other questions in real time; in other words, interacting with each other to get to the truth of the proof.
The immense power of these kinds of interactive proofs — relative to the traditional, static ones pioneered by the ancient Greeks — was not fully recognized until 5 years later, in 1990, when Carsten Lund, Lance Fortnow, Howard Karloff, and Noam Nisan introduced the sum-check protocol: algebraic methods for interactive proof systems. Combined with the follow-on work of Adi Shamir, this quickly led to the foundational result that “IP=PSPACE” — a technical way of capturing the intuitive statement that:
- If the prover and verifier can interact — that is, engage in challenge-response as with traditional proof systems** [**with a ridiculously tiny chance that a lying prover doesn’t get “caught” by a challenge it can’t answer],
- Then vastly more complicated statements can be quickly verified — compared to what’s possible with the traditional, static, written proofs of the ancient Greeks.
In other words: The interaction property gave us a lot of leverage in proof systems. And sum-check is the workhorse that turns that leverage into efficient verification — letting the verifier certify the claimed result, without having to reconstruct the whole computation that’s being proved.
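To make that leverage concrete, here is a toy Rust sketch of the sum-check protocol over a small prime field, for a hard-coded three-variable multilinear polynomial. The verifier’s random coins are passed in explicitly as `challenges`; everything here (the field, the polynomial, the function names) is illustrative, not Jolt’s production implementation.

```rust
// Toy sum-check over the prime field F_p, p = 2^31 - 1 (a Mersenne prime).
// The prover claims that the sum of g over all points of {0,1}^3 equals `claimed`.
// Each round it sends a degree-1 univariate polynomial, represented by its
// values at 0 and 1; the verifier checks consistency and substitutes a challenge.

const P: u64 = (1 << 31) - 1;

fn add(a: u64, b: u64) -> u64 { (a + b) % P }
fn sub(a: u64, b: u64) -> u64 { (a + P - b) % P }
fn mul(a: u64, b: u64) -> u64 { ((a as u128 * b as u128) % P as u128) as u64 }

// Example multilinear polynomial: g(x0,x1,x2) = 2*x0*x1 + x1*x2 + x0 + 3 (mod p).
fn g(x: &[u64]) -> u64 {
    add(add(mul(2, mul(x[0], x[1])), mul(x[1], x[2])), add(x[0], 3))
}

// Sum g over all boolean completions of `prefix` (brute force; the prover
// in a real system computes this far more cleverly).
fn suffix_sum(prefix: &[u64], nvars: usize) -> u64 {
    let rem = nvars - prefix.len();
    let mut total = 0;
    for m in 0..(1u64 << rem) {
        let mut point = prefix.to_vec();
        for j in 0..rem {
            point.push((m >> j) & 1);
        }
        total = add(total, g(&point));
    }
    total
}

// Runs all rounds; `challenges` stands in for the verifier's random coins.
fn sum_check(nvars: usize, claimed: u64, challenges: &[u64]) -> bool {
    let mut claim = claimed;
    let mut prefix: Vec<u64> = Vec::new();
    for round in 0..nvars {
        // Prover's round message: s(0) and s(1), where s(X) is g with the
        // current variable fixed to X and all later variables summed out.
        let s0 = suffix_sum(&[prefix.clone(), vec![0]].concat(), nvars);
        let s1 = suffix_sum(&[prefix.clone(), vec![1]].concat(), nvars);
        // Verifier's consistency check: s(0) + s(1) must match the running claim.
        if add(s0, s1) != claim {
            return false;
        }
        let r = challenges[round];
        // New running claim: s(r) = s(0) + r * (s(1) - s(0)), since s has degree 1.
        claim = add(s0, mul(r, sub(s1, s0)));
        prefix.push(r);
    }
    // Final check: a single evaluation of g at the random point.
    g(&prefix) == claim
}

fn main() {
    let total = suffix_sum(&[], 3); // the honest sum over the cube: 34 for this g
    assert!(sum_check(3, total, &[5, 7, 2]));
    assert!(!sum_check(3, total + 1, &[5, 7, 2])); // a false claim is caught
    println!("sum over the cube = {total}");
}
```

Note the asymmetry that makes this powerful: the verifier does constant work per round plus one evaluation of g at the end, rather than recomputing the sum over the whole hypercube.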
A few years later, Joe Kilian proposed constructing succinct zero-knowledge arguments starting from probabilistically checkable proofs (PCPs). In the PCP view of proofs, a prover (think ancient Greek mathematicians, except now they’re computers) writes down an ordinary proof in a “book”, but in a highly redundant format. Remarkably, this redundancy lets the verifier avoid having to read the entire book: The verifier can sample just a constant number of random locations — like three “words” in the book — and still determine, with high confidence, whether the whole proof is valid.
The catch though is that PCP proofs are long, even though the verification is cheap.
So Kilian showed how to combine PCPs with cryptography, allowing the prover to “commit” to this long book and then reveal only the few sampled words, together with a short cryptographic authentication. The final proof in Kilian’s protocol is effectively just those few words (plus some cryptographic authentication data) — yet they’re enough to convince the verifier that the whole book checks out.
These proofs were all still interactive. Then, Micali showed how to make Kilian’s PCP-based interactive argument non-interactive by applying the Fiat-Shamir transformation. Roughly speaking, Fiat-Shamir “hashes away” the verifier’s random challenges, letting the prover generate them on its own and output the whole proof in one shot.
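The shape of the transformation is simple to sketch: replace each of the verifier’s random challenges with a hash of the transcript so far. A minimal Rust illustration follows, using the standard library’s `DefaultHasher` purely as a stand-in; a real deployment would use a cryptographic hash such as SHA-256.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

const P: u64 = (1 << 31) - 1; // a small prime modulus for the challenges

// Fiat-Shamir sketch: derive the "verifier's" challenge deterministically
// from everything the prover has sent so far, instead of waiting for a reply.
// DefaultHasher is NOT cryptographically secure; it only illustrates the shape.
fn fiat_shamir_challenge(transcript: &[u64]) -> u64 {
    let mut h = DefaultHasher::new();
    transcript.hash(&mut h);
    h.finish() % P
}

fn main() {
    // The prover appends each round's messages, then hashes to get the
    // next challenge, producing the entire proof in one shot.
    let transcript: Vec<u64> = vec![13, 21, 16, 37];
    let r = fiat_shamir_challenge(&transcript);
    // Deterministic: anyone replaying the transcript derives the same
    // challenge, which is what lets a non-interactive proof be checked.
    assert_eq!(r, fiat_shamir_challenge(&transcript));
    println!("challenge = {r}");
}
```

Because the challenge depends on the prover’s earlier messages, the prover cannot pick its messages after seeing the challenge — the hash plays the role the verifier’s live randomness played in the interactive protocol.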
Legacy architectures linger
Across the history and evolution of proof systems so far we’ve gone from static, to interactive, to probabilistic and non-interactive (PCPs), back to interactive (cf Kilian), back to non-interactive again (cf Micali). SNARKs enter at the end of this arc: By applying the Fiat-Shamir transformation to Kilian’s interactive argument, Micali obtained what we would now call the first SNARK construction.
But in these early PCP-based SNARKs, the prover’s workload was enormous — taking way too long to compute — making them impractical to deploy.
Yet SNARKs were designed this way for decades. Even when the industry tried moving away from the PCP approach to SNARK design, designers still used related notions (like “linear PCPs” and others) which were just variations on PCP-inspired techniques. And while these approaches did lead to SNARKs with extremely short proofs, they didn’t lead to SNARKs with the fastest possible provers.
SNARK designers were still not going back to the ultimate source — the sum-check protocol — to get faster, more usable provers that were now possible thanks to modern computation.
Taking a step back: Getting to sum-check sooner would have required looking at the history and evolution of SNARKs we’ve outlined above in a non-linear way. In going from (a) interactive proofs → (b) PCPs → (c) succinct interactive arguments → (d) early SNARKs, the industry did the following:
- In moving from (a) interactive proofs → (b) PCPs, the primary challenge was removing interaction from the proof system, while preserving the succinctness of the verification. This led designers to get rid of the sum-check protocol (the interaction).
- But when moving from (b) PCPs → (c) succinct interactive arguments, the interaction came right back in…
- Only to then be removed with the Fiat-Shamir transform, which helped go from (c) succinct interactive arguments → (d) early SNARKs.
- Examining all this linearly from (a) → (b) → (c) → (d) in hindsight, we can clearly see that SNARK designers essentially cut out interaction twice — once in going from (a) → (b), and then again in going from (c) → (d).
- But if we were going to use Fiat-Shamir to get rid of interaction… we should have just skipped the step in the middle (b), of probabilistically checkable proofs, altogether!
Skipping this step (b) in the middle is the key insight behind Jolt’s approach, which went directly from building SNARKs out of interactive proofs — straight to sum-check.

Why didn’t more people move directly to a sum-check-protocol-based design approach much sooner? Early SNARK designers likely didn’t do this because PCPs and SNARKs seem related on the surface, as they both achieve a notion of succinct verification. As for later on, well, architectures — and misconceptions — can linger.
For us, investing significant engineering and research resources into Jolt, a sum-check-based zkVM, was a contrarian bet, because it was going against the decades-long dominant paradigm in SNARKs.
‘Jolt Inside’
The Jolt approach to SNARK design (which itself builds on batch-evaluation and memory-checking arguments like Twist + Shout) is based on interactive proofs and the sum-check protocol.
Now, several years after we started building Jolt, others have begun to embrace the sum-check protocol approach in their designs too. So what are Jolt’s distinguishing features among zkVMs today? Jolt maximally leverages repeated structure in CPU executions. By observing that the “fetch-decode-execute” abstraction at the core of every CPU is amenable to batch-evaluation arguments, Jolt manages to achieve unmatched efficiency with minimal complexity.
Other zkVMs, meanwhile, lean heavily on “pre-compiles” — ASIC-like accelerators for specific subroutines — to achieve reasonable performance. Jolt eschews these pre-compiles since they bring back the downsides of the pre-zkVM approach to SNARK design: Because you need an expert to design these kinds of specialized SNARKs, they’re much more bug-prone, and much less usable by a broader set of developers. With Jolt, we focused on democratizing SNARKs.
The ability to prove correct CPU execution is the exact value proposition of zkVMs as well — and a massive unlock in terms of developer experience — because it allows reusing existing, hardened, general purpose computing infrastructure. The entire world’s computing infrastructure is built to support CPUs, and Jolt squeezes every last ounce of simplicity and performance out of the “structure” inherent to those CPU executions.
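The “repeated structure” at play is easiest to see in code: every cycle of a CPU, physical or virtual, runs the same short loop. Here is a deliberately tiny sketch — an invented three-instruction machine, not RISC-V and not Jolt’s actual VM — showing the uniform fetch-decode-execute shape that a batch-evaluation argument can exploit across all cycles at once.

```rust
// A toy machine: 4 registers and a 3-opcode instruction set.
// Every cycle has the same fetch-decode-execute shape; that uniformity
// across cycles is what a sum-check-based prover can batch over.

#[derive(Clone, Copy)]
enum Op {
    Addi { rd: usize, rs: usize, imm: u64 },   // rd = rs + imm
    Mul { rd: usize, rs1: usize, rs2: usize }, // rd = rs1 * rs2
    Halt,
}

fn run(program: &[Op]) -> [u64; 4] {
    let mut regs = [0u64; 4];
    let mut pc = 0;
    loop {
        let op = program[pc]; // fetch
        match op {            // decode
            // execute:
            Op::Addi { rd, rs, imm } => regs[rd] = regs[rs].wrapping_add(imm),
            Op::Mul { rd, rs1, rs2 } => regs[rd] = regs[rs1].wrapping_mul(regs[rs2]),
            Op::Halt => return regs,
        }
        pc += 1;              // advance to the next cycle
    }
}

fn main() {
    // r1 = 6; r2 = 7; r3 = r1 * r2
    let program = [
        Op::Addi { rd: 1, rs: 0, imm: 6 },
        Op::Addi { rd: 2, rs: 0, imm: 7 },
        Op::Mul { rd: 3, rs1: 1, rs2: 2 },
        Op::Halt,
    ];
    let regs = run(&program);
    assert_eq!(regs[3], 42);
    println!("r3 = {}", regs[3]);
}
```

A zkVM proves that a trace of many such cycles was executed correctly; because every cycle is an instance of the same loop, the proof work can be amortized across all of them.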
Jolt prioritized usability with production-grade performance from the outset: Developers can prove existing programs as-is; no code changes required, even to get fast proving. Rather than forcing teams to refactor applications around “pre-compiles” or special APIs to reach acceptable performance, Jolt keeps original code intact, making it easier to adopt, easier to audit, and cheaper to iterate with.
Achieving that level of “drop-in” compatibility required real engineering: Jolt has integrated ZeroOS from LayerZero, extending the same no-modifications experience to programs that use system calls. ZeroOS keeps you on upstream Rust — no custom forks, no patched runtimes — so teams can go from an existing Rust codebase to proofs of correct execution in minutes.
Importantly, Jolt is not only faster, it is also simpler. While alternative approaches require zkVM designers to specify a circuit for every primitive instruction of the virtual machine, Jolt does not: In Jolt, each primitive instruction can be specified with about ten lines of Rust code (see more here and here). No circuit, just ten lines of code.
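To give a flavor of what “about ten lines of Rust” means — this is a hypothetical sketch of the idea, not Jolt’s actual trait or instruction definitions — the point is that an instruction’s semantics can be written as an ordinary pure function rather than a hand-built circuit:

```rust
// Hypothetical sketch: instruction semantics as plain Rust functions.
// In a circuit-based zkVM, each of these would instead be a hand-designed
// arithmetic circuit; here the behavior is ordinary, directly testable code.

/// RISC-V XOR: bitwise exclusive-or of two 32-bit words.
fn xor(x: u32, y: u32) -> u32 {
    x ^ y
}

/// RISC-V SLTU: set-less-than-unsigned; returns 1 if x < y, else 0.
fn sltu(x: u32, y: u32) -> u32 {
    (x < y) as u32
}

/// RISC-V ADD: wrapping 32-bit addition (overflow is discarded, as on hardware).
fn add(x: u32, y: u32) -> u32 {
    x.wrapping_add(y)
}

fn main() {
    assert_eq!(xor(0b1100, 0b1010), 0b0110);
    assert_eq!(sltu(3, 7), 1);
    assert_eq!(add(u32::MAX, 1), 0);
    println!("instruction semantics check out");
}
```

Code like this can be read, reviewed, and unit-tested by any Rust developer, which is the auditability win over a bespoke circuit per instruction.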
What’s next for Jolt?
We’re already the state of the art on speed. With further optimizations and features, including recursion and zero knowledge — and especially our planned switch from elliptic curve cryptography to lattices — we expect to be another order of magnitude faster later this year, and post-quantum as well.
Jolt makes more applications possible. For blockchains, the scalability and decentralization everyone’s been waiting for becomes much more easily deployable. ZK rollups can just work, without months or years of cryptographic engineering.
But with further advancements in Jolt — making fast and simple zkVMs that work on phones and laptops — developers will be able to unlock more use cases on the client-side and for privacy. Privacy-preserving applications on phones, for instance, can go from unmaintainable and barely runnable, to just working out of the box with ease.
Longer term, these proof systems will become a core part of the world’s digital infrastructure, analogous to encryption and digital signatures. This kind of general purpose cryptographic compression — where anyone can prove that they know gigabytes of data satisfying some property, by sending just a 50-kilobyte proof instead of all of the data itself — is such a powerful primitive, that it’s hard to predict what applications people will come up with for it. The possibilities are endless.
Build with Jolt yourself: https://github.com/a16z/jolt
~QED
Acknowledgements: Justin Thaler, Michael Zhu, Markos Georghiades, Andrew Tretyakov, Noah Citron, Sagar Dhawan, Sam Ragsdale + Eddy Lazzarin & Tim Roughgarden
Editor: Sonal Chokshi
***
The views expressed here are those of the individual AH Capital Management, L.L.C. (“a16z”) personnel quoted and are not the views of a16z or its affiliates. Certain information contained in here has been obtained from third-party sources, including from portfolio companies of funds managed by a16z. While taken from sources believed to be reliable, a16z has not independently verified such information and makes no representations about the current or enduring accuracy of the information or its appropriateness for a given situation. In addition, this content may include third-party advertisements; a16z has not reviewed such advertisements and does not endorse any advertising content contained therein.
You should consult your own advisers as to those matters. References to any securities or digital assets are for illustrative purposes only, and do not constitute an investment recommendation or offer to provide investment advisory services. Furthermore, this content is not directed at nor intended for use by any investors or prospective investors, and may not under any circumstances be relied upon when making a decision to invest in any fund managed by a16z. (An offering to invest in an a16z fund will be made only by the private placement memorandum, subscription agreement, and other relevant documentation of any such fund and should be read in their entirety.) Any investments or portfolio companies mentioned, referred to, or described are not representative of all investments in vehicles managed by a16z, and there can be no assurance that the investments will be profitable or that other investments made in the future will have similar characteristics or results. A list of investments made by funds managed by Andreessen Horowitz (excluding investments for which the issuer has not provided permission for a16z to disclose publicly as well as unannounced investments in publicly traded digital assets) is available at https://a16z.com/investment-list/.
The content speaks only as of the date indicated. Any projections, estimates, forecasts, targets, prospects, and/or opinions expressed in these materials are subject to change without notice and may differ or be contrary to opinions expressed by others. Please see https://a16z.com/disclosures/ for additional important information.