# Approaching the 'lookup singularity': Introducing Lasso and Jolt

*Editor’s Note: In this series, we’re sharing two new innovations that can significantly speed up scaling and building applications in web3: Lasso and Jolt. Together, they represent a fundamentally new approach to SNARK design, improving on the performance of widely deployed toolchains by an order of magnitude or more; providing a better, more accessible developer experience; and making auditing easier.*

*For more on why SNARKs matter, the state of their design today, key concepts to understand, and implementation details for developers and engineers, go read Building on Lasso and Jolt (which also includes an open source implementation). The research papers can be found here (Lasso, Jolt). You can also watch full or short videos explaining Lasso and Jolt, the lookup singularity, and new ways to think about SNARK design at this YouTube playlist.*

A SNARK (*S*uccinct *N*on-interactive *AR*gument of *K*nowledge) is a cryptographic protocol that lets a prover convince an untrusting verifier that it knows a “witness” satisfying some property. A prominent application in web3 is a layer-2 (L2) rollup proving to the layer-1 (L1) blockchain that it knows digital signatures authorizing a series of transactions, so that the signatures themselves do not have to be stored and verified by the L1, thus aiding scalability.

Applications beyond blockchains include fast but untrusted hardware devices proving the validity of all outputs they produce, ensuring that users can trust them. Individuals can prove, in a zero knowledge manner, that trusted authorities have issued them credentials that attest, for instance, that they are old enough to access age-restricted content without revealing their date of birth. Anyone sending encrypted data over a network can prove to administrators that the data complies with network policies, without revealing further details.

While plenty of SNARKs have attractive costs for the *verifier*, SNARKs in general still introduce about six orders of magnitude overhead in *prover* computation. The extra work forced upon the prover is highly parallelizable, yet a million-fold overhead severely limits the range of applications to which SNARKs can be applied. (For additional details, see my previous posts on SNARK prover performance, security, and development history, as well as common misconceptions about this powerful but complicated technology.)

More performant SNARKs would speed up L2s and could also allow builders to unlock applications that haven’t yet been envisioned. That’s why we’re introducing two new, related technologies: **Lasso**, a new lookup argument that dramatically improves prover costs; and **Jolt**, which uses Lasso to offer a new framework for designing SNARKs for so-called zkVMs and for frontend design more generally. Together they improve performance, developer experience, and auditability for SNARK design, and by extension, building in web3.

Our initial implementation of Lasso already demonstrates speedups in excess of 10x over the lookup argument in the popular SNARK toolchain halo2. We expect speedups of around 40x when the Lasso codebase is fully optimized. Jolt contains additional innovations on top of Lasso, and we expect it to achieve similar or better speedups relative to existing zkVMs.

## Lookup arguments, Lasso, and Jolt

SNARK *frontends* are compilers that turn computer programs into circuits that can be ingested by SNARK backends. Circuits are an extremely limited computational model, in which the “primitive operations” available are simply addition and multiplication.

A key tool in modern SNARK design is something called a lookup argument, a protocol that allows an untrusted prover to cryptographically *commit* to a large vector, and then prove that every entry of the vector is contained in some predetermined table. Lookup arguments can help keep circuits small by efficiently dealing with operations that are not naturally computed by a small number of additions and multiplications (see the example of bitwise operations discussed in our companion post).
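To make the contrast concrete, here is a minimal Python sketch (illustrative only, not part of any SNARK library): a bitwise operation is awkward to express with additions and multiplications, but trivial as a single table lookup.

```python
# Illustrative sketch (not part of any SNARK library): why a bitwise
# operation is costly as add/mul constraints but cheap as a lookup.

def xor_via_arithmetic(x: int, y: int, bits: int = 8) -> int:
    """Emulate XOR using only additions and multiplications, the way a
    circuit frontend must: decompose into bits, then recombine."""
    result = 0
    for i in range(bits):
        xi = (x >> i) & 1   # bit decomposition itself also costs
        yi = (y >> i) & 1   # constraints in a real circuit
        # XOR of two bits as an arithmetic expression: xi + yi - 2*xi*yi
        result += (xi + yi - 2 * xi * yi) << i
    return result

# Alternative: a single lookup into a precomputed table of 8-bit pairs.
XOR_TABLE = {(x, y): x ^ y for x in range(256) for y in range(256)}

def xor_via_lookup(x: int, y: int) -> int:
    return XOR_TABLE[(x, y)]

assert xor_via_arithmetic(0b1100, 0b1010) == 0b0110
assert xor_via_lookup(0b1100, 0b1010) == 0b0110
```

The arithmetic version needs several constraints per bit; the lookup version replaces all of that with one table access, which is exactly what a lookup argument lets a prover do cheaply.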

However, “if we can efficiently define circuits using lookup arguments only[,] it will lead to simpler tooling and circuits,” as Barry Whitehat of the Ethereum Foundation outlined last year in a vision that he called the lookup singularity, or the moment when we design circuits that perform only lookups. As Barry wrote, lookup arguments “will become more efficient than polynomial constraints for almost everything with time.”

This vision stands in sharp contrast to how things work today, where developers deploy SNARKs by writing programs in idiosyncratic Domain Specific Languages (which compile the program to polynomial constraints) or by directly hand-coding constraints. This toolchain is effort-intensive and offers a large surface area for security-critical bugs to creep in. Even with complicated tooling and circuits, SNARK performance remains slow, which limits their applicability.

Lasso and Jolt address all three crucial issues: performance, developer experience, and auditability. Together, I believe they realize the vision of a lookup singularity. Lasso and Jolt also compel a major rethinking of much of the accepted wisdom in SNARK design. After covering some necessary background, this post takes a fresh look at some common views regarding SNARK performance, and explains how they should be updated in light of innovations like Lasso and Jolt.

## Background on SNARK design: Why so slow?

Most SNARK backends have the prover cryptographically commit to the value of every gate in the circuit, using something called a *polynomial commitment scheme*. The prover then proves that the committed values indeed correspond to the correct execution of the witness-checking procedure.

I refer to the prover work that comes from the polynomial commitment scheme as the *commitment costs*. SNARKs have additional prover costs that come from outside of the polynomial commitment scheme. But commitment costs are often the bottleneck. This will also be the case for Lasso and Jolt. If one designs a SNARK in which commitment costs are *not* the dominant prover cost, that doesn’t mean that the polynomial commitment scheme is cheap. Rather, it means that the other costs are higher than they should be.

Intuitively, the purpose of commitments is to securely add succinctness to a proof system through cryptographic methods. When the prover commits to a large vector of values, it is roughly as if the prover sends the whole vector to the verifier, just as the trivial proof system sends the whole witness to the verifier. Commitment schemes achieve this *without* forcing the verifier to actually receive the entire witness – meaning the purpose of commitment schemes in SNARK design is to control verifier costs.
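As an intuition pump, here is a hedged Python sketch of one simple vector-commitment scheme, a Merkle tree (Lasso and Jolt use polynomial commitments, but the succinctness intuition is the same): the verifier holds only a 32-byte root, yet the prover can later open any entry with a logarithmic-size proof.

```python
import hashlib

# Hedged sketch of a vector commitment via a Merkle tree (one of many
# possible schemes; assumes a power-of-two number of leaves).

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Commit to a vector: hash leaves, then hash pairs up to one root."""
    layer = [h(x) for x in leaves]
    while len(layer) > 1:
        layer = [h(layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]
    return layer[0]

def merkle_open(leaves, index):
    """Return the sibling hashes needed to verify leaves[index]."""
    layer = [h(x) for x in leaves]
    path = []
    while len(layer) > 1:
        path.append(layer[index ^ 1])          # sibling at this level
        layer = [h(layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]
        index //= 2
    return path

def merkle_verify(root, leaf, index, path):
    """Recompute the root from one leaf and its sibling path."""
    node = h(leaf)
    for sib in path:
        node = h(node + sib) if index % 2 == 0 else h(sib + node)
        index //= 2
    return node == root

vec = [bytes([i]) for i in range(8)]   # commit to an 8-entry vector
root = merkle_root(vec)                # the 32-byte commitment
proof = merkle_open(vec, 5)            # open entry 5
assert merkle_verify(root, vec[5], 5, proof)
assert len(proof) == 3                 # log2(8) sibling hashes
```

The verifier never receives the whole vector, only the root and a short opening proof – which is the sense in which commitments buy succinctness.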

But these cryptographic methods are very expensive for the prover, especially compared to the information-theoretic methods that SNARKs use outside of polynomial commitment schemes. Information-theoretic methods rely only on finite field operations. And one field operation is orders of magnitude faster than the time required to commit to an arbitrary field element.

Depending on which polynomial commitment scheme is used, computing a commitment involves multi-exponentiations (also known as multi-scalar multiplications, or MSMs), or FFTs and Merkle-hashing. Lasso and Jolt can use any polynomial commitment scheme but have particularly attractive costs when instantiated with an MSM-based one, such as IPA/Bulletproofs, KZG/PST, Hyrax, Dory, or Zeromorph.

## Why Lasso and Jolt matter

Lasso is a new lookup argument where the prover commits to fewer and smaller values than prior works. Depending on context, this can lead to a speedup of 20x or more, with a factor of 2x to 4x coming from fewer committed values, and another factor of 10x coming from the fact that, in Lasso, all committed values are small. Unlike many prior lookup arguments, Lasso (and Jolt) also avoid FFTs, which are space-intensive and can be a bottleneck on large instances.

Moreover, Lasso applies even to gigantic tables (say of size 2^{128}), as long as the tables are “structured” (in a precise technical sense). These tables are far too large for anyone to explicitly materialize, but Lasso pays only for table elements that it actually accesses. In particular – and in contrast to prior lookup arguments – if the table is structured, then no party needs to cryptographically commit to all of the values in the table.

There are two different notions of structure that Lasso exploits: *decomposability* and *LDE-structure*. (LDE is short for a technical notion called a *low-degree extension* polynomial.) Decomposability roughly means that a single lookup into the table can be answered by performing a handful of lookups into much smaller tables. This is a more stringent requirement than LDE-structure, but Lasso is especially efficient when applied to decomposable tables.
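A minimal Python sketch of decomposability (the table choice and chunk size are illustrative, not Lasso’s actual parameters): a bitwise-AND table over 32-bit inputs has 2^{64} entries and can never be materialized, yet each lookup into it can be answered by four lookups into a 2^{16}-entry table over 8-bit chunks.

```python
# Hedged sketch of "decomposability": a lookup into a table of size
# 2^64 (bitwise AND over two 32-bit inputs) is answered via four
# lookups into a small, materializable table over 8-bit chunks.

CHUNK_BITS = 8
# Small subtable: AND of two 8-bit values (2^16 entries in total).
SMALL_AND = {(a, b): a & b for a in range(256) for b in range(256)}

def big_and_lookup(x: int, y: int, bits: int = 32) -> int:
    """AND over `bits`-bit inputs using lookups into SMALL_AND only."""
    result = 0
    for i in range(0, bits, CHUNK_BITS):
        xc = (x >> i) & 0xFF              # i-th 8-bit chunk of x
        yc = (y >> i) & 0xFF              # i-th 8-bit chunk of y
        result |= SMALL_AND[(xc, yc)] << i  # recombine chunk results
    return result

assert big_and_lookup(0xDEADBEEF, 0x0F0F0F0F) == (0xDEADBEEF & 0x0F0F0F0F)
```

No one ever writes down the 2^{64}-entry table; only the small chunk tables are touched, which is the structural property Lasso exploits.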

### Jolt

Jolt (*J*ust *O*ne *L*ookup *T*able) is a new frontend that is unlocked by Lasso’s ability to use gigantic lookup tables. Jolt targets virtual machine/CPU abstractions, also known as instruction set architectures (ISAs). SNARKs supporting such an abstraction are known as zkVMs. For example, consider the RISC-V instruction set (including the multiply extension) that the RISC-Zero project also targets. This is a popular open-source ISA developed by the computer architecture community *without* SNARKs in mind.

For each of the RISC-V instructions *f*_{i}, the main idea of Jolt is to create a lookup table that contains the entire evaluation table of *f*_{i}. So if *f*_{i} takes two 32-bit inputs, the table will have 2^{64} entries, whose (*x*, *y*)’th entry is *f*_{i}(*x*, *y*). If we consider instructions on 64-bit rather than 32-bit data types, the table for each instruction will have size 2^{128} rather than 2^{64}.

For essentially every RISC-V instruction, the resulting lookup table is decomposable, and Lasso applies. (A handful of instructions get handled by applying a short sequence of other instructions. For example, division is handled by having the prover commit to a quotient and remainder, and checking that these are provided correctly via one multiplication and one addition.)
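The division trick can be sketched in a few lines of Python (illustrative; real range checks and field arithmetic are elided): the prover supplies a quotient and remainder, and the verifier checks them with one multiplication and one addition.

```python
# Sketch of the division trick described above: instead of a division
# table, the prover commits to quotient q and remainder r, and the
# check uses one multiplication and one addition (plus a range check).

def check_division(x: int, d: int, q: int, r: int) -> bool:
    # the mul+add constraint x == q*d + r, with 0 <= r < d as a range check
    return x == q * d + r and 0 <= r < d

x, d = 1000, 7
q, r = divmod(x, d)                        # honest prover computes these
assert check_division(x, d, q, r)
assert not check_division(x, d, q + 1, r)  # a cheating prover is caught
```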

A “circuit” that runs the RISC-V CPU for *T* steps can then be generated as follows. For each of the *T* steps, the circuit includes some simple logic to figure out what is the primitive RISC-V instruction *f*_{i} to execute at that step, and what are the inputs (*x*, *y*) to that instruction. The circuit then executes *f*_{i} simply by performing a lookup that reveals the (*x*, *y*)’th entry of the associated table. More precisely, Jolt considers a single table consisting of the concatenation of the tables for each instruction – hence the name “Just One Lookup Table”.
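Here is a toy Python sketch of the “just one lookup table” idea, with hypothetical 4-bit instructions standing in for RISC-V so that the concatenated table can actually be materialized:

```python
# Toy sketch of Jolt's idea (not real RISC-V): each instruction's full
# evaluation table over 4-bit operands, concatenated into one table
# keyed by (opcode, x, y). A "step" of the toy VM is then one lookup.

OPS = {0: lambda x, y: (x + y) % 16,   # ADD (mod 2^4)
       1: lambda x, y: x & y,          # AND
       2: lambda x, y: x ^ y}          # XOR

# "Just one lookup table": concatenation of every instruction's table.
TABLE = {(op, x, y): f(x, y)
         for op, f in OPS.items() for x in range(16) for y in range(16)}

def vm_step(opcode: int, x: int, y: int) -> int:
    return TABLE[(opcode, x, y)]       # execute the instruction by lookup

assert vm_step(0, 9, 8) == 1           # ADD: (9 + 8) mod 16
assert vm_step(2, 0b1100, 0b1010) == 0b0110
```

For real 32- or 64-bit operands the concatenated table is far too large to build explicitly, which is exactly where Lasso’s support for gigantic, structured tables comes in.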

## Revisiting accepted wisdom in SNARK design

Lasso and Jolt also upend several pieces of accepted wisdom in SNARK design. Each piece of accepted wisdom is in boldface, followed by how Lasso and Jolt change things.

**#1. SNARKs over large fields are wasteful. Everyone should use FRI, Ligero, Brakedown, or variants because these avoid elliptic curve techniques, which typically work over large fields.**

Field size here roughly corresponds to the size of the numbers that arise in SNARK proofs. Since very big numbers are expensive to add and multiply, the idea that SNARKs over large fields are wasteful implies that we should strive to design protocols in which big numbers never arise. MSM-based commitments (defined below) use elliptic curves, which are typically defined over large fields (of size about 2^{256}), so these have developed a reputation for inferior performance.

My previous post explained that whether it makes sense to use a small field (even when one is an option) is highly application-dependent. Lasso and Jolt go much further. They show that even when using MSM-based commitment schemes, the prover’s work can be made nearly independent of the field size. This is because, for MSM-based commitments, the size of the committed values matters much more than the size of the field in which those values reside.

**Details on MSMs.** A multi-exponentiation of size *n* (also known as an MSM) computes an expression of the form [latex]\prod_{i=1}^n g_i^{x_i}[/latex], where each *g*_{i} is an element of a multiplicative group. The naive multi-exponentiation algorithm does *n* group exponentiations and *n* group multiplications. Each exponentiation can be ~400x slower than a multiplication.

Pippenger’s multi-exponentiation algorithm saves a factor of about log(*n*) over the naive algorithm. This can be well over 10x in practice. Moreover, if all exponents are “small” (say, between 0 and 2^{20}) one saves *another *factor of 10x in multi-exponentiation time. This is analogous to how computing [latex]g_i^{2^{16}}[/latex] is 10x faster than computing [latex]g_i^{2^{160}}[/latex]. The former involves 16 squarings while the latter involves 160.

If all committed values are small, Pippenger’s algorithm only needs (about) *one group multiplication* to commit to a value (see section 4 in this paper for a nice exposition).
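To see where the small-exponent savings come from, here is a minimal Python sketch of left-to-right square-and-multiply (modular exponentiation stands in for group exponentiation; the function name and counting convention are illustrative): the number of squarings tracks the bit-length of the exponent.

```python
# Sketch of why small exponents are cheap: square-and-multiply does one
# squaring per exponent bit after the leading bit, so g^(2^16) costs
# ~16 squarings while g^(2^160) costs ~160.

def pow_count_ops(g: int, e: int, mod: int):
    """Left-to-right square-and-multiply (e >= 1); returns
    (result, squarings, multiplications)."""
    bits = bin(e)[2:]
    result = g % mod                    # most-significant bit handled up front
    squarings = mults = 0
    for bit in bits[1:]:
        result = (result * result) % mod   # one squaring per remaining bit
        squarings += 1
        if bit == "1":
            result = (result * g) % mod    # one multiply per set bit
            mults += 1
    return result, squarings, mults

p = 2**255 - 19  # an illustrative prime modulus
assert pow_count_ops(3, 2**16, p)[1] == 16    # 16 squarings
assert pow_count_ops(3, 2**160, p)[1] == 160  # ~10x more work
assert pow_count_ops(3, 2**16, p)[0] == pow(3, 2**16, p)
```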

**A new property of Lasso and Jolt.** Existing SNARKs have the prover commit to many *random* field elements. For example, the prover in a popular SNARK backend called Plonk commits to about 7 random field elements per circuit gate (and additional non-random ones). These random field elements will be large, even if all values arising in the computation being proved are small.

In contrast, Lasso and Jolt only require the prover to commit to small values (Lasso’s prover also commits to *fewer* values than prior lookup arguments). This dramatically improves the performance of MSM-based schemes. At a minimum, Lasso and Jolt should do away with the notion that the community should abandon MSM-based commitments in contexts where prover performance matters.

**#2: Simpler instruction sets lead to faster zkVMs.**

As long as the evaluation tables of each instruction are decomposable, the (per-instruction) complexity of Jolt depends only on the input size to the instruction. Jolt establishes that surprisingly complicated instructions are decomposable, in particular all of RISC-V.

This counters the widespread belief that simpler virtual machines (zkVMs) necessarily lead to smaller circuits and associated faster provers (per step of the VM). This has been the guiding motivation behind especially simple zkVMs, such as the Cairo VM, that are designed specifically to be SNARK-friendly.

Indeed, Jolt achieves lower commitment costs for the prover than prior SNARKs for simpler VMs. For example, for Cairo VM executions the SNARK prover commits to 51 field elements per step of the VM. Cairo’s deployed SNARK currently works over a 251-bit field (and 63 bits is a hard lower bound on the field size the SNARK can use). Jolt’s prover commits to about 60 field elements per step of the RISC-V CPU (defined over 64-bit data types). After accounting for the fact that the committed field elements are small, the Jolt prover cost is roughly that of committing to 6 random 256-bit field elements.

**#3. Breaking large computations into small pieces comes with no performance loss.**

Based on this view, projects today break any big circuit into tiny pieces, prove each piece separately, and aggregate the results via SNARK recursion. They use this approach to mitigate performance bottlenecks.

For instance, deployed SNARKs today are space intensive. As a representative example, Polygon’s zkEVM requires 250+ GB of space. This is due to applying a SNARK to a “circuit trace” with 2^{23} rows, 1164 columns, and “FRI blowup factor” of 2. This circuit processes a batch of transactions consuming roughly 10 million gas. Other projects use parameters (e.g., field size and FRI blowup factors) that would increase space costs further. Today’s deployed SNARKs also require FFTs, a super-linear operation that can be a bottleneck if the SNARK is applied “all at once” to a large circuit.

The reality is that some SNARKs (such as Lasso and Jolt) exhibit economies of scale (rather than diseconomies of scale as in currently deployed SNARKs). This means that the larger the statement being proven, the *smaller* the prover overhead relative to direct witness checking (i.e., the work required to evaluate the circuit on the witness with no guarantee of correctness). At a technical level, economies of scale come from two places.

- The Pippenger speedup for *n*-sized MSMs: a log(*n*) factor improvement over the naive algorithm. The bigger *n* is, the bigger the improvement.
- In lookup arguments such as Lasso, the prover pays a “one-time” cost that depends on the size of the lookup table, but is independent of the number of values that are looked up. The one-time prover cost is amortized over all lookups into the table. Bigger pieces mean more lookups, which means better amortization.
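A back-of-envelope Python sketch of the amortization point (the cost model and constants are hypothetical, purely to illustrate the shape of the effect):

```python
# Hypothetical cost model: a one-time cost proportional to table size,
# plus a per-lookup cost. The constants are illustrative only.

def prover_cost(table_size: int, num_lookups: int,
                one_time_per_entry: float = 1.0,
                per_lookup: float = 1.0) -> float:
    return table_size * one_time_per_entry + num_lookups * per_lookup

table = 2**16
small = prover_cost(table, 2**10) / 2**10   # cost per lookup, few lookups
large = prover_cost(table, 2**20) / 2**20   # cost per lookup, many lookups
assert large < small   # bigger pieces amortize the one-time cost better
```

Under this toy model the per-lookup cost drops from ~65 units to ~1.06 as the number of lookups grows, which is the sense in which larger pieces yield economies of scale.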

The prevailing approach to handling big circuits today is to break things into the smallest pieces possible. The main constraint on the size of each piece is that they can’t be so small that recursively aggregating proofs becomes a prover bottleneck.

Lasso and Jolt suggest an essentially opposite approach. One should use SNARKs that exhibit economies of scale. Then break large computations into the *largest *pieces possible, and recurse on the results. The main constraint on the size of each piece is prover space, which grows as the pieces get bigger.

**#4: High-degree constraints are necessary for efficient SNARKs.**

Jolt uses R1CS as its intermediate representation. There is no benefit in Jolt to using “high-degree” or “custom” constraints. Much of the prover cost in Jolt lies within the lookup argument, Lasso, rather than in proving satisfiability of the resulting constraint system that takes the lookups for granted.

Jolt is general purpose, so this approach comes with no loss of generality. It thereby counters the intense focus developers have placed on designing high-degree constraints (often specified by hand) in an effort to squeeze improved performance out of SNARKs that are popular today.

Of course, some contexts may still benefit from high-degree or custom constraints. An important example is the Minroot VDF, for which degree-5 constraints can cut commitment costs by a factor of about 3x.

**#5. Sparse polynomial commitment schemes are expensive and should be avoided if possible.**

This is the primary criticism leveled at a recently-introduced constraint system called CCS and SNARKs that support it – (Super-)Spartan and (Super-)Marlin. As discussed in my previous post, CCS is a clean generalization of all of the constraint systems that are popular in practice today.

This criticism is unfounded. Indeed, the technical core of Lasso and Jolt is the sparse polynomial commitment scheme in Spartan (see section 7), called Spark. Spark itself is a generic transformation of *any *(non-sparse) polynomial commitment scheme into one that supports sparse polynomials.

Lasso optimizes and extends Spark to ensure that the prover only commits to “small” values, but the criticism is unjustified even without these optimizations. Indeed, Spartan’s prover commits to fewer (random) field elements than SNARKs such as Plonk that avoid sparse polynomial commitments.

And as discussed in my previous post, Spartan’s approach has additional performance benefits, especially for circuits with repeated structure. For these circuits, addition gates do not increase the Spartan prover’s cryptographic work. That work grows only with the number of variables in a given constraint system, not with the number of constraints. And unlike with Plonk, there is no need for the Spartan prover to commit to multiple different “copies” of the same variable.

***

We are optimistic that Lasso and Jolt will substantially change the way SNARKs are designed, leading to improved performance and auditability. Subsequent posts in this series will dive even deeper into how Lasso and Jolt work. In particular, I’ll explain their use of the sum-check protocol, a key tool with a borderline-miraculous ability to minimize commitment costs for the prover. As we continue to optimize Lasso and build Jolt, we welcome feedback from the community. You can reach me here, where I’ll also announce future work.

***

*Acknowledgments: Justin Thaler developed Lasso and Jolt together with his colleagues **Srinath Setty** (Microsoft Research), **Riad Wahby** (Carnegie Mellon University), and **Arasu Arun** (New York University); the Lasso implementation is by a16z crypto engineers **Sam Ragsdale** and **Michael Zhu**.*

***

Justin Thaler is Research Partner at a16z and an Associate Professor in the Department of Computer Science at Georgetown University. His research interests include verifiable computing, complexity theory, and algorithms for massive data sets.

***

*The views expressed here are those of the individual AH Capital Management, L.L.C. (“a16z”) personnel quoted and are not the views of a16z or its affiliates. Certain information contained in here has been obtained from third-party sources, including from portfolio companies of funds managed by a16z. While taken from sources believed to be reliable, a16z has not independently verified such information and makes no representations about the current or enduring accuracy of the information or its appropriateness for a given situation. In addition, this content may include third-party advertisements; a16z has not reviewed such advertisements and does not endorse any advertising content contained therein.*

*This content is provided for informational purposes only, and should not be relied upon as legal, business, investment, or tax advice. You should consult your own advisers as to those matters. References to any securities or digital assets are for illustrative purposes only, and do not constitute an investment recommendation or offer to provide investment advisory services. Furthermore, this content is not directed at nor intended for use by any investors or prospective investors, and may not under any circumstances be relied upon when making a decision to invest in any fund managed by a16z. (An offering to invest in an a16z fund will be made only by the private placement memorandum, subscription agreement, and other relevant documentation of any such fund and should be read in their entirety.) *

*Charts and graphs provided within are for informational purposes solely and should not be relied upon when making any investment decision. Past performance is not indicative of future results. The content speaks only as of the date indicated. Any projections, estimates, forecasts, targets, prospects, and/or opinions expressed in these materials are subject to change without notice and may differ or be contrary to opinions expressed by others. Please see https://a16z.com/disclosures for additional important information.*