Introducing the Nakamoto Challenge: Addressing the Toughest Problems in Crypto

Guy Wuollet

We’re fortunate to have a broad view across the crypto landscape and the opportunity to talk with the best builders, founders, and entrepreneurs. As a result, we hear about many interesting problems that the smartest people are working on. Some are problems that everyone has but that are just too challenging to solve; others are poorly documented and exist only as tribal knowledge.

We’ve made a list of hard problems and open questions that we believe can and should be solved to help unlock the future of crypto. Like important nodes in a tech tree, solving each problem below would unblock the many builders and developers who create and support decentralized networks. In honor of the whitepaper author who started it all, we’re calling this the Nakamoto Challenge.

Last week we announced that applications are open until October 20th for our next cohort of Crypto Startup School. We’re looking for founders who want to take big, bold swings in crypto and are hellbent on building something transformational. We don’t expect our Nakamoto Challenge to result in fully-formed solutions to these thorny problems, but we do want to activate the smartest entrepreneurs to propose and start building creative new paths to get there, which would advance the whole crypto ecosystem. The person or company with the best answer to each Nakamoto Challenge question will be fast-tracked to the interview stage of our Crypto Startup School application process.

If you have a compelling answer to one or more of our questions, please email them to us at cryptostartupschool@a16z.com.

Happy hunting! 

The Limits of Atomic Composability and Shared Sequencing

Problem Statement: Perhaps the most-talked-about path to scaling blockchains today is deploying many rollups on Ethereum. While this approach bootstraps security and decentralization, many disconnected rollups will fracture composability. We believe atomic composability – the ability to send a transaction A that finalizes if, and only if, transaction B is finalized – is crucial.

Please describe the limits of composability between rollups on Ethereum. Ideally, a solution would propose a formal model of rollups and an impossibility result.
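To pin down the property in question, here is a minimal sketch (illustrative Rust with hypothetical types; no real shared sequencer exposes this exact API) of what a shared sequencer can and cannot promise: a bundle of transactions destined for different rollups is sequenced atomically, so either every transaction lands in its rollup’s stream or none do.

```rust
use std::collections::HashMap;

// Minimal model of cross-rollup atomic inclusion via a shared sequencer.
// All names are illustrative, not any real sequencer's API.

struct Tx {
    rollup_id: u32,
    payload: Vec<u8>,
}

/// A bundle whose transactions must land together across rollups.
struct AtomicBundle {
    txs: Vec<Tx>,
}

#[derive(Default)]
struct SharedSequencer {
    /// One ordered transaction stream per rollup.
    streams: HashMap<u32, Vec<Tx>>,
}

impl SharedSequencer {
    /// Sequence a bundle atomically: either every transaction in the
    /// bundle is appended to its rollup's stream, or none are.
    fn sequence(&mut self, bundle: AtomicBundle, valid: impl Fn(&Tx) -> bool) -> bool {
        if !bundle.txs.iter().all(|tx| valid(tx)) {
            return false; // reject the whole bundle
        }
        for tx in bundle.txs {
            self.streams.entry(tx.rollup_id).or_default().push(tx);
        }
        true
    }
}
```

Note that this toy model guarantees only atomic inclusion. Atomic execution – A takes effect if, and only if, B takes effect – additionally depends on each rollup’s state at execution time, which the sequencer cannot know without executing both chains; this gap is where we’d expect a formal model to locate its impossibility results.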

Relevant Reading:

Shared Validity Sequencing

The Espresso Sequencer

Shared Sequencing: Defragmenting the L2 Rollup Ecosystem

Optimism Superchain and Superchain Explainer

zkSync Hyperchains and Hyperchains documentation

Rollups aren’t real

DePIN Verification

Problem Statement: 

Decentralized Physical Infrastructure Networks (DePIN) represent a class of blockchain applications dealing with physical infrastructure. Whereas smart contract platforms and payments can use classical consensus or validity proofs for trustless computation, DePIN projects often can’t due to scalability constraints and the oracle problem of verifying physical sensor data. 

Current hardware-based approaches to verification include embedding a public/private key pair at the time of manufacture, or building custom hardware with a secure element like a trusted execution environment. Unfortunately, embedding a key pair means that only devices manufactured by certain parties can join the network, adding a layer of permissioning, while trusted execution environments require application-specific hardware and are often vulnerable to attacks.

As existing software approaches like consensus and validity proofs aren’t feasible, and existing hardware approaches have significant downsides, we’re excited about new potential software-based approaches to verification. Some projects have explored the idea of random sampling as a measurement method to ensure that rational participants in a DePIN network are behaving in accordance with the protocol.

An early outline of a random sampling approach to verification usually involves the network issuing measurement requests to each provider/validator on the network. If a measurement request is correctly served, the provider receives a larger reward, akin to a block reward. As long as the provider can’t distinguish a measurement request from a normal request, it is incentivized to respond correctly to every request.
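To sketch the incentive argument (our notation, a deliberate simplification rather than a model drawn from any of the papers below): suppose a fraction $p$ of incoming requests are hidden measurement requests, a correctly served measurement pays reward $R$, serving any single request costs $c$, and measurement rewards are the only revenue we count. A provider that serves each request with probability $q$ earns, in expectation per request,

$$u(q) = q\,(pR - c),$$

which is maximized at $q = 1$ exactly when $pR > c$: the sampling rate times the reward must cover the cost of honest service. Note this only addresses laziness; it does not by itself rule out self-dealing or maliciously incorrect responses.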

Without verification, many DePIN networks fall victim to three common incentive challenges: 

  • Self-dealing: providers request services from themselves and collect a block reward or service payment from the network. If providers receive a larger payment than users make – often the case because of early subsidies or block rewards – then it’s profitable to buy service from yourself as a service provider. 
  • Lazy providers: providers commit to serving client requests but simply don’t respond, or respond with a lower quality of service than they committed to. 
  • Malicious providers: a provider is willing to lose money to convince a client of a malicious response. As currently outlined, random sampling does the worst job of addressing malicious providers, and a much better job of ameliorating self-dealing and laziness. 

Please propose a generalized solution to verification for DePIN projects, or a specific solution for a sub-category of DePIN projects (decentralized wireless (DeWi), decentralized energy, decentralized ridesharing/delivery, etc.).

Relevant Reading: 

Engineering Filecoin’s economy

Orchid Whitepaper

Helium Whitepaper

Helium Proof of Coverage

Nym whitepaper

Jolt + Lasso Problem

Problem Statement: 

SNARK virtual machines (VMs) enable highly scalable verifiable computation for decentralized systems such as blockchains. Jolt is a new model for building SNARK VMs on top of Lasso, a fast lookup argument. We believe Jolt will be the most efficient way to build custom SNARK VMs in the near future. We released a sample implementation of Lasso earlier this year and are targeting a full release of Jolt later this year.

The efficiency of modern non-interactive proof systems depends on the efficiency of their polynomial commitment schemes. Lasso builds on a different lineage of SNARKs than the majority of those in production today: these sumcheck-based SNARKs depend on multilinear polynomial commitment schemes (PCS) rather than univariate ones. As a result, less analysis has gone into the efficiency properties (for both the prover and the verifier) of multilinear polynomial commitment schemes. Section 2.2 of the Lasso paper briefly describes these different PCS.
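For readers less familiar with the distinction: a univariate PCS commits to a polynomial $f(X)$ of bounded degree, while a multilinear PCS commits to the multilinear extension $\tilde{f}$ of a function $f : \{0,1\}^n \to \mathbb{F}$ – the unique polynomial of degree at most one in each variable that agrees with $f$ on the hypercube:

$$\tilde{f}(r_1, \dots, r_n) = \sum_{b \in \{0,1\}^n} f(b) \prod_{i=1}^{n} \bigl( r_i b_i + (1 - r_i)(1 - b_i) \bigr).$$

Sumcheck-based provers ultimately open $\tilde{f}$ at a single random point $r \in \mathbb{F}^n$, so the cost of committing and of producing that one evaluation proof dominates the PCS’s contribution to the whole system’s efficiency.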

Please expand on this section to provide a comprehensive analysis (theoretical and/or empirical) of 3-5 different polynomial commitment schemes and their cost in the context of verification for decentralized systems. We’re interested in both the cost profiles of the prover and verifier directly, as well as the cost of recursively verifying Jolt proofs within existing SNARK schemes, especially those with EVM-compatible verifiers.

Specifically, please detail:

  • Prover compute cost
  • Verifier compute cost
  • Proof size
  • Recursive verifier compute cost
  • Recursive verifier proof size
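To make these cost categories concrete, here is a minimal sketch of a multilinear PCS interface (a hypothetical Rust trait, loosely modeled on how such schemes are typically structured; not the actual Lasso API), annotated with where each cost arises:

```rust
/// Hypothetical multilinear polynomial commitment scheme interface.
/// Names and signatures are illustrative, not the Lasso/Jolt API.
trait MultilinearPCS {
    type Scalar;
    type Commitment; // its size contributes to proof size
    type Proof;      // "proof size" in the list above

    /// Commit to a multilinear polynomial given by its evaluations over
    /// the boolean hypercube. Together with `open`, this dominates
    /// *prover compute cost*.
    fn commit(evals: &[Self::Scalar]) -> Self::Commitment;

    /// Prove that the committed polynomial evaluates to the returned
    /// value at `point`.
    fn open(evals: &[Self::Scalar], point: &[Self::Scalar]) -> (Self::Scalar, Self::Proof);

    /// Check an opening: *verifier compute cost*. When a Jolt proof is
    /// verified inside another SNARK, this check is what the recursive
    /// circuit must encode, so its arithmetization drives *recursive
    /// verifier compute cost* and *recursive verifier proof size*.
    fn verify(
        commitment: &Self::Commitment,
        point: &[Self::Scalar],
        value: &Self::Scalar,
        proof: &Self::Proof,
    ) -> bool;
}
```

Different instantiations (e.g., curve-based vs. hashing-based schemes) trade these costs against one another, which is exactly the comparison we’re asking for.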

Relevant Reading:

Lasso Paper

Jolt Paper 

Lasso Repo

Introducing Lasso and Jolt

Understanding Lasso and Jolt

Compliant Programmable Privacy

Problem Statement: 

While most smart contracts and blockchains today are fully transparent, we deeply believe privacy is essential to fully realizing blockchain’s potential as a social coordination tool for building decentralized networks. It’s become apparent that achieving privacy is increasingly complex, and that private smart contract or payments protocols may need to factor in KYC, compliance, or illicit-finance and sanctions-screening features to enable users in different jurisdictions to participate and to limit developers’ exposure to legal risk. Current approaches include deposit delays and deposit and withdrawal screening. Existing approaches are made even more complicated by fully programmable smart contract platforms, where any developer can deploy their own bridge.

Please provide suggested compliance solution(s) to address illicit finance mitigation for privacy-enabling and programmable smart contract platforms. A solution should eliminate legal and regulatory risks to the greatest extent possible, while maintaining privacy and trustlessness. 
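One direction from the reading below is the Privacy Pools idea of proof-carrying disclosures: a withdrawal proves in zero knowledge that it spends some deposit from the pool and that the same deposit belongs to a curated association set of deposits believed lawful, without revealing which deposit it is. A minimal sketch (illustrative Rust with hypothetical types; real constructions differ in detail):

```rust
// Illustrative sketch of a proof-carrying-disclosure withdrawal, in the
// spirit of Privacy Pools. All types and checks are hypothetical.

struct MerkleRoot([u8; 32]);
struct ZkProof(Vec<u8>); // opaque proof bytes

struct Withdrawal {
    nullifier: [u8; 32],          // prevents double-spends; unlinkable to the deposit
    deposit_root: MerkleRoot,     // root over all deposits in the pool
    association_root: MerkleRoot, // root over a curated set of "clean" deposits
    membership_proof: ZkProof,    // the spent note is in the deposit tree
    association_proof: ZkProof,   // the *same* note is in the association set
}

/// The verifying contract learns only that some deposit in the association
/// set is being withdrawn -- never which one -- preserving privacy while
/// still giving counterparties a compliance signal.
fn verify_withdrawal(w: &Withdrawal, zk_verify: impl Fn(&ZkProof) -> bool) -> bool {
    zk_verify(&w.membership_proof) && zk_verify(&w.association_proof)
}
```

Open questions include who curates association sets, how the approach extends to fully programmable platforms (arbitrary contracts and bridges, not just a fixed pool), and how to avoid reintroducing a trusted gatekeeper.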

Relevant Reading:

Achieving Crypto Privacy and Regulatory Compliance

Privacy-Protecting Regulatory Solutions Using Zero-Knowledge Proofs: Full Paper

Derecho: Privacy Pools with Proof-Carrying Disclosures

Privacy Pools

Blockchain Privacy and Regulatory Compliance: Towards a Practical Equilibrium

Configurable Asset Privacy for Ethereum (CAPE)

Optimal LVR Mitigation

Problem Statement:

Loss vs. rebalancing (LVR, pronounced ‘lever’) was proposed in a 2022 paper as a way of modeling the adverse selection costs borne by liquidity providers on constant function market maker decentralized exchanges (CFMM DEXs). Current work focuses on finding an optimal way to mitigate LVR in DEXs without using a price oracle. 
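For intuition, the paper’s headline result (restated in our notation; see the reading below for the precise assumptions, chiefly a geometric Brownian price with volatility $\sigma$ and continuous arbitrage) is that LVR accrues at a deterministic instantaneous rate

$$\ell(\sigma, P) = \frac{\sigma^2 P^2}{2}\,\bigl|\,x^{*\prime}(P)\,\bigr|,$$

where $x^{*}(P)$ is the CFMM’s demand curve, i.e., its holdings of the risky asset as a function of price. For a constant-product pool this works out to $\sigma^2/8$ times pool value: at 5% daily volatility, LPs lose roughly $0.05^2/8 \approx 3$ basis points of pool value per day to arbitrageurs, independent of the price path.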

Please describe the potential mitigations to LVR and argue why your proposed solution is better than all known alternatives. 

Relevant Reading: 

Loss vs. Rebalancing Paper

LVR: Quantifying the Cost of Providing Liquidity to Automated Market Makers

SBC ‘22 LVR Talk by Tim Roughgarden

Automated Market Making and Loss-Versus-Rebalancing

Designing the MEV Transaction Supply Chain

Problem Statement: 

Assuming you could start from scratch, what is the optimal design of the miner extractable value (MEV) transaction supply chain? The process today is most naturally separated into distinct roles for searchers, builders, and proposers. What are the economic tradeoffs for maintaining these as separate roles versus having them consolidate? Are there new roles that would be beneficial to introduce? What are the optimal mechanisms to mediate how these different parties interact? Can the mechanisms mediating how the MEV supply chain functions be purely economic or are there components that require cryptographic solutions/trusted components?
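As a baseline for comparison, here is a minimal model of the status quo pipeline under proposer-builder separation (illustrative Rust with our own names and a toy bidding rule; not any real relay or builder API): searchers emit bundles, builders pack bundles into blocks and bid for the slot, and the proposer takes the highest bid, today typically without seeing block contents.

```rust
// Toy model of today's MEV supply chain under proposer-builder separation.
// All names and the bidding rule are illustrative.

struct Bundle { txs: Vec<Vec<u8>>, tip: u64 }      // searcher output
struct BlockBid { bundles: Vec<Bundle>, bid: u64 } // builder output

/// Builders compete by packing bundles and bidding for the slot.
fn build_block(mut bundles: Vec<Bundle>) -> BlockBid {
    bundles.sort_by(|a, b| b.tip.cmp(&a.tip)); // greedy packing by tip
    let value: u64 = bundles.iter().map(|b| b.tip).sum();
    BlockBid { bundles, bid: value / 2 }       // toy rule: bid half, keep margin
}

/// The proposer's role reduces to a sealed-bid auction over full blocks,
/// mediated in practice by a trusted relay.
fn propose(bids: Vec<BlockBid>) -> Option<BlockBid> {
    bids.into_iter().max_by_key(|b| b.bid)
}
```

A redesign might merge or split any of these roles, insert new ones (e.g., orderflow auctions upstream of searchers), or replace the block auction with a different mechanism; the questions that follow apply to whatever pipeline you propose.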

The notion of what “optimal” means is intentionally left vague. Argue for what metrics are the most important when evaluating different mechanisms. Do we require strict collusion resistance between any groups of agents throughout the supply chain? Do we only require collusion resistance between agents at the same level of the supply chain? Is it enough that the mechanism’s properties hold in equilibrium or is it important that all parties have dominant strategies? On the other hand, what are lower bounds for how “optimal” the transaction supply chain can be? Are there certain conditions under which it is impossible to achieve all the “optimal” properties we might want?

This problem is left open to interpretation. Feel free to address any of the questions above or provide your own direction towards designing mechanisms for the transaction supply chain.

Relevant Reading:

Credible Auctions: A Trilemma 

Foundations of Transaction Fee Mechanism Design

Transaction Fee Mechanism Design with Active Block Producers

The Centralizing Effects of Private Orderflow on PBS 

Contingent Fees in Orderflow Auctions

Time is Money: Strategic Timing Games in Proof-of-Stake Protocols

The Specter (and Spectra) of Miner Extractable Value

Why Enshrine Proposer Builder Separation

The Future of MEV is SUAVE

Infinite Games

Leveraging Blockchain For Deepfake Protection

Problem Statement: The rise of deepfakes – synthetic videos, photos, or audio recordings produced by artificial intelligence that can convincingly replace a person’s likeness and voice, enabling misuse in misinformation campaigns, fraud, and other malicious activities – has been a common topic of conversation recently. While various deepfake detection methods are being researched, the challenge remains to provide a verifiable and trustless way to ensure the authenticity of digital content.

Blockchains and smart contracts present a promising avenue to counter this issue. By leveraging the immutable nature of blockchains and the automated execution of smart contracts, it’s possible to create a system that verifies and validates genuine content and differentiates it from tampered or deepfaked versions.

Your task is to devise a system that enables viewers or platforms to verify the authenticity of videos, voice recordings, or photos. This may (but need not) include a reputation system that rewards or penalizes based on validation results – e.g., rewarding creators for genuine content or flagging tampered content. Consider the scalability, privacy, and efficiency of your proposed system, especially when large video files are involved. Your solution should minimize computational and storage overheads and should be feasible for widespread adoption.

Key challenges include addressing the re-recording attack vector (if someone records a screen displaying a video, this secondary recording might bypass naive authenticity checks) as well as allowing for legitimate changes (cropping, shortening videos).
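One common starting point (sketched below in illustrative Rust with hypothetical types; existing provenance efforts differ in detail) is capture-time signing: the capture device signs a hash of the content, the record is anchored in an on-chain registry, and verification is a signature check plus a lookup. Both key challenges above survive this design: an exact cryptographic hash breaks under any legitimate edit, which is why one might register a perceptual hash and check distance instead, and a re-recorded screen is genuinely new footage that a naive registry would happily certify as authentic to the re-capturing device.

```rust
// Hypothetical capture-time provenance registry; not any real standard's API.

use std::collections::HashMap;

struct ContentRecord {
    content_hash: [u8; 32],  // cryptographic hash of the original file
    device_pubkey: [u8; 32], // key embedded in the capture device
    signature: Vec<u8>,      // device's signature over content_hash
    timestamp: u64,
}

/// Stand-in for an on-chain registry contract: content hash -> record.
struct Registry {
    records: HashMap<[u8; 32], ContentRecord>,
}

impl Registry {
    /// Only accept records whose signature verifies against the device key.
    fn register(&mut self, rec: ContentRecord, sig_ok: impl Fn(&ContentRecord) -> bool) -> bool {
        if !sig_ok(&rec) {
            return false;
        }
        self.records.insert(rec.content_hash, rec);
        true
    }

    /// Exact-match verification: proves the file is byte-identical to a
    /// registered original. Tolerating crops or re-encodes would require
    /// a perceptual hash and a distance check rather than equality.
    fn verify(&self, content_hash: &[u8; 32]) -> bool {
        self.records.contains_key(content_hash)
    }
}
```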

Relevant Reading:

Deep Fake Generation and Detection: Issues, Challenges, and Solutions

Combating Deepfake Videos Using Blockchain and Smart Contracts

How Blockchain Can Help Combat Disinformation

Why Decentralized CMS is the Future of Content Management for Web 3.0 and Beyond

Combating Deepfakes: Multi-LSTM and Blockchain as Proof of Authenticity for Digital Media

Geometrically robust video hashing based on ST-PCT for video copy detection

Solving the Deepfake Problem: Proving the Authenticity of Digital Artifacts with Blockchain

Acknowledgements: Thank you to Pranav Garimidi, Liz Harkavy, Michele Korver, Sam Ragsdale, and Tim Roughgarden for contributing challenge questions and to Jason Rosenthal, Daren Matsuoka, and Mike Manning for their help in making this a reality. 

***

The views expressed here are those of the individual AH Capital Management, L.L.C. (“a16z”) personnel quoted and are not the views of a16z or its affiliates. Certain information contained in here has been obtained from third-party sources, including from portfolio companies of funds managed by a16z. While taken from sources believed to be reliable, a16z has not independently verified such information and makes no representations about the current or enduring accuracy of the information or its appropriateness for a given situation. In addition, this content may include third-party advertisements; a16z has not reviewed such advertisements and does not endorse any advertising content contained therein.

This content is provided for informational purposes only, and should not be relied upon as legal, business, investment, or tax advice. You should consult your own advisers as to those matters. References to any securities or digital assets are for illustrative purposes only, and do not constitute an investment recommendation or offer to provide investment advisory services. Furthermore, this content is not directed at nor intended for use by any investors or prospective investors, and may not under any circumstances be relied upon when making a decision to invest in any fund managed by a16z. (An offering to invest in an a16z fund will be made only by the private placement memorandum, subscription agreement, and other relevant documentation of any such fund and should be read in their entirety.) Any investments or portfolio companies mentioned, referred to, or described are not representative of all investments in vehicles managed by a16z, and there can be no assurance that the investments will be profitable or that other investments made in the future will have similar characteristics or results. A list of investments made by funds managed by Andreessen Horowitz (excluding investments for which the issuer has not provided permission for a16z to disclose publicly as well as unannounced investments in publicly traded digital assets) is available at https://a16z.com/investment-list/.

Past performance is not indicative of future results. The content speaks only as of the date indicated. Any projections, estimates, forecasts, targets, prospects, and/or opinions expressed in these materials are subject to change without notice and may differ or be contrary to opinions expressed by others. Please see https://a16z.com/disclosures/ for additional important information.