Agency by design: Preserving user control in a post-interface world

For decades, new layers of abstraction have defined how users interact with technology. Programming languages abstracted away machine code. The command line gave way to the graphical user interface, then to the mobile app and the API. Each shift abstracted more complexity away from the user — but left the human firmly in the loop.

Agents represent a different level of abstraction: They don’t just abstract how tasks are done, they abstract who does them. In an agentic future, users will specify outcomes, not actions, and systems will determine how to achieve them. In crypto and fintech, agents can autonomously handle tasks like rebalancing portfolios, automating multi-step wallet transactions, and routing API calls across protocols — all triggered by a single natural language instruction. It’s the first major leap where the interface between human intent and machine execution becomes probabilistic rather than deterministic.

In other words, outcomes will start to depend on the agent’s interpretation of user intent, not a predefined sequence of commands. At its core, what changes is the user’s role: The user is no longer an operator, they’re an orchestrator. They set initial parameters, then step back as the system runs itself. Their role becomes supervisory, not interactive; unless they intervene, the default state is “on.” Agents take action on their behalf, often without additional prompts or approvals.

When I was building a privacy-first AI company, I started thinking deeply about this shift — and now, in my go-to-market role, I’m seeing founders across the ecosystem wrestle with the same questions, especially as agents become more prevalent in crypto systems. Crypto has always been about minimizing blind trust. Agentic systems don’t break that principle — they just raise the bar for how seriously we design for it.

This piece surveys what’s being built — a snapshot of today’s infrastructure. It also explains why user control is so important to preserve at this layer of the stack, and the role crypto can play. Finally, it looks at how to design with intention at the agent layer, using emerging patterns to maintain agency, transparency, and trust. Understanding how these approaches are converging is the first step in anticipating how they’ll reshape markets, user behavior, and business models — or even your product.

What it means for founders

For founders, product teams, and protocol designers, the rise of agentic systems is an opportunity to reimagine what user sovereignty looks like in a post-interface world. Sovereignty isn’t a UI feature — it’s a systems question. As execution shifts from user clicks to autonomous agent actions, design decisions at the agent layer will determine whether users stay in control or simply fade into the background. That means asking things like: 

  • Are agent defaults safe, reversible, and user-friendly? 
  • Can users verify what’s been done on their behalf? 
  • Are permissions composable, revocable, and enforced at the smart contract level?
  • Can users simulate outcomes before approving?
  • Are delegation and execution models interoperable across agents, protocols, and wallets? 
  • Do they incorporate shared standards or are they locked into custom, siloed formats?

We’ll dig into the context behind these questions below, but in short, they all point to the same question: How do we preserve trust and control when users no longer sit at the center of execution?

Defining user agency 

User agency means being able to set boundaries and verify what’s done on your behalf — even if you’re not the one clicking “sign.” It’s the functional expression of user sovereignty in a world where agents, not users, will increasingly drive execution.

Crypto has the tools to preserve user agency in the shift to agents, but only if we rethink delegation, execution, and privacy at the systems level.

To level-set, today’s “onchain agents” are largely offchain programs that connect to blockchains via wallets, smart contracts, or account abstraction layers. They use large language models (e.g., GPT-4 or Claude) to plan and make decisions, then interact with blockchain infrastructure to execute them.

This offchain synthesis–onchain execution model enables powerful automation, but introduces new risks to user visibility and control. Over time, agents may evolve to operate more natively onchain, embedding more of that synthesis directly into decentralized systems. As this shift unfolds, the risks we face today will only deepen, and the boundaries of user agency will grow increasingly blurry.

Two pillars of crypto’s original promise are self-custody and transparency. But both rest on the assumption that the user is the one actively clicking “sign.” In an agent-driven paradigm, that model can start to break down. Agents can act before the user is even aware, and a single signature might authorize a series of downstream actions — some of which the user never explicitly reviewed.

For example, a wallet signature intended to approve one action might be reused by a delegated agent to execute others under broad permissions. A DAO governance delegate could cast votes based on a misinterpreted definition of “sustainability” embedded in its training dataset. Or you might delegate an agent to manage staking rewards — only to find your funds rerouted to an obscure yield vault you’ve never heard of. You didn’t sign that transaction — but you technically authorized it.

These aren’t edge cases. They’re realistic outcomes given how today’s agent systems are designed. A recent real-world example illustrates this risk: In March 2025, attackers commandeered the AI trading bot AiXBT after gaining access to its control dashboard. The breach allowed them to queue deceptive commands that triggered the agent to transfer 55 ETH to an attacker’s address. The agent’s logic remained intact, but the compromise of its interface bypassed user intent entirely.

As users delegate more to agents, new risks emerge:

  • Ambiguous inputs: Vague prompts or poorly scoped permissions can cause agents to operate on flawed assumptions.
  • Silent failures: Agents can fail without feedback loops, leaving users unaware of what went wrong, or whether anything went wrong at all.
  • Cascading effects: A single approval might trigger a multi-step workflow the user didn’t fully intend or anticipate.

In these situations, agency becomes something users think they have when they don’t.

And in some cases, the erosion of agency starts even earlier. Agents may come prepackaged with opaque default presets — permissions, connections, or logic the user never explicitly configured. When that happens, agency can begin to erode not just at the point of execution but at the point of distribution.

Reclaiming user agency

If interfaces disappear, preserving user control will require new approaches that evolve alongside crypto’s core values. 

We’re still early, but across the ecosystem, teams are exploring new ways to delegate safely, maintain transparency, and protect data sovereignty, even as execution becomes more autonomous. Emerging patterns include:

  • Agent permissioning frameworks
  • Intent-based coordination 
  • Privacy-preserving execution
  • Authenticated delegation standards 
  • Zero-knowledge agent frameworks

Together, they offer a glimpse of how control and composability might coexist in an agentic future.

This list isn’t meant to be exhaustive — agent-layer technologies are evolving quickly, and new models continue to emerge. Instead, it offers a curated snapshot of patterns that show early promise in preserving user agency. Some of the systems and standards that follow were introduced a few years ago, while most are more recent — the goal is simply to illustrate the body of work this next wave is building on.

Agent permissioning frameworks

One of the clearest ways to preserve sovereignty is through scoped delegation. This entails giving users fine-grained control over what agents can do, under what conditions, and with which assets. 

Several teams are working on permissioning frameworks that embed constraint and accountability directly into the agent’s execution layer:

  • MetaMask’s Delegation Toolkit (launched mid-2024; developer use now) provides infrastructure for account abstraction-based smart accounts, enabling developers to build wallets with scoped, recoverable permissions; gasless transactions; and zero-prompt user flows. In 2025, MetaMask expanded the toolkit with multichain smart account support and policy-based permissions, deepening its role as a standard for agent delegation.
  • Coinbase’s AgentKit combines Multi-Party Computation (MPC)-based key control, session-limited delegation, and account abstraction to support secure onchain agent actions. This embeds sovereignty directly into agent execution. Its successor, x402, launched in October 2025, extends these ideas into a full developer platform for agentic applications. (More on this in the closing section.)
  • EIP-7702 extends these ideas at Ethereum’s base layer. Part of the Pectra upgrade, EIP-7702 lets externally owned accounts (EOAs) temporarily behave like smart contract wallets. In other words, an EOA — a standard Ethereum account controlled by a private key — can temporarily act more like a programmable smart contract. This unlocks native support for session keys and spending limits, strengthening agent permissioning without relying on custom wallet logic.
  • Biconomy’s Delegated Authorization Network (DAN) introduces a programmable delegation layer for AI agents. Built on EigenLayer’s Actively Validated Services (AVS), it manages keys and enforces user-defined constraints through shared signing, where multiple parties must approve each action. (More on how AVS works in the section on verifiable execution.)
  • Lit Protocol’s Vincent Tool SDK (v3 released Dec 2023, v4 released Mar 2024) lets developers define the actions agents can take and policies that govern them, including spending caps, conditional logic, and multi-party consent triggers.
  • Autonomys Network (formerly Autonomy Network) lets users define programmable “rails” for agent behavior, including gas caps, asset types, and destination allowlists, ensuring that actions stay within user-set boundaries.
  • Avocado by Instadapp enables strategy delegation within user-defined scopes, such as “rebalance weekly” or “only interact with whitelisted protocols,” using smart contract wallets with modular authentication layers.
  • Fireblocks, Dynamic.xyz, and others apply MPC to key management as well. They split signing authority across services to constrain agent actions. By preventing any single agent or service from holding full custody of user keys, MPC enforces policies directly at the signing layer.

Collectively, these frameworks show how giving users the ability to set clear boundaries up front helps protect their agency by design.

Intent-centric infrastructure

As UX moves further away from direct user interaction, some teams are rethinking the transaction model entirely. One emerging approach is to shift from instruction-based systems (where the user approves every step) to intent-based architectures, where users (or agents) specify outcomes and the system determines how best to achieve them. 

  • NEAR’s Intents framework lets users define a desired outcome (e.g., “bridge tokens and stake”) without specifying how to do it. A decentralized network of solvers competes to fulfill that request efficiently. These solvers handle routing, execution, and optimization behind the scenes. This is live and gaining traction. According to DefiLlama, NEAR Intents have handled over $7 billion in cumulative DEX volume since launching in Q4 2024.
  • Anoma (Mainnet rollout 2025, early deployment phase) introduced a fully intent-based architecture. It uses a gossip protocol for decentralized intent discovery, where users broadcast intents into a shared coordination layer. Counterparties match and fulfill these intents through encrypted execution.
  • Particle Network V2 introduced an “Intent Fusion Protocol” built into a zkWaaS (zero-knowledge wallet-as-a-service) stack. Users express goals via natural language or interface triggers, which solvers interpret and fulfill across chains.

In intent-centric systems, users define what they want, not how to get it. This abstraction empowers agents to determine the best path forward, often without a visible interface. While this can be powerful, it also raises the stakes for clear delegation, since the agent (or solver) becomes responsible for interpreting intent and executing on the user’s behalf.

Authenticated delegation standards

As agents gain more autonomy, it becomes harder to tell who is authorizing what — and when. Authenticated delegation frameworks address this by allowing users to grant scoped, verifiable permissions to agents, with auditability built in by design.

  • WalletConnect Smart Sessions let users authorize agents to perform scoped actions — like executing trades or claiming rewards — within a defined session window. Users set parameters once, and agents act autonomously within that timeframe, reducing repetitive confirmations without losing user-defined limits.
  • MIT’s Authenticated Delegation Framework extends OAuth 2.0 with agent-specific credentials. It creates cryptographically verifiable chains of authority that link a human user, a software agent, and the permissions granted, enabling granular, revocable, and auditable delegation across platforms.
  • MIT’s Non-Authoritative Non-Repudiable Delegation Architecture (NANDA) explores how to create cryptographic chains of delegated authority without granting full control to any single agent. By enabling agents to act with provable, constrained intent, without assuming root privileges, models like NANDA could become a foundation for scalable, verifiable delegation in agent-driven systems.
  • Model Context Protocol (MCP) proposes embedding agent metadata (e.g., intent, identity, delegation scope) directly into transaction payloads. Though still evolving, MCP offers a potential standard for preserving context and attribution across execution layers.

These approaches don’t just improve UX. They embed user-centric features like consent, attribution, and revocation directly into open systems.
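The chain-of-authority idea can be illustrated with a simplified sketch in which every delegation link is signed by its issuer and scope can only narrow, never widen. An HMAC stands in for a real asymmetric signature here, and the schema is a hypothetical illustration, not MIT’s or WalletConnect’s format.

```python
import hashlib
import hmac
import json

def sign(key: bytes, payload: dict) -> str:
    """HMAC stands in for a real asymmetric signature in this sketch."""
    msg = json.dumps(payload, sort_keys=True).encode()
    return hmac.new(key, msg, hashlib.sha256).hexdigest()

def issue_link(issuer_key: bytes, delegate: str,
               scope: set[str], parent_scope: set[str]) -> dict:
    """Issuer grants delegate a scope that must be a subset of its own."""
    if not scope <= parent_scope:
        raise ValueError("delegated scope must be a subset of the issuer's scope")
    payload = {"delegate": delegate, "scope": sorted(scope)}
    return {**payload, "sig": sign(issuer_key, payload)}

def verify_link(issuer_key: bytes, link: dict) -> bool:
    """Recompute the signature: any tampering with delegate or scope fails."""
    payload = {"delegate": link["delegate"], "scope": link["scope"]}
    return hmac.compare_digest(link["sig"], sign(issuer_key, payload))
```

Chaining such links from a human user down through sub-agents yields exactly the property these frameworks target: every permission is attributable, bounded, and checkable after the fact.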

Verifiable execution layers

As agents begin to act autonomously, visibility into what they’ve done — and whether they did it correctly — becomes more important. Without an auditable trail, user intent could be misrepresented, and agents could drive unintended outcomes with no clear accountability.

Verifiable execution systems introduce cryptographic and incentive-aligned mechanisms for validating agent behavior, even when execution happens offchain.

  • EigenCloud’s Actively Validated Services (AVS) provide a decentralized trust layer for verifying agent actions. Staked validators monitor and attest to the correctness of offchain execution, and can be slashed for dishonesty or inaction. This creates a cryptoeconomic accountability layer that extends beyond the base protocol.
  • AVA Protocol (launched July 2024), built as an AVS on EigenLayer, focuses on verifiable AI agents. It enables models to log task execution onchain, with slashing tied to execution quality and correctness.
  • Zero-knowledge (ZK) proofs can offer a powerful tool to verify that a computation was performed correctly without revealing its inputs or intermediate state. Aleo, zkSync, and others are exploring this technique to make offchain agent behavior auditable without sacrificing input privacy.

These approaches help turn agent behavior from a black box into a more transparent, auditable record. Giving users a clear view of what’s been done on their behalf is a helpful step toward preserving their agency.
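One simple building block behind such audit trails is a hash-chained log: each entry commits to its predecessor, so any after-the-fact edit breaks the chain and is detectable. The sketch below is a minimal illustration of the principle, not any specific AVS’s format.

```python
import hashlib
import json

def append_action(log: list[dict], action: dict) -> None:
    """Append an agent action, chaining it to the previous entry's hash."""
    prev = log[-1]["hash"] if log else "genesis"
    body = {"action": action, "prev": prev}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    log.append({**body, "hash": digest})

def audit(log: list[dict]) -> bool:
    """Recompute every link: any edited or reordered entry fails the audit."""
    prev = "genesis"
    for entry in log:
        body = {"action": entry["action"], "prev": entry["prev"]}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != digest:
            return False
        prev = entry["hash"]
    return True
```

Systems like AVS networks layer staking and slashing on top of this kind of commitment structure, so attesting to a false record carries an economic cost.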

Zero-knowledge agent frameworks

While ZK proofs can help verify agent behavior, some companies go further, building full agent frameworks that operate natively in zero-knowledge environments. When agents handle sensitive inputs or must coordinate confidentially, transparency alone isn’t enough. Privacy needs to be designed in from the start. ZK technologies allow agents to prove what they’ve done — without revealing why or how.

  • Aleo’s ZK Agent Stack combines a custom zero-knowledge virtual machine with a privacy-preserving programming language (Leo), allowing agents to perform computations privately while preserving public verifiability.
  • Seismic’s Encrypted Blockchain Architecture leverages secure hardware and encrypted state transitions to enable confidential agent workflows, with metadata revealed only to authorized parties.
  • ZK-based wallet integrations, including those developed by Particle Network and Lit Protocol, let agents interact with onchain applications without exposing session keys, user identities, or granular wallet histories.

Privacy also protects users from passive surveillance by the very agents acting on their behalf. These ZK-native systems let users retain sovereignty over sensitive inputs, without giving up automation or functionality.
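The core “prove without revealing” idea can be shown with a classic Schnorr proof of knowledge: the prover demonstrates knowledge of a secret exponent x behind y = g^x without disclosing x. This toy version uses deliberately tiny parameters and a Fiat-Shamir hash; it is not how Aleo or zkSync implement proofs, only an illustration of the principle.

```python
import hashlib
import secrets

# Toy group parameters: far too small for real security, illustration only.
P = 2039   # safe prime, P = 2Q + 1
Q = 1019   # order of the subgroup generated by G
G = 4      # generator of the order-Q subgroup

def prove(secret_x: int, statement_y: int) -> tuple[int, int]:
    """Schnorr proof of knowledge of x with y = G^x mod P,
    made non-interactive via the Fiat-Shamir transform."""
    r = secrets.randbelow(Q)
    commitment = pow(G, r, P)
    c = int(hashlib.sha256(f"{statement_y}:{commitment}".encode()).hexdigest(), 16) % Q
    s = (r + c * secret_x) % Q
    return commitment, s

def verify(statement_y: int, proof: tuple[int, int]) -> bool:
    """Checks G^s == commitment * y^c mod P, without ever seeing x."""
    commitment, s = proof
    c = int(hashlib.sha256(f"{statement_y}:{commitment}".encode()).hexdigest(), 16) % Q
    return pow(G, s, P) == (commitment * pow(statement_y, c, P)) % P
```

An agent built on this principle can prove “I executed the authorized computation” while the inputs that drove it stay private.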

Multi-agent coordination systems

Agents won’t operate in isolation. As agentic systems proliferate, coordination between them becomes both inevitable and increasingly sophisticated. We’ll need protocols that support agent-to-agent trust, governance, and composability.

Multi-agent coordination systems offer ways for agents to discover, communicate with, and respond to one another.

At the protocol layer, Google’s Agent2Agent (A2A) Protocol defines a standardized communication layer. It includes primitives for agent discovery, task lifecycle management, streaming updates, and secure messaging. This enables reliable, interoperable agent-to-agent collaboration while preserving context, intent, and auditability across systems. 
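The primitives above (stable task identity, a constrained lifecycle, attributable updates) can be sketched with a hypothetical message envelope. The field names and states here are illustrative, not the A2A specification’s.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TaskMessage:
    """Hypothetical envelope for agent-to-agent task coordination."""
    sender: str        # agent that discovered and engaged a peer
    recipient: str     # agent fulfilling the task
    task_id: str       # stable id so streaming updates stay attributable
    state: str         # lifecycle state, constrained below
    payload: dict

# Legal lifecycle transitions: peers can reject anything else outright.
ALLOWED = {
    "submitted": {"working", "canceled"},
    "working": {"completed", "failed"},
    "completed": set(),
    "failed": set(),
    "canceled": set(),
}

def advance(msg: TaskMessage, new_state: str) -> TaskMessage:
    """Return an updated message, enforcing the task lifecycle."""
    if new_state not in ALLOWED[msg.state]:
        raise ValueError(f"illegal transition {msg.state} -> {new_state}")
    return TaskMessage(msg.sender, msg.recipient, msg.task_id, new_state, msg.payload)
```

A shared lifecycle like this is what lets independently built agents reason about each other’s progress without trusting each other’s internals.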

Recent proposals like ERC-8004 (Trustless Agents) extend A2A with crypto-native trust models. These include things like reputation scores, stake-secured validation, and trusted execution environment (TEE) attestations. With these kinds of approaches, crypto can complement existing coordination standards with tools that preserve accountability and verifiability across agent ecosystems.

At the marketplace and infrastructure layer, Fetch.AI’s Agentverse is building a decentralized agent marketplace where DAOs can deploy agents to manage voting, liquidity, metagovernance, and resource allocation. Similarly, Bittensor supports a decentralized AI network where models (as agents) train across specialized subnets, coordinate through tokenized feedback loops, and compete for performance-based rewards.

In applied coordination systems, ElizaOS shows how investment DAOs can delegate capital management to autonomous agents. Their agents propose trades, rebalance portfolios, and execute on strategies within boundaries set by tokenholder governance. SingularityDAO offers a hybrid model, blending human and agent decision-making in DeFi portfolio management. AI agents handle execution around tactical portfolio management while human governance sets strategic intent.

These systems point to a future where agent-to-agent interaction becomes the norm — and where shared approaches ensure that agent-to-agent coordination doesn’t erode user intent, control, or autonomy.   

The risk of fragmentation

While each of these approaches shows promise, fragmentation is a growing risk, and one that spans the entire agent stack, not just agent-to-agent coordination protocols. Teams are designing custom formats for intents, delegation, and permissioning. These components may be powerful in isolation, but without shared design frameworks, agent systems risk becoming siloed, limiting interoperability and undermining user sovereignty across platforms.

For instance, an agent built on MetaMask’s Delegation Toolkit might not understand an intent broadcast by another agent built on a bespoke MPC wallet. Examples like this can create a patchwork of walled gardens, where agents speak only their own languages and dialects. This is precisely what open blockchain networks were designed to avoid. 

Work is already underway to address this, at least on Ethereum. EIP-8001 (Secure Intents) proposes a cryptographic standard for intent schemas. If widely adopted, it could give agents a shared language for delegation and coordination, preserving interoperability and user control as agent ecosystems expand.
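The interoperability payoff of a shared schema is easy to see in miniature: if every stack canonicalizes an intent the same way, every agent computes the same identifier for it, so intents can be matched across ecosystems. The schema below is a hypothetical illustration, not EIP-8001’s format.

```python
import hashlib
import json
from dataclasses import asdict, dataclass

@dataclass(frozen=True)
class SharedIntent:
    """Hypothetical minimal schema: agents that canonicalize identically
    compute identical intent hashes, regardless of which stack built them."""
    author: str                                # who declared the outcome
    outcome: str                               # e.g. "swap:USDC->ETH"
    constraints: tuple[tuple[str, str], ...]   # sorted (key, value) pairs
    expiry: int                                # unix seconds; stale intents are unmatchable

    def canonical(self) -> str:
        # Deterministic serialization: sorted keys, no whitespace variance.
        return json.dumps(asdict(self), sort_keys=True, separators=(",", ":"))

    def intent_hash(self) -> str:
        return hashlib.sha256(self.canonical().encode()).hexdigest()
```

Without an agreed canonical form, two stacks describing the same intent would produce different identifiers, and solvers on one side could never recognize intents from the other.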

***

The shift is unfolding across real systems. Platforms like Coinbase x402 point toward what this future might look like: scoped, verifiable, and composable autonomy built directly into production systems. x402 gives agents policy-scoped signing authority, MPC-secured key management, and verifiable onchain execution. Through Coinbase Bazaar, agents can autonomously discover and pay for services while every transaction remains visible and revocable by the user. These components translate abstract principles into live developer primitives. (For a live view of agent and service activity, see x402scan.)

Recent launches in crypto-adjacent ecosystems show how fast agent-driven execution is evolving: In AI-native ecosystems, OpenAI’s Operator Agent can already browse websites, fill out forms, and make purchases via a virtual browser — autonomously and with minimal user prompts. In payments, Visa’s Intelligent Commerce and Mastercard’s Agent Pay are issuing tokenized, agent-bound credentials for AI systems. Similar to some of the approaches above, these systems include scoped spending limits, programmable authentication flows, and audit trails.

These aren’t crypto-native systems, but they show that agent-first execution is already underway. 

***

For those building in crypto, there’s a real opportunity to lead the charge. Unlike legacy technologies, crypto has the primitives to embed user intent, delegation, and verification directly into architecture. Paired with shared frameworks for interoperability, these systems can support agent ecosystems that are composable, accountable to users, and user-controlled by default. 

We’re early. But the choices builders make now will define how control, privacy, and consent are preserved in the agentic era. Crypto can make sure that agency stays where it belongs: with the user. 

***

The views expressed here are those of the individual AH Capital Management, L.L.C. (“a16z”) personnel quoted and are not the views of a16z or its affiliates. Certain information contained in here has been obtained from third-party sources, including from portfolio companies of funds managed by a16z. While taken from sources believed to be reliable, a16z has not independently verified such information and makes no representations about the current or enduring accuracy of the information or its appropriateness for a given situation. In addition, this content may include third-party advertisements; a16z has not reviewed such advertisements and does not endorse any advertising content contained therein.


You should consult your own advisers as to those matters. References to any securities or digital assets are for illustrative purposes only, and do not constitute an investment recommendation or offer to provide investment advisory services. Furthermore, this content is not directed at nor intended for use by any investors or prospective investors, and may not under any circumstances be relied upon when making a decision to invest in any fund managed by a16z. (An offering to invest in an a16z fund will be made only by the private placement memorandum, subscription agreement, and other relevant documentation of any such fund and should be read in their entirety.) Any investments or portfolio companies mentioned, referred to, or described are not representative of all investments in vehicles managed by a16z, and there can be no assurance that the investments will be profitable or that other investments made in the future will have similar characteristics or results. A list of investments made by funds managed by Andreessen Horowitz (excluding investments for which the issuer has not provided permission for a16z to disclose publicly as well as unannounced investments in publicly traded digital assets) is available at https://a16z.com/investment-list/.

The content speaks only as of the date indicated. Any projections, estimates, forecasts, targets, prospects, and/or opinions expressed in these materials are subject to change without notice and may differ or be contrary to opinions expressed by others. Please see https://a16z.com/disclosures/ for additional important information.