The cryptoeconomics of slashing

Sreeram Kannan and Soubhik Deb

No mechanism designed for Proof of Stake (PoS) protocols has been as controversial as slashing. Slashing offers a means to economically penalize any particular node in a targeted manner for not taking a protocol-concordant action. It does so by taking away some or all of the validator’s stake — without imposing externalities on other nodes who are behaving according to the protocol. Slashing is unique to proof-of-stake protocols because it requires the ability for the blockchain to enforce the penalty. Such enforcement is clearly infeasible in Proof of Work systems, where it would be analogous to burning the mining hardware used by misbehaving nodes. This ability to apply punitive incentives opens up a new design space in blockchain mechanism design, and therefore merits careful consideration.

Despite its obvious benefit in the form of “karma,” the main objection to slashing has been the risk of nodes getting disproportionately slashed because of an honest mistake such as running outdated software. Consequently, many protocols have avoided incorporating slashing and instead rely on so-called token toxicity – the fact that if a protocol gets successfully attacked, the underlying token would lose value. Many believe that stakers would view this prospective loss of value as a deterrent against compromising the security of the protocol. In our assessment, token toxicity is not potent enough to deter adversarial attacks in some typical scenarios. In fact, the cost incurred by adversaries to attack and corrupt the protocol, referred to as cost-of-corruption, is essentially zero under such scenarios. 

In this article, we show how incorporating slashing into the mechanism design of a PoS protocol substantially increases the cost-of-corruption that any adversary would incur. Slashing guarantees high and measurable cost-of-corruption for both decentralized protocols in the presence of bribing as well as protocols (centralized or decentralized) that don’t satisfy token toxicity assumptions. 

Circumstances that can lead to bribing and an absence of token toxicity are ubiquitous. Many PoS protocols avoid falling into one of these two categories by maintaining a tight-knit community, which is feasible only when the community is small; by relying on strong leadership that steers them in the right direction; by delegating validation to a small set of reputed and legally regulated node operators; or by relying on the concentration of staking tokens inside a small group. None of these solutions is fully satisfactory for growing a large and decentralized community of validating nodes. And if the PoS protocol does feature a concentration of stake with only a few validators (or, in extreme cases, only one validator), it is desirable to have a means to penalize these large validators in case they engage in adversarial behavior. 

In the remainder of the article, we

  • present a model for analyzing complex bribing attacks,
  • show that PoS protocols without slashing are vulnerable to bribing attacks, 
  • show that PoS protocols with slashing have quantifiable security against bribing, and
  • discuss some downsides of slashing and suggest mitigations.

Modeling

Before we present the case for slashing, we first need a model under which we will pursue our analysis. Two of the most popular models for analyzing PoS protocols, the Byzantine model and the game-theoretic equilibrium model, fail to capture some of the most devastating real-world attacks – attacks where slashing would act as a powerful deterrent. In this section, we discuss these existing models to understand their shortcomings, and present a third model – what we call the Corruption-Analysis Model – based on separately evaluating the bounds on the minimum cost that has to be incurred and the maximum profit that can be extracted from corrupting the protocol. Despite its ability to model large swathes of attacks, the Corruption-Analysis Model has not yet been used for analyzing many protocols. 

Existing models

In this section, we provide a brief description of Byzantine and game-theoretic equilibrium models and their shortcomings.

Byzantine model 

The Byzantine model stipulates that at most a certain fraction (𝜷) of nodes can deviate from the protocol-prescribed actions and pursue any action of their choice, while the rest of the nodes remain compliant with the protocol. Proving that a particular PoS protocol is resilient against a whole space of Byzantine actions that an adversarial node can take is a non-trivial problem. 

For example, consider longest-chain PoS consensus protocols where liveness is prioritized over safety. Early research on security of longest-chain consensus focused on showing security against only one specific attack – the private double-spend attack, where all Byzantine nodes collude to build an alternative chain in private and then reveal it much later once it is longer than the original chain. The nothing-at-stake phenomenon, though, offers an opportunity to propose a lot of blocks using the same stake and to use independent randomness to increase the probability of constructing a longer private chain. Only much later, extensive research was undertaken to show that certain constructions of longest-chain PoS consensus protocols can be made secure against all attacks for certain values of 𝜷. (For further details, see “Everything is a Race and Nakamoto Always Wins” and “PoSAT: Proof-of-Work Availability and Unpredictability, Without the Work.”) 

A whole class of consensus protocols, Byzantine Fault Tolerant (BFT) protocols, prioritize safety over liveness. They also require assuming a Byzantine model for showing that, for an upper bound on 𝜷, these protocols are deterministically safe against any attack. (For further details, see “HotStuff: BFT Consensus in the Lens of Blockchain”, “STREAMLET”, “Tendermint”.)  

Although helpful, the Byzantine model doesn’t account for economic incentives. From a behavioral perspective, a 𝜷 fraction of the nodes is completely adversarial in nature, while the remaining (1-𝜷) fraction is fully compliant with the protocol specification. In contrast, a significant fraction of nodes in a PoS protocol may be motivated by economic gains and run modified versions of the protocol that benefit their self-interest rather than simply complying with the full protocol specification. As a salient example, consider the Ethereum PoS protocol: most nodes today do not run the default PoS protocol but instead run the MEV-Boost modification, which yields additional rewards from participation in a MEV auction market.  

Game-theoretic equilibrium model

The game-theoretic equilibrium model attempts to address the shortcoming of the Byzantine model by using solution concepts like the Nash equilibrium to study whether a rational node has the economic incentives to follow a given strategy when all other nodes are also following the same strategy. More explicitly, assuming everyone is rational, the model investigates two questions: 

  1. If every other node is following the protocol-prescribed strategy, does it bring the most economic benefit for me to execute upon the same protocol-prescribed strategy? 
  2. If every other node is executing the same protocol-deviating strategy, is it most incentive-compatible for me to still follow the protocol-prescribed strategy?

Ideally, the protocol should be designed such that the answer to both questions is “yes.”

An inherent shortcoming of the game-theoretic equilibrium model is that it excludes scenarios where an exogenous agent might be influencing the behavior of nodes. For example, an external agent can set up a bribe to incentivize rational nodes to act in accordance with its prescribed strategy. Another limitation is the assumption that each node has independent agency to decide what strategy to follow based on its ideology or economic incentives. This doesn’t capture scenarios where a group of nodes colludes to form a cartel, or where economies of scale encourage the creation of a centralized entity that effectively controls all staking nodes. 

Separating cost-of-corruption from profit-from-corruption 

Several researchers proposed the Corruption-Analysis Model for analyzing the security of any PoS protocol, although none have used it to perform a deeper analysis. The model starts by asking two questions: (1) What is the minimum cost incurred by any adversary for successfully executing a safety or liveness attack on the protocol? and (2) What is the maximum profit that an adversary can extract from successfully executing a safety or liveness attack on the protocol?

The adversary in question can be 

  • a node that is deviating from the protocol-prescribed strategy unilaterally, 
  • a group of nodes that are actively cooperating with one another to undermine the protocol, or 
  • an external adversary attempting to influence the decisions of many nodes through some external action such as bribing. 

Computing the costs involved requires taking into consideration any cost incurred for bribes, any economic penalty incurred for executing a Byzantine strategy, and so on. Similarly, computing profit is all-encompassing: it counts any in-protocol reward obtained by successfully attacking the protocol, any value captured from the DApps sitting on top of the PoS protocol, any gains from taking positions in protocol-related derivatives on secondary markets and profiting from attack-induced volatility, and so on.

Comparing a lower-bound on the minimum cost for any adversary to mount an attack (cost-of-corruption) against an upper-bound on the maximum profit that an adversary can extract (profit-from-corruption) indicates when it is economically profitable to attack the protocol. (This model has been used for analyzing Augur and Kleros.) That gives us this simple equation: 

profit-from-corruption – cost-of-corruption = total profit

If there is total profit to be made, then there is an incentive for an adversary to mount an attack. In the next section, we’ll consider how slashing can increase the cost-of-corruption, reducing or eliminating the total profit.
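The profitability condition above can be sketched in a few lines of code. The function names and dollar figures below are illustrative assumptions, not part of the model itself:

```python
def total_profit(profit_from_corruption: float, cost_of_corruption: float) -> float:
    """Net gain to the adversary; an attack is rational only when positive."""
    return profit_from_corruption - cost_of_corruption

def attack_is_profitable(profit_from_corruption: float, cost_of_corruption: float) -> bool:
    return total_profit(profit_from_corruption, cost_of_corruption) > 0

# Example: secured assets bound profit-from-corruption at 100M; bribes cost 30M.
assert attack_is_profitable(100_000_000, 30_000_000)
# Raising the cost-of-corruption above the profit bound deters the attack.
assert not attack_is_profitable(100_000_000, 150_000_000)
```

The goal of mechanism design, then, is to push every attack into the second case, where the total profit is negative.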

(Note that a simple example of an upper bound on profit-from-corruption is the total value of assets secured by the PoS protocol. More sophisticated bounds can be built that take into account circuit-breakers that restrict the asset transfer inside a period of time. A detailed study of methods for lowering and bounding the profit-from-corruption is beyond the scope of the present article.)  

Slashing

Slashing is a way for a PoS protocol to economically penalize a node or a group of nodes for executing a strategy that is provably divergent from the given protocol specification. Typically, to enact any form of slashing, each node must have previously committed some minimum amount of stake as collateral. Before we delve into our analysis of slashing, we’ll first look at PoS systems with endogenous tokens that rely on token toxicity as an alternative to slashing.

We concern ourselves primarily with the study of slashing mechanisms for safety violations, rather than for liveness violations. We suggest this restriction for two reasons: (1) safety violations are fully attributable in some BFT-based PoS protocols, but liveness violations are not attributable in any protocol, and (2) safety violations are usually more serious than liveness violations, resulting in the loss of user funds rather than users being unable to issue transactions. 

What can go wrong without slashing?

Consider a PoS protocol consisting of N rational nodes (with no Byzantine or altruistic nodes). For simplicity of calculation, let’s assume that each node has deposited an equal amount of stake. We first explore how token toxicity falls short of guaranteeing a significant cost-of-corruption. For uniformity throughout this document, let us also assume that the PoS protocol used is a BFT protocol with a ⅓ adversary threshold. 

Token toxicity is insufficient

A common view is that token toxicity safeguards a staked protocol from any attack on its safety. Token toxicity refers to the fact that if a protocol gets successfully attacked, the underlying token being used for staking in the protocol would lose value, disincentivizing participating nodes from attacking the protocol. Consider the scenario where ⅓ of the stakers collude: these nodes can cooperate to break the security of the protocol. The question is whether they can do so with impunity. 

If the total valuation of the token, in which stake has been deposited, strictly depends on the security of the protocol, then any attack on the safety of the protocol can drive down its total valuation to zero. Of course, in practice, it will not be driven down all the way to zero but to some smaller value. But to present the strongest possible case for the power of token toxicity, we will assume here that token toxicity works perfectly.  The cost-of-corruption for any attack on the protocol is the total amount of tokens held by the rational nodes who are attacking the system, who must be willing to lose all that value.

We now analyze the incentives for collusion and bribing in a PoS system with token toxicity without slashing. Suppose that the external adversary sets up the bribe with the following conditions:

  • If a node executes upon the strategy as dictated by the adversary but the attack on the protocol was not successful, then the node gets a reward B1 from the adversary.
  • If a node executes upon the strategy as dictated by the adversary and the attack on the protocol was successful, then the node gets a reward B2 from the adversary. 

We can draw the following payoff matrix for a node that has deposited stake S, where R is the reward from participating in the PoS protocol:

                                                    Attack not successful    Attack successful
A node rejecting the bribe (follows the protocol)          S + R                    0
A node accepting the bribe                                 S + B1                   B2

Suppose that the adversary sets the bribe payoffs such that B1 > R and B2 > 0. In that case, accepting the bribe gives a higher payoff than any other strategy the node can take, irrespective of the strategies of other nodes (it is the dominant strategy). If ⅓ of the other nodes end up accepting the bribe, they can attack the security of the protocol (recall that we assume a BFT protocol with a ⅓ adversary threshold). Now, even if the present node does not take the bribe, the token would lose its value anyway due to token toxicity (top right cell in the matrix), so it is incentive-compatible for the node to accept the B2 bribe. If only a small fraction of nodes accepts the bribe, the token won’t lose value, but a node still benefits from forgoing the reward R and instead receiving B1 (left column in the matrix). In the case of a successful attack where ⅓ of the nodes have agreed to accept the bribe, the total cost incurred by the adversary in paying out the bribes is at least \(\frac{N}{3}\) × B2. This is the cost-of-corruption. However, the only condition on B2 is that it has to be greater than zero; hence B2 can be set arbitrarily close to zero, which implies the cost-of-corruption is negligible. This attack is known as the “P+ε” attack.
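The dominance argument can be checked mechanically. The following sketch encodes the payoff matrix without slashing; the concrete numbers (S, R, N, and the bribe sizes) are illustrative assumptions:

```python
# Payoff matrix WITHOUT slashing; values follow the table above.
S, R = 100.0, 5.0        # illustrative stake and protocol reward per node
B1, B2 = R + 1.0, 0.001  # adversary only needs B1 > R and B2 > 0

payoff = {
    ("honest", "attack_fails"):    S + R,
    ("honest", "attack_succeeds"): 0.0,    # token toxicity wipes out the stake
    ("bribed", "attack_fails"):    S + B1,
    ("bribed", "attack_succeeds"): B2,
}

# Taking the bribe is a dominant strategy: strictly better in every outcome.
for outcome in ("attack_fails", "attack_succeeds"):
    assert payoff[("bribed", outcome)] > payoff[("honest", outcome)]

# Adversary's cost-of-corruption with N nodes is (N/3) * B2 -- near zero.
N = 300
cost_of_corruption = (N / 3) * B2
assert cost_of_corruption < 1.0  # negligible relative to the stake at risk
```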

One way of summarizing this effect is that token toxicity is insufficient because the impact of bad actions is socialized: token toxicity depreciates the value of the token completely and affects good and bad nodes equally. The benefit of taking the bribe, on the other hand, is privatized and limited to only those rational nodes that actually take the bribe. There is no consequence falling solely on those who take the bribe; that is, the system doesn’t have a working version of “karma.”

Is token toxicity always in effect?

Another myth prevalent in the ecosystem is that every PoS protocol can have some degree of protection via token toxicity. But, in fact, the exogenous incentive of token toxicity can’t be extended to certain classes of protocols where the valuation of the token that is being used as the denomination for staking is not dependent on those protocols operating securely. One such example is a re-staking protocol like EigenLayer, where ETH used by the Ethereum protocol is reused to guarantee economic security of other protocols. Consider that 10% of ETH is restaked using EigenLayer to perform validation of a new sidechain. Even if all the stakers in EigenLayer cooperatively misbehave by attacking the safety of the sidechain, the price of ETH is unlikely to drop. Therefore, token toxicity is non-transferable for restaked services, which would imply a cost-of-corruption of zero. 

How does slashing help?

In this section, we explain how slashing can significantly increase the cost-of-corruption for two cases: 

  1. decentralized protocol under bribing, and 
  2. PoS protocols where token toxicity is non-transferable.

Protection against bribing

Protocols can use slashing to substantially increase the cost-of-corruption for an external adversary who attempts a bribery attack. To better explain this, consider the example of a BFT-based PoS chain that requires staking in the chain’s native token and in which at least ⅓ of the total stake has to be corrupted for any successful attack on its safety (in the form of double-signing). Suppose an external adversary is able to bribe at least ⅓ of the total stake to perform double-signing. The evidence of double-signing can be submitted to the canonical fork, which slashes the nodes that accepted the bribe from the adversary and double-signed. Assuming each node stakes S tokens and all slashed tokens are burned, we get the following payoff matrix: 

                                                    Attack not successful    Attack successful
A node rejecting the bribe (follows the protocol)          S + R                    S
A node accepting the bribe                                 B1                       B2

With slashing, if the node agrees to take up the bribe and the attack is not successful, then its stake S gets slashed in the canonical fork (lower left cell in the matrix), which is in contrast with the previous bribing scenario where there was no slashing. On the other hand, a node would never lose its stake S in the canonical fork even if the attack is successful (top right cell in the matrix).  If it requires ⅓ of the total stake to be corrupted for the attack to be successful, the cost-of-corruption would have to be at least \(\frac{N}{3}\) × S, which is substantially greater than the cost-of-corruption without slashing. 
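To see the difference quantitatively, we can compute the minimum bribes the adversary must now offer. This is a sketch under the payoff matrix above, with illustrative numbers:

```python
# With slashing, a bribed node loses its stake S on the canonical fork in
# BOTH outcomes, so the bribes must compensate for the full stake.
S, R, N = 100.0, 5.0, 300  # illustrative stake, reward, and node count

# For bribery to dominate honesty (compare the rows of the matrix above):
#   B1 > S + R  -- an honest node keeps S and earns R when the attack fails
#   B2 > S      -- an honest node keeps S even when the attack succeeds
min_B1 = S + R
min_B2 = S

# In a successful attack the adversary pays at least B2 to each of N/3 nodes.
cost_of_corruption = (N / 3) * min_B2
assert cost_of_corruption == (N / 3) * S  # vs. near-zero without slashing
```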

Protection when token toxicity is non-transferable

In PoS protocols that feature staking with a token whose valuation is not affected by the security of the protocol, token toxicity is non-transferable. In many such systems, the PoS protocol sits on top of another base protocol. The base protocol then shares security with the PoS protocol by deploying dispute-resolution mechanisms on the base protocol and by giving the base protocol the agency to provably slash the stake that nodes have deposited with the PoS protocol. 

For instance, if a Byzantine action in the PoS protocol can be objectively attributed to an adversarial node on the base protocol, then that node’s stake with the PoS protocol is slashed on the base protocol. An example of such a PoS protocol is EigenLayer, which features restaking that enables different validation tasks to derive security from the base protocol, Ethereum. If a node re-staking in EigenLayer adopts a Byzantine strategy in a validation task on EigenLayer where the Byzantine action can be attributed objectively, then this node can be proven to be adversarial on Ethereum and its stake will be slashed (no matter how big the stake is). Assuming each node re-stakes S, all slashed tokens are burned, and each node gets a reward R from participation, we construct the payoff matrix below:

                                                    Attack not successful    Attack successful
A node rejecting the bribe (follows the protocol)          S + R                    S
A node accepting the bribe                                 B1                       B2

Since we are considering a validation task where any Byzantine action is objectively attributable, a node that behaves honestly won’t be slashed on Ethereum even if the attack is successful (top right cell in the matrix). On the other hand, a node agreeing to take the bribe and behaving adversarially would get objectively slashed on Ethereum (bottom row in the matrix). If ⅓ of the total stake must be corrupted for the attack to be successful, the cost-of-corruption would be at least \(\frac{N}{3}\) × S.

We also consider the extreme case where all stake with the PoS protocol is concentrated in the hands of one node. This is an important scenario, as it anticipates the eventual centralization of stake. Given our assumption of no token toxicity on the token being re-staked, if there is no slashing, the centralized node can behave in a Byzantine manner with impunity. But with slashing, this Byzantine centralized node can be punished on the base protocol.   

Slashing for attributable attacks vs slashing for non-attributable attacks

There is an important subtlety between slashing for attributable attacks and slashing for non-attributable attacks. Consider the case of safety failures in a BFT protocol. Usually, they arise from the Byzantine action of double-signing with the aim of crippling the safety of the blockchain – an example of an attributable attack, since we can pinpoint which nodes attacked the safety of the system. On the other hand, the Byzantine action of censoring transactions to cripple the liveness of the blockchain is an example of a non-attributable attack. In the former case, slashing can be done algorithmically by supplying the evidence of double-signing to the state machine of the blockchain. 

In contrast, slashing for censoring transactions can’t be done algorithmically, because it can’t be proven algorithmically whether a node is actively censoring or not. In this case, a protocol may have to rely on social consensus to perform slashing: a certain fraction of nodes can perform a hard fork that slashes those nodes accused of participating in censoring. Only if a social consensus emerges would this hard fork be considered the canonical fork.

We have defined cost-of-corruption as the minimum cost of performing a safety attack. Measuring it, however, requires a property of the PoS protocol called accountability: in case the protocol loses safety, there should be a way to attribute blame to a fraction of nodes (⅓ of the nodes for a BFT protocol). The analysis of which protocols are accountable turns out to be nuanced (see the paper on BFT protocol forensics). Furthermore, longest-chain protocols that are dynamically available (such as PoSAT) cannot be accountable. (See this paper for an exposition of the tradeoff between dynamic availability and accountability, and some ways to resolve such fundamental tradeoffs.)

Pitfalls of slashing and mitigation

As with any technique, slashing comes with its own risks if not implemented carefully:

  • Misconfigured clients / loss of keys. One of the pitfalls of slashing is that innocent nodes might get penalized disproportionately because of non-intentional faults such as misconfigured keys or loss of keys. To address concerns regarding the disproportionate slashing of honest nodes for inadvertent mistakes, protocols can adopt certain slashing curves that penalize leniently when only a small amount of stake behaves inconsistently with the protocol but penalize heavily when more than a threshold fraction of stake is executing upon a strategy that is in conflict with the protocol. Ethereum 2.0 has adopted such an approach. 
  • Credible threat of slashing as a lightweight alternative. A PoS protocol that does not implement algorithmic slashing could instead rely on the threat of social slashing – that is, in the case of a safety failure, the nodes agree to adopt a hard fork of the chain in which the misbehaving staked nodes lose their funds. This requires significant social coordination compared to algorithmic slashing, but as long as the threat of social slashing is credible, the game-theoretic analysis presented above continues to hold for protocols that rely on committed social slashing instead of algorithmic slashing. 
  • Social slashing for liveness faults is fragile. Social slashing is necessary for penalizing non-attributable attacks, such as liveness faults like censorship. While social slashing can in theory be implemented for non-attributable faults, it is difficult for a newly joining node to verify whether such social slashing happened for the right reasons (censorship) or because the nodes were wrongfully accused. This ambiguity does not exist when using social slashing for attributable faults, even when there is no software implementation of slashing: newly joining nodes can verify that the slashing was legitimate by checking the double-signing evidence, even if only manually. 
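The slashing-curve mitigation in the first bullet can be sketched as a simple function. The shape below is loosely inspired by Ethereum's correlation penalty, but the ⅓ threshold and linear form here are illustrative assumptions, not Ethereum's actual rules:

```python
def slash_fraction(offending_stake_fraction: float, threshold: float = 1 / 3) -> float:
    """Fraction of an offender's stake to slash, given the fraction of total
    stake misbehaving in the same window: lenient for isolated faults,
    total above the attack threshold."""
    if offending_stake_fraction >= threshold:
        return 1.0  # misbehavior at attack scale: slash everything
    # penalty scales with how correlated the misbehavior is
    return offending_stake_fraction / threshold

# A lone misconfigured node (0.1% of stake) loses under 1% of its stake,
# while a coordinated 1/3 coalition loses its entire stake.
assert slash_fraction(0.001) < 0.01
assert slash_fraction(1 / 3) == 1.0
```

The point of the curve is that an honest operator's worst-case loss from a solo mistake is small, while the cost-of-corruption for a coalition at the attack threshold remains the full \(\frac{N}{3}\) × S.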

What to do with slashed funds? 

There are two possible ways to deal with slashed funds: burning and insurance.

  • Burning. The straightforward way to deal with slashed funds is simply to burn them. Assuming the total value of the tokens doesn’t change due to the attack, the value of each remaining token would increase proportionally. Burning does not identify the parties harmed by the safety failure and compensate only them; instead, it indiscriminately benefits all non-attacking token holders. 
  • Insurance. A more sophisticated mechanism for distributing slashed funds, which has not yet been studied, involves insurance bonds issued against slashing. Clients making transactions on the blockchain may obtain these insurance bonds on the blockchain in advance to protect themselves against potential safety attacks, insuring their digital assets. When an attack compromising safety happens, the algorithmic slashing of the stakers creates a fund that can then be distributed to bondholders proportional to their bonds. (A full analysis of these insurance bonds is underway.) 
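A minimal sketch of the pro-rata payout such an insurance mechanism might use follows; the mechanism itself is speculative, as noted above, and the bondholder names are hypothetical:

```python
def distribute_slashed_funds(slashed_total: float, bonds: dict[str, float]) -> dict[str, float]:
    """Split slashed stake among insurance bondholders in proportion to bond size."""
    total_bonds = sum(bonds.values())
    if total_bonds == 0:
        return {holder: 0.0 for holder in bonds}
    return {holder: slashed_total * size / total_bonds
            for holder, size in bonds.items()}

# 90 tokens slashed; "alice" and "bob" are hypothetical bondholders.
payouts = distribute_slashed_funds(90.0, {"alice": 10.0, "bob": 20.0})
assert payouts == {"alice": 30.0, "bob": 60.0}
```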

Status of slashing in the ecosystem

To the best of our knowledge, the benefits of slashing were first explored by Vitalik in this 2014 article. The Cosmos ecosystem built the first functioning implementation of slashing in its BFT consensus protocol, which slashes validators that fail to participate in proposing blocks or that double-sign equivocating blocks. 

Ethereum 2.0 has also incorporated slashing into its PoS protocol. A validator in Ethereum 2.0 can get slashed for making equivocating attestations or proposing equivocating blocks. The slashing of misbehaving validators is how Ethereum 2.0 achieves economic finality. A validator can also be penalized relatively mildly for missing attestations or for not proposing blocks when it is supposed to do so. 

***

PoS protocols without slashing can be extremely vulnerable to bribing attacks. We use a new model — the Corruption-Analysis Model — to analyze complex bribing attacks, and then use it to show that PoS protocols with slashing have quantifiable security against bribing. While there are pitfalls to incorporating slashing into a PoS protocol, we have presented some possible ways to mitigate them. Our hope is that PoS protocols will use this analysis to evaluate the benefits of slashing in certain scenarios – potentially increasing the safety of the entire ecosystem.

***

Sreeram Kannan is an associate professor at the University of Washington, Seattle, where he runs the blockchain lab and the information theory lab. He was a postdoctoral scholar at the University of California, Berkeley, and a visiting postdoc at Stanford University between 2012 and 2014; before that, he received his Ph.D. in Electrical and Computer Engineering and M.S. in Mathematics from the University of Illinois Urbana-Champaign.

Soubhik Deb is a PhD student at the University of Washington Department of Electrical & Computer Engineering, where he is advised by Sreeram Kannan. His research on blockchains focuses on designing peer-to-peer and consensus-layer protocols that enable novel application-layer features, with achievable performance guarantees under precise security thresholds. 

***

Editor: Tim Sullivan

***

