Thanks to Tim Beiko and Caspar Schwarz-Schilling for helpful comments on earlier drafts!
Ethereum is a notoriously adversarial environment. It has even been compared to a “dark forest”, invoking the terrifying game-theoretic concept from The Three-Body Problem that being visible to other entities in the universe is an unavoidable precursor to being destroyed by them. This reputation mostly comes from weaknesses in the application layer (insecure smart contracts), weaknesses in the social layer (users manipulated into giving up their private keys or unwittingly signing transactions) and the existence of bots extracting value from the transaction mempool. However, sophisticated hackers, acting either as thieves or saboteurs, are also constantly seeking out opportunities to attack Ethereum’s client software. The client software is what turns a computer into an Ethereum node - it is code that defines all the rules for connecting to other nodes, swapping information and agreeing on the state of the Ethereum blockchain. Attacks on the protocol layer are attacks on Ethereum itself.
Soon, Ethereum clients will undergo a major upgrade (“the merge”) that will switch off their protective proof-of-work algorithm and replace it with a proof-of-stake mechanism. There are many reasons for this, which have been discussed at length elsewhere. This will be a philosophical change as well as a technical one: instead of requiring that securing the network is always more expensive than an attacker could plausibly spend, the network will adopt a model that is cheap for honest participants and expensive only for an attacker. The merge to proof-of-stake brings security, sustainability and scalability benefits, but on the other hand the complexity of the client software will grow and so will the protocol’s potential attack surface. Participating in securing the Ethereum blockchain currently requires running a single piece of software; after “the merge” it will require three (execution client, consensus client, validator).
This article gives an overview of known attack vectors on the Ethereum’s consensus layer and outlines how those attacks can be defended. Some basic knowledge of the Beacon Chain is probably required to get the most value from this article. Good introductory material is available here, here and here. Also, it will be helpful to have a basic understanding of the Beacon Chain’s incentive layer and fork-choice algorithm, LMD-GHOST. These are big topics, but I’ve included a very high level primer in the preamble below.
The Beacon Chain is a proof-of-stake blockchain that is secured using Ethereum’s native cryptocurrency, ether. Node operators that wish to participate in validating blocks and identifying the head of the chain deposit ether into a smart contract on Ethereum. They are then paid in ether to run validator software that checks the validity of new blocks received over the peer-to-peer network and applies the fork-choice algorithm to identify the head of the chain. The node operator is now a “validator”. There are two primary roles for a validator: 1) checking new blocks and “attesting” to them if they are valid, 2) proposing new blocks when selected at random from the total validator pool. If the validator fails to do either of these tasks when asked, they miss out on an ether payout. There are also some actions that are very difficult to do accidentally and signify some malicious intent, such as proposing multiple blocks for the same slot or attesting to multiple blocks for the same slot. These are “slashable” behaviors that result in the validator having some amount of ether (up to 0.5 ETH) burned before the validator is removed from the network, which takes 36 days. The slashed validator’s ether slowly drains away across the exit period, but on Day 18 they receive a “correlation penalty” which is larger when more validators are slashed around the same time. The Beacon Chain’s incentive structure therefore pays for honesty and punishes bad actors.
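The scaling of the correlation penalty can be sketched numerically. The following is a simplified, illustrative model loosely based on the spec-style formula; the constant values and names here are assumptions, not the exact client implementation:

```python
# Sketch of the Beacon Chain's correlation penalty, simplified from the
# consensus spec. Constants and names are illustrative, not exact.
PROPORTIONAL_SLASHING_MULTIPLIER = 3   # assumed Bellatrix-era value
INCREMENT = 10**9                      # 1 ETH in gwei
GWEI_PER_ETH = 10**9

def correlation_penalty(effective_balance, total_slashed, total_balance):
    """Extra penalty applied halfway through the exit period: the more stake
    slashed in the same window, the larger the penalty, up to the validator's
    entire effective balance. All amounts in gwei."""
    adjusted = min(total_slashed * PROPORTIONAL_SLASHING_MULTIPLIER, total_balance)
    penalty_numerator = effective_balance // INCREMENT * adjusted
    return penalty_numerator // total_balance * INCREMENT

# A lone slashed validator (32 ETH out of ~10M ETH staked) loses almost nothing:
solo = correlation_penalty(32 * GWEI_PER_ETH, 32 * GWEI_PER_ETH,
                           10_000_000 * GWEI_PER_ETH)
# When a third of all stake is slashed together, the penalty is the whole balance:
mass = correlation_penalty(32 * GWEI_PER_ETH, 3_400_000 * GWEI_PER_ETH,
                           10_000_000 * GWEI_PER_ETH)
```

This is why an isolated accident costs relatively little while coordinated misbehavior is punished with the full stake.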
The fork choice algorithm is run by every validator and its role is to identify the head of the blockchain. Under ideal conditions with entirely honest validators and zero network latency, the fork choice algorithm is not really necessary as there will only ever be one block at the head of the chain. However, in reality some clients receive blocks later than others creating multiple views of the head of the chain and there might be some percentage of misbehaving validators that could be proposing or voting for multiple blocks in the same slot. This means there has to be some algorithm for deterministically picking out the true head from multiple options.
To rewind slightly, the Beacon Chain also ossifies the chain at regular intervals so that its blocks can’t be replaced without >⅓ of the total stake being slashed. This is known as “finality”. The process works by considering the first slot in each epoch to be a “checkpoint”. If a checkpoint gathers attestations (votes) from validators holding at least 2/3 of the total staked ether in the deposit contract, it is referred to as “justified”. Once that checkpoint has another checkpoint justified on top of it, it becomes “finalized”. The fork choice algorithm then only considers blocks in the non-justified portion of the chain. The algorithm that justifies and finalizes the chain is called “Casper FFG”. The fork choice algorithm itself is called LMD-GHOST, standing for “Latest Message Driven Greedy Heaviest-Observed Sub-Tree”, which is a jargon-heavy way of saying the correct chain is the one that has accumulated the most attestations (GHOST), and that if multiple messages are received from the same validator only the latest one counts (LMD). Each validator assesses each block using this rule and adds the heaviest one to its canonical chain.
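The GHOST part of the rule can be sketched in a few lines. This is an illustrative toy, not the spec’s actual head-finding routine (which also applies the LMD rule, proposer boost and equivocation filtering):

```python
# Minimal sketch of the GHOST rule: starting from the latest justified
# block, repeatedly descend into the child whose subtree has accumulated
# the most attestation weight.

def ghost_head(children, weight, justified_root):
    """children: block root -> list of child roots;
    weight: block root -> total attestation weight of that block's subtree."""
    head = justified_root
    while children.get(head):
        # heaviest subtree wins (ties resolved deterministically by clients)
        head = max(children[head], key=lambda c: weight[c])
    return head

# Toy tree: A has children B and C; B's subtree is heavier and B has child D.
children = {"A": ["B", "C"], "B": ["D"]}
weight = {"B": 70, "C": 30, "D": 70}
head = ghost_head(children, weight, "A")
```

Every validator running this same deterministic walk over the same view of attestations arrives at the same head.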
Once per epoch, the validator is required to sign an attestation. This attestation contains two critical pieces of information: an LMD vote and an FFG vote. The LMD vote is the root of the block the validator considers to be the head of the chain. The FFG vote contains the block hash and epoch for the target and source checkpoints, where the source is the most recent justified checkpoint the chain already knows about, and the target is the next checkpoint to be justified.
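As a sketch, the two votes might be represented like this (a simplification of the spec’s attestation-data container; field names here are illustrative):

```python
from dataclasses import dataclass

# Toy model of the two votes carried in a single attestation.

@dataclass
class Checkpoint:
    epoch: int
    root: str  # block root of the epoch-boundary block

@dataclass
class AttestationData:
    slot: int
    head_root: str      # LMD vote: the block the validator sees as head
    source: Checkpoint  # FFG vote: latest justified checkpoint
    target: Checkpoint  # FFG vote: the checkpoint the validator wants justified

att = AttestationData(
    slot=1000,
    head_root="0xabc...",
    source=Checkpoint(epoch=30, root="0xdef..."),
    target=Checkpoint(epoch=31, root="0x123..."),
)
```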
The Beacon Chain’s consensus algorithm is therefore a combination of LMD-GHOST and Casper FFG which are sometimes referred to singularly as Gasper. With this high level background, we can move on to examine some of the potential ways this system could be attacked.
First of all, individuals that are not actively participating in Ethereum (by running client software) can choose to attack the network by targeting the social layer (Layer 0). These attacks pose a risk to Ethereum despite never actually directly influencing the execution of any of Ethereum’s software. Layer 0 is the foundation upon which Ethereum is built, and as such it represents a potential surface for attacks with consequences that ripple through the rest of the stack. Some examples come to mind:
What makes attacks on the social layer especially dangerous is that in many cases very little capital or technical know-how is required to launch an attack. All that is really required is time and malicious intent - hardly scarce resources. It is also interesting to think about how a Layer 0 attack could be a multiplier on a crypto-economic attack. For example, if censorship or finality reversion were achieved by a malicious majority stakeholder, undermining the social layer might make it more difficult to coordinate a community response out-of-band.
Defending against Layer 0 attacks is probably not straightforward, but some basic principles can be established. One is maintaining an overall high signal-to-noise ratio for public information about Ethereum, created and propagated by honest members of the community through blogs, Discord servers, annotated specs, books, podcasts and YouTube. Ethereum.org is a great example of this, especially because they are rapidly translating their extensive documentation and explainer articles into many languages. Flooding a space with high quality information and memes is an effective defense against misinformation - it is the information gaps that are vulnerable. The Ethereum community is good at this, but continued commitment to creating and disseminating quality information is required for long term Layer 0 security.
Another important fortification against social layer attacks is a clear mission statement and governance protocol. Ethereum has positioned itself as the decentralization and security champion among smart-contract layer 1s, while also highly valuing scalability and sustainability. Whatever disagreements arise in the Ethereum community, these core principles are minimally compromised. Appraising a narrative against these core principles, and examining it through successive rounds of review in the EIP (Ethereum Improvement Proposal) process, might help the community distinguish good from bad actors and limit the scope for malicious actors to influence the future direction of Ethereum.
Finally, it is critical that the Ethereum community remains open and welcoming to all participants. A community with gatekeepers, elitism and exclusivity is one especially vulnerable to social attack because it is easy to build “us and them” narratives. On the other hand, an open and inclusive community is one where misinformation is more effectively erased through open-minded discussion. Tribalism and toxic maximalism hurt the community and erode Layer 0 security. Ethereum generally has a very open community that welcomes new participants, but as the community scales this may become increasingly difficult to sustain. Ethereum community members with a vested interest in the security of the network should view their conduct online and in meatspace as a direct contributor to the security of Ethereum’s Layer 0 because as we will discuss later in this article, a strong social layer is the last line of defense against protocol attacks.
Layer 0 attacks might aim to undermine public trust in Ethereum, devalue ether, reduce adoption and make Ethereum vulnerable to being usurped by another competing chain, or to weaken the Ethereum community to make out-of-band coordination more difficult. However, it is not immediately obvious what is to be gained from attacking the Ethereum network itself.
A common misconception is that a successful attack allows an attacker to generate new ether, or drain ether from arbitrary accounts. Neither of these are plausible because all transactions that get added to the blockchain are executed by all the execution clients on the network. They must satisfy basic conditions of validity (e.g. transactions are signed by sender’s private key, sender has sufficient balance, etc) or else they simply revert. There are several outcomes that an attacker might realistically aim for: reorgs, double finality or finality delay.
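A toy model makes these validity conditions concrete. Nothing here is a real client API - verify_signature is a stand-in - but it shows why an attacker cannot conjure ether out of invalid transactions:

```python
# Toy validity check mirroring the conditions in the text: a transaction
# only executes if correctly signed and sufficiently funded (real clients
# also check the nonce, gas limits, chain id, etc.).

def is_valid(tx, balances, verify_signature):
    if not verify_signature(tx["sender"], tx["signature"]):
        return False  # not signed by the sender's private key
    return balances.get(tx["sender"], 0) >= tx["value"] + tx["fee"]

balances = {"alice": 100}
tx = {"sender": "alice", "signature": "sig", "value": 90, "fee": 5}
ok = is_valid(tx, balances, lambda sender, sig: True)
overdrawn = is_valid({**tx, "value": 99}, balances, lambda sender, sig: True)
```

Because every execution client re-runs these checks, a block containing an invalid transaction is rejected by the whole network, not just by its victim.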
A “reorg” is a reshuffling of blocks at the head of the chain. In an attack this would aim to ensure certain blocks are either included or excluded even though they would not be in an honest network. This might allow an attacker to “double spend” by, for example, sending their ether to an exchange and cashing it out into fiat money, then reorganizing the Ethereum chain to remove that transaction so they end up with both the ether and its fiat equivalent. Alternatively, a reorg might allow a sophisticated attacker to extract value from other people’s transactions by front-running and back-running (MEV), or reorgs might consistently prevent someone’s or some group’s transactions from being included in the canonical chain, effectively censoring them from the Ethereum network.
The most extreme form of reorg is “finality reversion” which removes or replaces blocks that have previously been finalized. This is only possible if at least ⅓ of the total staked ether is destroyed - this guarantee is known as “economic finality” - more on this later.
Double finality is the unlikely but severe condition where two forks are able to finalize simultaneously, creating a permanent schism in the chain. This is theoretically possible for an attacker willing to risk 34% of the total staked ether. The community would be forced to coordinate off-chain and come to an agreement about which chain to follow. These kinds of social coordination defenses are explored in detail later.
A finality delay attack prevents the network from reaching the necessary conditions for Casper FFG to finalize sections of the chain. This would be very disruptive to Ethereum’s application layer, since many of the apps that run on top of Ethereum rely upon rapid finality to operate. Without high confidence in the finality of the chain, it is hard to trust financial applications built on top of it. The aim of a finality delay attack is likely simply to disrupt Ethereum - to “watch the world burn” - rather than to directly turn a profit, unless the attacker also holds strategic short positions.
Anyone can run Ethereum’s client software, even without running a validator. People do this because it provides local copies of the blockchain that can be used to verify data very quickly and enables transactions to be submitted to Ethereum privately without going through a centralized third-party such as Infura or Quicknode. However, a node operator that does not also run a validator cannot participate in block production or validation. This means they really don’t influence the network security at all. The potential for a non-validating node operator to attack the Beacon Chain is negligible unless they also mount an unrelated layer 0 attack.
To add a validator to a consensus client, a user is required to stake 32 ether into the deposit contract. With an active validator, a user begins to actively participate in Ethereum’s network security by proposing and attesting to new blocks. With these added responsibilities come rewards in the form of ether payouts but also new opportunities to act vindictively. The validator now has a voice they can use to influence the future contents of the blockchain - they can do so honestly and grow their stash of ether or they can try to manipulate the process to their own advantage, risking their stake. One way to mount an attack is to accumulate a greater proportion of the total stake and then use it to outvote honest validators. The greater the proportion of the stake controlled by the attacker the greater their voting power, especially at certain economic milestones that we will explore later. However, most attackers will not be able to accumulate sufficient ether to attack in this way, so instead they have to use subtle techniques to manipulate the honest majority into acting a certain way.
Fundamentally, all small-stake attacks on the Beacon Chain are subtle variations on two types of validator misbehavior: under-activity (failing to attest/propose, or doing so late) or over-activity (proposing/attesting too many times in a slot). In their most vanilla forms these actions are easily handled by the fork-choice algorithm and incentive layer, but there are clever ways to game those same algorithms to an attacker’s advantage. Several such techniques have been discovered, most involving careful coordination of the timing and propagation of messages to control how different subsets of the total validator set view the state of the blockchain, and therefore how they behave. The next sections will describe some of the ways low-stake attackers could attack the network and how these attacks can be resisted. While these attacks are discussed in the context of small stakes, more colluding validators means more chances for the attacker to propose blocks, a wider distribution of dishonest nodes over the network topology and greater voting power to influence the fork choice algorithm, all of which improve the odds of coordinating many validators to act in a particular way.
Several papers have explained attacks on the Beacon Chain that achieve reorgs or finality delay with only a small proportion of the total staked ether. These attacks generally rely upon the attacker withholding some information from other validators and then releasing it in some nuanced way and/or at some opportune moment. They usually aim to displace some honest block(s) from the canonical chain. These honest blocks have not yet been created at the time the attack starts. This is known as an ex ante reorg, as opposed to an ex post reorg in which an attacker removes an already-validated block from the canonical chain retrospectively. Ex post reorgs are effectively impossible on PoS Ethereum without controlling 2/3 of the staked ether (about $18 billion at current prices). With 66% of the stake the attacker can cause a tie-break between the honest and dishonest fork which may break in their favor (this is decided by the lexicographical order of the competing block roots). With anything less than 66% of the total stake, the chance of an attacker completing an ex post reorg is very low - even with 65% stake they only have <0.05% chance of success.
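The tie-break mentioned above can be sketched as a deterministic comparison of block roots. The direction shown here (smallest root wins) follows the convention described later in this article; what matters for consensus is only that every client applies the same rule:

```python
# Ties between equally-weighted forks are settled by comparing the
# competing 32-byte block roots bytewise. The outcome is deterministic,
# so an attacker can predict it but cannot influence it without
# grinding their block contents.

def tie_break(root_a: bytes, root_b: bytes) -> bytes:
    return min(root_a, root_b)  # bytewise lexicographic comparison

winner = tie_break(b"\x01" * 32, b"\x02" * 32)
```

An even split therefore gives the attacker roughly a coin-flip, which is why a 66% attacker forcing an exact tie only "may" win.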
On the other hand, the same mechanism that protects extremely well against ex post reorgs can be gamed by a sophisticated attacker - under very specific and unlikely network conditions - to create ex ante reorgs. For example, this paper shows how an attacking validator can create and attest to a block (B) for a particular slot n+1 but refrain from propagating it to other nodes on the network. Instead, they hold on to that attested block until the next slot, n+2. An honest validator proposes a block (C) for slot n+2. Almost simultaneously, the attacker releases their withheld block (B) and their withheld attestations for it, and also attests to B being the head of the chain with their votes for slot n+2, effectively denying the existence of honest block C. When honest block D is released, the fork choice algorithm sees D building on top of B as heavier than D building on C. The attacker has therefore managed to remove the honest block C in slot n+2 from the canonical chain using a 1-block ex ante reorg. An attacker with 34% of the stake has a very good chance of succeeding in this attack because their votes give 68% weight to the attacker’s preferred fork, as opposed to 66% for the honest fork, as explained here. This means they do not need to rely on manipulating honest validators to vote with them. In theory, though, this attack could be attempted with smaller stakes. Neuder et al. (2020) described this attack working with a 30% stake, but it was later shown to be viable with 2% of the total stake and then again for a single validator using balancing techniques we will examine in the next section.
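The 68% vs. 66% figures can be checked with simple arithmetic, assuming a 34% attacker and full honest participation in each slot:

```python
# Worked weights for the one-block ex ante reorg described above.
attacker, honest = 34, 66  # percentages of total stake

# Fork containing withheld block B: the attacker's slot-(n+1) attestations
# plus their slot-(n+2) attestations both count toward B's subtree.
weight_B = attacker + attacker   # 68

# Honest fork: only the honest slot-(n+2) attestations for block C.
weight_C = honest                # 66

heavier_fork = "B" if weight_B > weight_C else "C"
```

Fork choice picks the B fork, orphaning honest block C without any help from honest validators.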
A successful reorg attacker cannot change history, but they can dishonestly alter the future. They did not require a majority of staked ether to do this, although their chance of success increases with their stake. Their reorg could feasibly allow them to double-spend or extract MEV by front-running large transactions. This attack could feasibly be extended out to more than one block, but the likelihood of success decreases as the reorg length increases.
A more sophisticated attack can split the honest validator set into discrete groups that have different views of the head of the chain. This is known as a balancing attack. In this case, the attacker waits for their chance to propose a block, and when it arrives they equivocate and propose two in the same slot. They send one block to one half of the honest validator set and the other block to the other half. The equivocation is a slashable offence - once detected, the block proposer would be slashed and ejected from the network - but the two blocks would still exist and would have about half the validator set attesting to each fork. For the cost of a single slashed validator, the attacker has managed to split the chain in two. Meanwhile, the remaining malicious validators hold back their attestations. Then, by selectively releasing the attestations favoring one or the other fork to just enough validators just as the fork-choice algorithm executes, they are able to tip the network into seeing either fork as having the most accumulated attestations. This can continue indefinitely, with the attacking validators maintaining an even split of validators across the two forks. Since neither fork can attract a 2/3 supermajority, the Beacon Chain would not finalize. The greater the portion of the total stake the attacking validators control, the greater the probability that the attack is possible in any given epoch, because it is more likely that one of their validators is selected to propose a block in a given slot. Even with just 1% of the total stake, the opportunity to mount a balancing attack would arise on average once every 100 slots (roughly every three epochs), which is not very long to wait.
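The attack’s goal can be expressed numerically: keep both honest views below the 2/3 threshold that Casper FFG needs. A toy sketch with made-up numbers:

```python
# The balancing attacker's objective: neither fork ever reaches the
# 2/3 supermajority of total stake required for justification.

def can_finalize(votes_for_fork, total_stake):
    return 3 * votes_for_fork >= 2 * total_stake

total = 100
honest_on_A, honest_on_B, attacker_withheld = 48, 47, 5

# The attacker drip-feeds withheld votes to whichever fork is lighter,
# so neither side ever crosses the threshold:
fork_A_finalizes = can_finalize(honest_on_A + attacker_withheld, total)
fork_B_finalizes = can_finalize(honest_on_B + attacker_withheld, total)
```

As long as the honest set stays split near 50/50, a tiny withheld stake is enough to keep both forks short of 67 votes indefinitely.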
A similar attack, also possible with a small percentage of the total stake is a bouncing attack. In this case, votes are again withheld by the attacking validators. This time, instead of releasing the votes to keep an even split between two forks, they use their votes at opportune moments to justify checkpoints that alternate between fork A and fork B. This flip-flopping of justification between two forks prevents there from being pairs of justified source and target checkpoints that can be finalized on either chain, halting finality.
Both bouncing and balancing attacks rely on the attacking validators delaying their attestations until some opportune moment when they can have outsized impact on the network. Therefore, the attacks are only viable under unlikely conditions of network asynchronicity, and only if the attacker has very fine control over message timing through tightly coordinated colluding validators. Nevertheless, it was still necessary to close this attack vector. Two defenses were devised: down-weighting attestations that arrive late relative to those that arrive promptly, and granting a temporary extra fork-choice weight to blocks that arrive early in their slot. The latter is known as proposer-weight boosting.
For bouncing attacks, the fix was to update the fork-choice algorithm so that the latest justified checkpoint can only switch to that of an alternative chain during the first 1/3 of the slots in each epoch. This condition prevents the attacker from saving up votes to deploy later - the fork choice algorithm simply stays loyal to the checkpoint it chose in the first 1/3 of the epoch during which time most honest validators would have voted. The other defense against these delayed-voting attacks is to assign a greater weight to votes that arrive promptly compared to votes that arrive late in each slot.
Combined, these measures create a scenario in which an honest block proposer emits their block very rapidly after the start of the slot, then there is a period of ~1/3 of a slot (4 seconds) during which that new block might cause the fork-choice algorithm to switch to another chain. After that deadline, attestations that arrive from slow validators are down-weighted compared to those that arrived earlier. This strongly favors prompt proposers and validators in determining the head of the chain and substantially reduces the likelihood of a successful balancing or bouncing attack. In essence, these defenses protect against attacks that rely on large network asynchronicity, including variants that do not require fine control over message release. To a large extent, then, the risks of these types of attack have been mitigated by modifications to the fork-choice algorithm that favor prompt activity and penalize delays.
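A sketch of how proposer boosting changes a block’s effective fork-choice weight. The 40% figure mirrors the spec’s proposer-score-boost constant, but treat all numbers here as illustrative:

```python
# Illustrative proposer boost: a block arriving within the first ~1/3 of
# its slot gets a temporary extra weight, sized as a percentage of the
# per-slot attestation weight.

PROPOSER_SCORE_BOOST = 40  # percent (assumed spec value; illustrative)
SECONDS_PER_SLOT = 12

def block_weight(attestation_weight, arrival_offset_s, committee_weight_per_slot):
    boost = 0
    if arrival_offset_s < SECONDS_PER_SLOT // 3:  # arrived in the first 4 seconds
        boost = committee_weight_per_slot * PROPOSER_SCORE_BOOST // 100
    return attestation_weight + boost

# A timely block with few attestations can outweigh a late, better-attested one:
timely = block_weight(10, 2, committee_weight_per_slot=100)  # 10 + 40
late = block_weight(45, 8, committee_weight_per_slot=100)    # 45 + 0
```

The boost is removed at the end of the slot, so it tips races between competing heads without permanently distorting the chain’s weight.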
It is worth noting that proposer boosting alone only defends against “cheap reorgs”, i.e. those attempted by an attacker with a small stake. In fact, proposer boosting itself can be gamed by larger stakeholders in yet another ex ante reorg attack. The authors of this post describe how an attacker with 7% of the stake can deploy their votes strategically to trick honest validators into building on their fork, reorging out an honest block. The honest validators that vote for the adversary’s fork do so promptly, such that the attacker benefits from the proposer boost. Again, this attack was devised assuming ideal latency conditions that are very unlikely to be met in the wild. The greater the attacker’s stake, the greater the odds of a successful attack. However, the odds are still very long for the attacker, and a greater stake also means more capital at risk and a stronger economic disincentive.
The aforementioned bouncing and balancing attacks relied upon malicious validators having very fine control over when their messages were received by other validators on the network, and they have been mitigated effectively by proposer boosting. However, an additional attack has been described that does not rely on such fine-grained control over network latency. In this case, the attacker requires a proposing validator in two consecutive slots (the odds of this happening in any two slots increase the more validators the attacker controls). The first adversarial block proposer proposes a block in slot n, then the second adversarial block proposer proposes a conflicting block in slot n+1, creating a fork. Since neither block proposer equivocated, no slashing occurs. One nuance of the fork choice algorithm is that when forks have equal numbers of attestations, the tie break is resolved in favor of the head with the smallest hash. In this example, let’s say the tie breaks in favor of Fork A. This is knowable by the attacker. The attacker can also estimate the time taken for half the validators on the network to submit their attestations. The withheld votes from slot n - attestations in favor of Fork B - can be released at roughly the point in time when half the validators have voted. Half the validator set therefore votes for Fork A, because they do not have knowledge of the additional attestations on Fork B; the other half votes for a heavier Fork B. The adversarial votes withheld in slot n+1 can be used to make up any shortfall on Fork B due to inaccuracy in the timing of the release of the withheld attestations.
This balancing attack was described for an idealized version of the fork-choice algorithm that has more predictable attestation timing than the fork-choice algorithm actually implemented in Ethereum’s consensus clients and it would be much harder to execute on the real Beacon Chain. Distributing an attacker’s nodes across the network topology could help the attacker overcome this to some degree because their messages would propagate across the entire network faster than if they originate from one topological position.
A balancing attack specifically targeting the LMD rule was also proposed, and was suggested to be viable in spite of proposer boosting. An attacker sets up two competing chains by equivocating their block proposal and propagating each block to about half the network, setting up an approximate balance between the forks. Then, the colluding validators equivocate their votes, timing it so that half the network receives their votes for Fork A first and the other half receives their votes for Fork B first. Since the LMD rule discards the second attestation and keeps only the first for each validator, half the network sees votes for A and none for B, while the other half sees votes for B and none for A. The authors describe the LMD rule as giving the adversary “remarkable power” to mount a balancing attack.
This LMD attack vector was closed by updating the fork choice algorithm so that it discards equivocating validators from the fork choice consideration altogether. Equivocating validators also have their future influence discounted by the fork choice algorithm. This prevents the balancing attack outlined above while also maintaining resilience against avalanche attacks.
Another class of attack, called avalanche attacks, was described in a March 2022 paper. The authors suggest that proposer boosting - the primary defense against balancing and bouncing attacks - does not protect against some variants of avalanche attack. However, the authors also only demonstrated the attack on a highly idealized version of Ethereum’s fork-choice algorithm (they used GHOST without LMD).
To mount an avalanche attack, the attacker needs to control several consecutive block proposers. In each of their proposal slots, the attacker withholds their block, collecting the withheld blocks up until the honest chain reaches an equal subtree weight with them. Then, the withheld blocks are released so that they equivocate maximally. For example, with 6 withheld blocks, the first honest block at slot n competes with an adversarial block at slot n, creating a fork; then all 5 remaining adversarial blocks compete with the honest block at slot n+1. The fork building off the adversarial blocks at n+1 now attracts honest attestations, because the blocks were released at the moment the weight of the honest chain equaled the weight of the adversarial chain. This can be repeated with the withheld blocks that haven’t yet been built on top of, allowing the attacker to prevent honest validators from following the honest head of the chain until the equivocating blocks are used up. If the attacker has more opportunities to propose blocks while the attack is underway, they can use them to extend it, such that the more validators collude on the attack, the longer it can persist and the more honest blocks can be displaced from the canonical chain.
The avalanche attack is mitigated by the LMD portion of the LMD-GHOST fork choice algorithm. LMD means “latest message driven” and it refers to a table kept by each validator containing the latest message received from every other validator. An entry in that table is only updated if the new message is from a later slot than the one already recorded for that validator. In practice, this means that in each slot the first message received is the one accepted, and any additional messages are equivocations to be ignored. Put another way, the consensus clients don’t count equivocations - they use the first-arriving message from each validator and discard the rest, preventing avalanche attacks.
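The LMD bookkeeping described above can be sketched as a small update rule (illustrative; real clients track this inside their fork-choice store):

```python
# Sketch of LMD ("latest message driven") bookkeeping: a validator's entry
# is only replaced by a message from a strictly later slot, so a second
# message for the same slot - an equivocation - is simply ignored.

def update_latest_messages(latest, validator, slot, block_root):
    entry = latest.get(validator)
    if entry is None or slot > entry[0]:
        latest[validator] = (slot, block_root)
        return True
    return False  # same-slot duplicate: an equivocation, discarded

latest = {}
first_accepted = update_latest_messages(latest, "v1", 100, "A")
equivocation_accepted = update_latest_messages(latest, "v1", 100, "B")
```

Because the equivocating message never enters the table, releasing many conflicting blocks and votes buys the avalanche attacker nothing.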
The same paper that first described the low-cost single block reorg attack also described a finality delay (a.k.a. “liveness failure”) attack that relies on the attacker being the block proposer for an epoch-boundary block. This is critical because these epoch-boundary blocks become the checkpoints that Casper FFG uses to finalize portions of the chain. The attacker simply withholds their block until enough honest validators have used their FFG votes in favor of the previous epoch-boundary block as the current finalization target. Then they release their withheld block. They attest to their block, and the remaining honest validators do too, creating forks with different target checkpoints. If timed just right, this prevents finality because there will not be a 2/3 supermajority attesting to either fork. The smaller the stake, the more precise the timing needs to be, because the attacker controls fewer attestations directly, and the lower the odds of the attacker controlling the validator proposing a given epoch-boundary block.
There is also a class of attack specific to proof-of-stake blockchains, known as a “long-range attack”, that involves a validator that participated in the genesis block maintaining a separate fork of the blockchain alongside the honest one, eventually convincing the honest validator set to switch over to it at some opportune time much later. This type of attack is not possible on the Beacon Chain because of the finality gadget that ensures all validators agree on the state of the honest chain at regular intervals (“checkpoints”). This simple mechanism neutralizes long-range attackers because Ethereum clients simply will not reorg finalized blocks. New nodes joining the network do so by finding a trusted recent state hash (a “weak subjectivity” checkpoint) and using it as a pseudo-genesis block to build on top of. This creates a “trust gateway” for a new node entering the network before it can start to verify information for itself. However, the trust required to obtain a checkpoint from a peer or block explorer or elsewhere does not add much to the trust placed implicitly in the client developer teams, hence the subjectivity is “weak”. Because checkpoints are, by definition, shared by all nodes on the network, a dishonest checkpoint is symptomatic of a consensus failure and out-of-band social coordination will have to take over to save the honest validators anyway.
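The core defense can be sketched as a viability filter: any fork that does not descend from the node’s finalized (or weak subjectivity) checkpoint is never even considered by fork choice. A toy illustration:

```python
# Sketch of why long-range forks are ignored: a client only follows heads
# whose ancestry passes through its finalized checkpoint, so a chain built
# in secret from genesis is simply not a candidate.

def is_viable_head(candidate_ancestors, finalized_root):
    # candidate_ancestors: roots on the path from the candidate back to genesis
    return finalized_root in candidate_ancestors

finalized = "F"
honest_fork = ["genesis", "A", "F", "G", "H"]
long_range_fork = ["genesis", "A", "X", "Y", "Z"]  # forked before finality

honest_viable = is_viable_head(honest_fork, finalized)
long_range_viable = is_viable_head(long_range_fork, finalized)
```

However heavy the secretly-built chain becomes, it fails this ancestry check and is discarded without any weight comparison.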
All of this points to the fact that it is very difficult to successfully attack the Beacon Chain with a small stake. The viable attacks described here require an idealized fork-choice algorithm or improbable network conditions, or else the attack vectors have already been closed with relatively minor patches to the client software. This, of course, does not rule out the possibility of zero-days existing out in the wild, but it does demonstrate the extremely high bar of technical aptitude, consensus-layer knowledge and luck required for a minority-stake attacker to be effective. From an attacker’s perspective, their best bet might be to accumulate as much ether as possible and to return armed with a greater proportion of the total stake.
Ethereum’s PoS mechanism picks a single validator from the total validator set to be the block proposer in each slot. This can be computed using a publicly known function, so it is possible for an adversary to identify the next block proposer slightly in advance of their block proposal. The attacker can then spam the block proposer to prevent them swapping information with their peers. To the rest of the network, it would appear that the block proposer was offline and the slot would simply go empty. This could be a form of censorship against specific validators, preventing them from adding information to the blockchain. The cost to the attacker depends upon the bandwidth of the validator - it is much cheaper to launch a denial-of-service attack on a home staker than on a professional with industrial-grade hardware and internet connection, making the hobbyist more vulnerable to censorship. There are some workarounds to this problem, but they too favor professional validators over home stakers. For example, running multiple nodes and separating the block building from the network communication can give an additional layer of protection because the node identity and the validator identity are decoupled. The node runner might switch the identities around or recouple them at short notice to avoid denial-of-service attacks. Longer term, implementing single secret leader election (SSLE) provides more robust mitigation against validator censorship because only the block proposer ever knows they have been selected and the selection is not knowable in advance. All validators submit a commitment to a secret into a pool which is repeatedly shuffled. A random commitment is chosen publicly, but only the chosen validator knows it is the one they submitted - this connection is obfuscated from every other participant. This is not yet implemented, but it is an active area of research and development.
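The commit-shuffle-reveal idea can be illustrated with a toy sketch. This is purely illustrative: a plain hash commitment and a trusted in-memory shuffle stand in for the verifiable cryptographic machinery a real SSLE scheme would use, and the validator names are made up for the example.

```python
import hashlib
import os
import random

def commit(secret: bytes) -> bytes:
    """Toy commitment: a hash hides the secret until its owner reveals it."""
    return hashlib.sha256(secret).digest()

# Each validator submits a commitment to a private secret into a shared pool.
secrets = {f"validator_{i}": os.urandom(32) for i in range(8)}
pool = [commit(s) for s in secrets.values()]

# The pool is repeatedly shuffled (a real protocol uses verifiable shuffles so
# that no single party learns the final ordering; here we just shuffle once).
random.shuffle(pool)

# A commitment is chosen publicly...
chosen = pool[0]

# ...but only the validator whose secret matches knows they were selected.
selected = [name for name, s in secrets.items() if commit(s) == chosen]
assert len(selected) == 1  # exactly one validator privately learns they won
```

The key property being sketched is that observers see only an opaque commitment being chosen, so there is no proposer identity to target with a denial-of-service attack before the block is published.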
Spreading control of the staked ether across more humans is safer than allowing it to concentrate into fewer hands. This is because the more stake one individual controls, the more influence they can have over Ethereum’s consensus. All of the attacks mentioned previously in this article become more likely to succeed when the attacker has more staked ether to vote with, and more validators that might be chosen to propose blocks in each slot. A malicious validator might therefore aim to control as much staked ether as possible.
33% of the staked ether is a benchmark for an attacker because with anything greater than this amount they have the ability to prevent the Beacon Chain from finalizing without having to finely control the actions of the other validators. They can simply all disappear together. This is because for the Beacon Chain to finalize, pairs of checkpoints must be attested by 2/3 of the staked ether. If 1/3 or more of the staked ether is maliciously attesting or failing to attest, then a 2/3 supermajority cannot exist. The defense against this is the Beacon Chain’s inactivity leak. This is an emergency security measure that triggers after the Beacon Chain fails to finalize for four epochs. The inactivity leak identifies those validators that are failing to attest or attesting contrary to the majority. The staked ether owned by these non-attesting validators is gradually bled-away until eventually they collectively represent less than 1/3 of the total so that the chain can finalize again.
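The arithmetic behind the 1/3 threshold can be made concrete with a minimal sketch (illustrative figures, not client code):

```python
def can_finalize(attesting_stake: float, total_stake: float) -> bool:
    # Casper FFG finalizes a checkpoint only when the attestations in its
    # favor represent at least 2/3 of the total staked ether.
    return 3 * attesting_stake >= 2 * total_stake

TOTAL = 10_000_000  # illustrative total stake, in ETH

# With the whole validator set participating, finality is reachable.
assert can_finalize(TOTAL, TOTAL)

# If an attacker controlling just over 1/3 of the stake goes silent,
# the remaining honest stake falls short of the 2/3 supermajority.
attacker = TOTAL * 0.34
assert not can_finalize(TOTAL - attacker, TOTAL)
```

Note that the attacker never needs to vote dishonestly here - simply disappearing is enough to stall finality, which is exactly the scenario the inactivity leak is designed to resolve.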
The purpose of the inactivity leak is to get the Beacon Chain finalizing again. However, the attacker also loses a portion of their staked ether. Assuming there is no slashable offense (equivocating, proposing multiple blocks…) and the attacking validators are simply failing to attest, their inactivity score is updated, which signals to the rest of the network that these validators are to be penalized in every epoch until their inactivity score returns to zero. The penalty applied in each epoch scales with the validator’s inactivity score, which in turn grows with the number of epochs the chain has failed to finalize - and penalties accrue not only while there is a leak but also for a “refractory period” afterwards. While the inactivity leak is active, the inactive validators’ scores are increased by 4 in each epoch, while active validators’ scores decrease by 1. Once the inactivity leak deactivates (and the Beacon Chain is finalizing again) the inactivity scores of all active validators decrease. This takes longer for validators who were inactive for longer because they have a larger inactivity score to deplete, and validators who remain inactive deplete their inactivity score more slowly. For a validator that stays offline for 100 epochs, their inactivity score would reach 400. The magnitude of the penalty is calculated as:
penalty = inactivity_score * validator_balance / (inactivity_score_bias * inactivity_penalty_quotient)
where the inactivity score bias is the number added to the validator’s inactivity score in each epoch, and the inactivity penalty quotient is the square of the number of epochs taken to reduce a non-attesting validator’s balance to about 60% of its initial value, set to around 37.5 days. This means the longer the attacker blocks finality by failing to attest, the more of their stake is burned. Upgrading Ethereum shows a graph estimating the decrease in validator balance during and after a short (100 epoch, ~10.7 hour) inactivity leak for a validator who is always offline. After 135 epochs the validator’s balance has decreased from 32 ETH to 31.996 ETH - a loss of 0.004 ETH. For an attacker to take control of 33% of the stake, they would have to stake roughly 3,300,000 ETH across upwards of 100,000 validators, each staking at least 32 ETH. This means that their attack delaying Beacon Chain finality would cost at least
0.004 x 103,000 ≈ 412 ETH, which at current prices equates to about $1.2 million USD. Over a million dollars to delay finality for half a day, with minimal long-term consequences for the Beacon Chain itself. Of course, more persistent inactivity leaks are more expensive - in fact the magnitude of the penalty increases quadratically with time until the Beacon Chain starts finalizing again - the longer the inactivity leak persists, the faster the penalty accumulates! The precise cost of a finality-delaying attack by a validator or colluding group of validators depends on their initial balances, the time they remain offline and the time taken to regain finality. However, the bottom line is that persistent inactivity across validators representing 33% of the total staked ether is extremely expensive, even though the validators are never slashed.
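The per-validator cost can be sketched by simulating the inactivity score and penalty formula above. The constants below (a score bias of 4, a recovery rate of 16 once the leak ends, and a penalty quotient of 3 × 2^24) are taken from the Altair consensus spec at the time of writing; treat the exact values and the simplified update rule as assumptions of this sketch rather than client behavior.

```python
GWEI_PER_ETH = 10**9
INACTIVITY_SCORE_BIAS = 4            # score added per epoch of non-participation
INACTIVITY_SCORE_RECOVERY_RATE = 16  # extra score decay per epoch once the leak ends
INACTIVITY_PENALTY_QUOTIENT = 3 * 2**24  # scales how gently the leak bleeds stake

def offline_validator_loss(leak_epochs: int, total_epochs: int) -> float:
    """Track one always-offline validator through a leak and the refractory
    period afterwards; returns the loss in ETH."""
    start = balance = 32 * GWEI_PER_ETH
    score = 0
    for epoch in range(total_epochs):
        score += INACTIVITY_SCORE_BIAS   # offline every epoch, so the score grows...
        if epoch >= leak_epochs:         # ...but drains faster once the leak is over
            score = max(0, score - INACTIVITY_SCORE_RECOVERY_RATE)
        # The penalty applies in every epoch the score is non-zero.
        balance -= score * balance // (INACTIVITY_SCORE_BIAS * INACTIVITY_PENALTY_QUOTIENT)
    return (start - balance) / GWEI_PER_ETH

loss = offline_validator_loss(leak_epochs=100, total_epochs=135)
print(f"loss after a 100-epoch leak: {loss:.4f} ETH")  # on the order of 0.004 ETH
```

Because the score climbs linearly while the leak persists, the cumulative penalty grows roughly quadratically, which is why longer leaks are disproportionately more expensive for the attacker.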
Assuming that the Ethereum network is asynchronous (i.e. there are delays between messages being sent and received), an attacker controlling 34% of the total stake could cause double finality. This is because the attacker can equivocate when they are chosen to be a block proposer, then double-vote with all of their validators. This creates a situation where two forks of the blockchain exist, each with 34% of the staked ether voting for it. Each fork only requires 50% of the remaining validators to vote in its favor for both forks to be supported by a supermajority, in which case both chains can finalize (because the attacker’s 34% plus half of the remaining 66% gives 67% on each fork). The competing blocks would each have to be received by about 50% of the honest validators, so this attack is viable only when the attacker has some degree of control over the timing of messages propagating over the network, allowing them to nudge half the honest validators onto each chain. This is also why the attack requires network asynchrony - if all nodes received messages instantaneously they would immediately be aware of both blocks and all handle the equivocation in the same way. The attacker would necessarily destroy their entire stake (34% of ~10 million ether with today’s validator set) to achieve this double finality, because all of their validators would be double-voting simultaneously - a slashable offense with the maximum correlation penalty. The defense against this attack is only the very large cost of destroying 34% of the total staked ether. Recovering from it would require the Ethereum community to coordinate “out-of-band” and agree to follow one of the forks and ignore the other. The complexities associated with this social backstop are discussed later.
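The supermajority arithmetic behind the 34% double-finality attack is easy to verify (a toy calculation, not client code):

```python
attacker = 0.34
honest = 1.0 - attacker  # 0.66 of the stake, split across the two forks

# The attacker's validators double-vote on both forks; honest validators are
# split roughly evenly between them by the manipulated message timing.
support_fork_a = attacker + honest / 2
support_fork_b = attacker + honest / 2

# Both forks clear the 2/3 finality threshold, so both can finalize.
assert support_fork_a >= 2 / 3 and support_fork_b >= 2 / 3
print(f"support on each fork: {support_fork_a:.2f}")  # 0.67
```

This also shows why 34% (rather than 33%) is the relevant threshold: with exactly 1/3 of the stake, the attacker plus half the honest validators would sit just below the 2/3 line on each fork.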
At 50% of the staked ether, a mischievous pool of validators could in theory split the chain into two equally sized forks. Similar to the balancing attacks described earlier, the attacker could use just one of their validators to equivocate by proposing two blocks for the same slot. Then, instead of needing to manipulate half the network by carefully timed message delivery, they could simply use their whole 50% stake to vote contrarily to the honest validator set, thereby maintaining two forks and preventing finality. After four epochs the inactivity leak would activate on both forks because each would see half of its validators failing to attest. Each fork would leak away the stake of the opposing half of the validator set, eventually resulting in both chains finalizing with different validators representing a 2/3 supermajority. At this point, the only option is to fall back on a social recovery, as described later on. It seems highly unlikely, however, that an adversarial group of validators could consistently control precisely 50% of the total stake given a degree of flux in honest validator numbers, network latency etc., although with slightly over 50% of the stake they could perhaps dynamically adjust the portion of their pool voting in each slot to maintain a perfect balance between two forks. While the risk of a successful attack undoubtedly increases with the size of the adversarial stake, the attack vector associated with exactly 50% of the stake seems unlikely to be exploited successfully - the huge cost of mounting such an attack combined with the low likelihood of success appears to be a strong disincentive for a rational attacker.
At just over 51% of the total stake, however, the attacker could dominate the fork choice algorithm. In this case, the attacker would be able to attest with the majority vote, giving them sufficient control to do short reorgs without needing to fool honest clients. 51% of the stake does not allow the attacker to change history, but it does let them influence the future by applying their majority votes to favorable forks and/or reorging inconvenient non-justified blocks out of the chain. The honest validators would follow suit because their fork choice algorithm would also see the attacker’s favored chain as the heaviest, so the chain could finalize. This enables the attacker to censor certain transactions, do short-range reorgs and extract maximum MEV by reordering blocks in their favor. As on proof-of-work chains, a 51% attack is extremely problematic. The defense against this is the huge cost of a majority stake (currently just under $19 billion USD), which the attacker puts at risk because the social layer is likely to step in and adopt an honest minority fork, devaluing the attacker’s stake dramatically.
An attacker with 66% or more of the total staked ether can finalize their preferred chain without having to coerce any honest validators. The attacker can simply vote for their preferred fork and then finalize it, simply because they can vote with a dishonest supermajority. As the supermajority stakeholder, the attacker would always control the contents of the finalized blocks, with the power to spend, rewind and spend again, censor certain transactions and reorg the chain at will. By purchasing additional ether to control 66% rather than 51%, the attacker is effectively buying the ability to do ex post reorgs and finality reversions (i.e. change the past as well as control the future). The cost of 66% of the total stake is currently about $25 billion USD. The only real defense here is to fall back to the social layer to coordinate adoption of an alternative fork. We can explore this in more detail in the next section.
What happens when the coded defenses are breached and an attacker becomes able to finalize a dishonest chain?
This scenario can arise in multiple ways - most obviously when the attacker has a supermajority stake and can simply finalize with their own votes, or with 51% plus additional attestations from honest validators. With 34% of the stake and some control over message delivery across the network the attacker can finalize two forks. There are also scenarios where a reorged chain could be finalized as a consequence of the inactivity leak. If an attacker successfully equivocates and divides the validator set across two forks, the inactivity leak will activate on both. The question then becomes - will the honest or dishonest validators regain finality first? If the honest validators finalize first, the honest chain becomes canonical - the fork choice algorithm in all clients across the network accepts the finalized portion of the chain and Ethereum is back in the control of honest players. However, if the dishonest validators manage to finalize the chain, the Ethereum community is in a very difficult situation. The canonical chain includes a dishonest section baked into its history, while honest validators end up being punished for attesting to an alternative (honest) chain. A third (unlikely) possibility is a permanent network schism where validators on one fork are somehow unaware of their counterparts on the opposing fork. This would create two forks that both finalize independently of one another, each one leaking away the stakes of the opposite set of validators. These two chains could then never be re-united because they would have different finalized checkpoints. A corrupted-but-finalized chain could also result from a bug (rather than an attack) in a majority client. On Ethereum’s execution layer the go-ethereum (Geth) client overwhelmingly dominates, being run by >85% of all nodes. On the consensus layer, Prysm currently dominates - until recently being run by >66% of the total validators (now down to ~50% after a sustained community campaign).
It is possible that bugs in majority execution or consensus clients could halt finality or lead to incorrect data being finalized. On the Kiln testnet a bug in Prysm affected block production - this was inconsequential because the nodes had a roughly equal share of four different clients, but the same bug on mainnet would have been experienced by >66% of the clients. There are therefore several (very low probability) routes to a dishonest finalized chain. They all require either an enormous investment in staked ether (which is then put at risk by the attacker) or very sophisticated manipulation of the validator set, which has so far only been shown to be feasible under idealized conditions and has anyway been mitigated by software updates. Nevertheless, these scenarios cannot be ruled out as impossible. In the end, the ultimate fallback is to rely on the social layer - Layer 0 - to resolve the situation.
One of the strengths of Ethereum’s PoS consensus is that there is a range of defensive strategies the community can employ in the face of an attacker. A minimal response could be to forcibly exit the attacker’s validators from the network without any additional penalty. To re-enter the network the attacker would have to join an activation queue that ensures the validator set grows gradually. For example, adding enough validators to double the amount of staked ether takes about 200 days, effectively buying the honest validators 200 days before the attacker can attempt another 51% attack. However, the community could also decide to penalize the attacker more harshly, by revoking past rewards or burning some portion (up to 100%) of their staked capital.
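The ~200-day figure follows from the activation queue’s churn limit. A rough sketch, assuming the consensus spec’s churn parameters (a minimum of four activations per epoch, scaling with the size of the validator set) and an illustrative starting set of ~330,000 validators:

```python
EPOCHS_PER_DAY = 225            # 6.4-minute epochs
MIN_PER_EPOCH_CHURN_LIMIT = 4   # spec minimum activations per epoch
CHURN_LIMIT_QUOTIENT = 2**16    # churn grows with the active validator count

def churn_limit(active_validators: int) -> int:
    """Maximum validator activations the queue admits per epoch."""
    return max(MIN_PER_EPOCH_CHURN_LIMIT, active_validators // CHURN_LIMIT_QUOTIENT)

def days_to_double(active_validators: int) -> float:
    """Epoch-by-epoch estimate of the time to double the validator set."""
    target, epochs = 2 * active_validators, 0
    while active_validators < target:
        active_validators += churn_limit(active_validators)
        epochs += 1
    return epochs / EPOCHS_PER_DAY

# With ~330,000 active validators (~10.5M ETH staked), doubling the
# validator set takes on the order of 200 days.
print(f"{days_to_double(330_000):.0f} days")
```

The churn limit rises as the set grows, so the estimate shortens somewhat over the course of the doubling, but the order of magnitude - months, not days - is the point.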
Whatever the penalty imposed on the attacker, the community also has to decide together whether the dishonest chain, despite being the one favored by the fork choice algorithm coded into the Ethereum clients, is in fact invalid, and whether the community should build on top of the honest chain instead. Honest validators could collectively agree to build on top of a community-sanctioned fork of the Ethereum blockchain that might, for example, have forked off the canonical chain before the attack started or have the attacker’s validators forcibly removed. Honest validators would be incentivized to build on this chain because they would avoid the penalties applied to them for failing (rightly) to attest to the attacker’s chain. Exchanges, on-ramps and applications built on Ethereum would presumably prefer to be on the honest chain and would follow the honest validators to it. However, this would be an extremely messy governance challenge. Some users and validators would undoubtedly lose out as a result of the switch back to the honest chain, transactions in blocks validated after the attack could potentially be rolled back, disrupting the application layer, and the move runs contrary to the ethos of users who believe “code is law”. Exchanges and applications will most likely have linked off-chain actions to on-chain transactions that may now be rolled back, starting a cascade of retractions and revisions that would be hard to unpick fairly, especially if any ill-gotten gains have been mixed or deposited into DeFi protocols and derivatives, with secondary effects for honest users. Undoubtedly some users, perhaps even institutional ones, would have already benefited from the dishonest chain, either through shrewdness or serendipity, and might oppose a fork to protect their gains. There have been calls to rehearse the community response to >51% attacks so that a sensible, coordinated mitigation could be executed quickly.
There is some useful discussion of these scenarios by Vitalik on ethresear.ch and on Twitter.
Governance is already a complicated topic. Managing a Layer-0 emergency response to a dishonest finalizing chain would undoubtedly be challenging for the Ethereum community, but it has happened - twice - in Ethereum’s history. Nevertheless, there is something fairly satisfying in the final fallback sitting in meatspace. Ultimately, even with this phenomenal stack of technology above us, if the worst were ever to happen, real people would have to coordinate their way out of it.
This article has explored some of the ways attackers might attempt to exploit the Beacon Chain after Ethereum’s merge to proof of stake. Reorgs and finality delays were explored for attackers with increasing proportions of the total staked ether. Overall, a richer attacker has more chance of success because their stake translates to voting power they can use to influence the contents of future blocks. At certain threshold amounts of staked ether, the attacker’s power levels up:
33%: delay finality
34%: cause double finality
51%: censorship, control over blockchain future
66%: censorship, control over blockchain future and past
There are also more sophisticated attacks that require only small amounts of staked ether but rely on the attacker having fine control over message timing to sway the honest validator set in their favor.
Overall, despite these potential attack vectors the risk to the Beacon Chain is low, certainly lower than for proof-of-work equivalents. This is because of the huge cost of the staked ether put at risk by an attacker aiming to overwhelm honest validators with their voting power. The built-in “carrot and stick” incentive layer protects against most malfeasance, especially for low-stake attackers. More subtle bouncing and balancing attacks are also unlikely to succeed because real network conditions make the fine control of message delivery to specific subsets of validators very difficult to achieve, and client teams have quickly closed the known bouncing, balancing and avalanche attack vectors with simple patches.
34%, 51% or 66% attacks would likely require out-of-band social coordination to resolve. While this would likely be painful for the community, the ability to respond out-of-band is a strong disincentive for an attacker. The Ethereum social layer is the ultimate backstop - a technically successful attack could still be neutered by the community agreeing to adopt an honest fork. There would be a race between the attacker and the Ethereum community - the (currently) ~$25 billion spent on a 66% attack would probably be obliterated by a successful social coordination response, if it was delivered quickly enough, leaving the attacker holding heavy bags of illiquid staked ether on a known dishonest chain ignored by the Ethereum community. The likelihood of such an attack ending up profitable is sufficiently low as to be an effective deterrent. This is why investment in maintaining a cohesive social layer with tightly aligned values is so important.