Brought to you by @0xfan
The merge will switch Ethereum's consensus layer from an energy-intensive POW model to a POS model built on a hybrid consensus mechanism (LMD-GHOST and Casper FFG), providing safety and liveness simultaneously without introducing additional synchrony assumptions.
After the merge, TPS will improve only slightly, and transaction fees will not change significantly for the foreseeable future. Scaling Ethereum still depends on the maturity of rollups and data sharding, not the merge itself.
The re-org on May 25th, 2022 was caused mainly by the proposer boost update being shipped as a soft fork, and will not recur once the merge occurs.
The merge is tentatively planned to occur in the week of 19th September. However, the exact merge date for Mainnet still needs to be decided on the AllCoreDevs call after the Goerli Merge.
After the merge, ETH will become the first natively deflationary, productive digital asset with real yield.
Given Ethereum’s several-month-long queue, liquid staking derivatives provide an effective way to skip this queue and get started on earning rewards. Therefore, we could see staking derivatives like stETH trading at a premium to spot ETH.
EIP-4844 and Danksharding are both essential to the rollup-centric future of Ethereum.
There are two chains running in parallel on the Ethereum Network - the Beacon chain and Mainnet. Once these two chains merge, the Beacon chain will act as the new consensus layer, utilizing the new POS model, while Mainnet will remain the execution layer.
In a nutshell, the new POS model introduces a hybrid consensus mechanism combining LMD-GHOST and Casper FFG. The FLP impossibility result shows that even a single faulty process makes it impossible for deterministic asynchronous processes to reach consensus. In other words, no distributed system can simultaneously have safety, liveness, and asynchrony unless additional strong assumptions are made. In this design, LMD-GHOST favors liveness, guaranteeing that the chain keeps making progress, while Casper FFG provides safety by adding finality to Ethereum. Together, for the first time, they provide liveness and safety for Ethereum simultaneously without introducing additional synchrony assumptions.
LMD-GHOST will be used as the fork-choice rule, meaning the block whose subtree carries the greatest weight of attestations is selected as the head of the current chain. This voting process happens every slot, each lasting 12 seconds, and is performed by committees randomly selected from the active validator set every epoch (32 slots).
Every epoch, validators are divided across slots and then subdivided into committees. All validators in an epoch first vote on the same checkpoint to finalize blocks (Casper FFG). Then, in each slot, the committee attests to the block built by the proposer to determine the current head of the Beacon chain. The contents of the Beacon chain therefore primarily consist of a registry of validator addresses, the state of each validator, attestations, and links to shards.
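The fork-choice walk described above can be sketched in a few lines of Python. This is an illustrative toy with hypothetical block names and vote weights, not the consensus-spec implementation:

```python
# Toy LMD-GHOST: starting from the last justified block, repeatedly descend
# into the child whose subtree carries the greatest attestation weight.
def lmd_ghost_head(children, votes, root):
    """children: block -> list of child blocks
    votes: block -> weight of latest attestations for that block"""
    def subtree_weight(block):
        return votes.get(block, 0) + sum(
            subtree_weight(c) for c in children.get(block, []))

    head = root
    while children.get(head):
        head = max(children[head], key=subtree_weight)
    return head

# Hypothetical fork: A has children B and C; B has child D.
# D's subtree carries more weight, so the walk picks B, then D.
children = {"A": ["B", "C"], "B": ["D"]}
votes = {"C": 3, "D": 5}
print(lmd_ghost_head(children, votes, "A"))  # -> D
```

In the real protocol the weights come from each validator's latest attestation (hence "latest message driven"), and ties are broken deterministically.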
Switching from POW to POS will reduce Ethereum's energy usage by ~99.5%, and block time will drop from ~13 secs on average to a constant 12 secs, producing only a tiny decrease in fees. Therefore, transaction fees will not change significantly for the foreseeable future.
By estimation, we find Solana would still be the highest TPS chain, followed by Cosmos (~35 TPS) and Avalanche (~31 TPS). Although the reduced block time would increase the average TPS of Ethereum from 9 to 10 post-merge, Ethereum will still lag behind its competitors. Even though Ethereum will gain finality, the 10x-700x finality gap between Ethereum and other L1s still constrains its scalability.
With the hybrid consensus model introduced into the Beacon chain, re-orgs should in theory be nearly impossible. However, on May 25th, 2022, an unexpected re-org occurred in epoch 121471.
Source: beaconcha.in
Importantly, this re-org does not invalidate the hybrid consensus model. The core reason for the issue was that the proposer boost update was only a soft fork, resulting in a situation where some nodes (attestors) were using the proposer boost and some weren't.
The proposer boost gives an extra weight to votes for a block that arrives on time. Since block 74 did not show up in its slot on time, arriving a bit later than block 75, attestors who had implemented the proposer boost update preferred block 75. However, not all attestors in the network had updated, so the votes were split between blocks 74 and 75, with block 74 holding slightly more votes than block 75.
The crux of the re-org lay at block 76, whose proposer had the proposer boost activated, so the split persisted: the sum of the votes and the boost on block 75 was slightly larger than the accumulated votes on block 74:

votes_{75} < votes_{74}
votes_{75} + boost_{75} > votes_{74}

The chain head at this point was block 76. The situation persisted until the accumulated votes on block 74 finally exceeded the boosted weight of the fork ending at block 81:

votes_{75} + votes_{76} + ... + votes_{81} + boost_{75} < votes_{74}

There was no confusion at block 83, as the impact of the boost had been fully absorbed by that time.
Therefore, this re-org was caused mainly by the proposer boost update being shipped as a soft fork, and it did not affect finality in any way. Since every validator running on the Beacon chain will be required to adopt the proposer boost configuration after the merge, this kind of re-org should not happen again.
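The vote split above can be illustrated with toy numbers (the actual attestation counts and boost weight on May 25th differed):

```python
# Hypothetical weights: block 74's fork holds more raw votes, but the
# proposer boost credited to the timely block 75 briefly outweighs them.
votes_74 = 55   # accumulated attestations on block 74's fork
votes_75 = 48   # accumulated attestations on block 75's fork
boost_75 = 10   # proposer boost credited to block 75

print(votes_75 < votes_74)             # True: raw votes favor block 74
print(votes_75 + boost_75 > votes_74)  # True: boosted weight favors block 75
```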
The recent merge on the Ropsten testnet was a great success, with only some configuration problems and small bugs that could be fixed locally without major changes. As one of the oldest testnets on Ethereum, Ropsten proved to be a perfect place for devs to test, debug, and evaluate the feasibility of the Merge.
In addition, EIP-5133 delays the difficulty bomb until mid-September 2022, aiming to complete the merge before block times noticeably slow down.
The latest consensus-layer call on 14th July agreed on a tentative schedule for the upcoming merges of Goerli and Mainnet:
Goerli client release on the 27th or 28th of July
Goerli Bellatrix on 8th August, and the Goerli Merge on the 11th
If everything goes well after the Goerli Merge, the exact merge date for Mainnet will be decided on the ACD (AllCoreDevs) on 18th August. For now:
Mainnet client release — the week of 22nd August
Mainnet Bellatrix — early September
Mainnet Merge — the week of 19th September
However, this is not a firm date, for the following reasons:
Rather than a do-or-die target, it is a coordination point to help everyone plan.
It is hard to accurately predict when the TTD (terminal total difficulty) will be hit.
There is still some disagreement about the date of the ACD call.
There was a proposal to set the terminal total difficulty for the Goerli merge to 10,790,000, targeting 19th August. The Goerli merge date would be around August 13th if the difficulty per block stayed at its lowest level of the past 1m blocks.
The merge will make ETH a deflationary productive asset. As the demand/supply analysis shows, ETH monthly issuance will drop from ~388k to ~49k, roughly 10x less than the current rate. With an average of ~157,344 ETH burned monthly, the net monthly supply in the base scenario would be ~-147,411 ETH.
Among all three cases shown below, the ETH sell pressure (supply) would be replaced by buy pressure (demand) after the merge, with net monthly demand of ~100k, ~1k, and ~215k ETH in the base, conservative, and optimistic cases respectively. However, a net selling scenario remains possible, depending on the percentage of monthly issuance sold.
Around 12,858,450 ETH has been deposited into the Beacon chain as of 6/15/2022. Since a churn limit function caps the number of newly activated validators in any given epoch, we estimate that staked ETH could reach a maximum of ~38% of total supply if all validators were activated within 12 months post-merge.
The churn limit function indicates that with a churn limit of 6, only the first 6 validators in the activation queue are activated in that epoch. The more active validators there are, the more validators can initiate activation each epoch.
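For reference, the churn limit function in the Beacon chain consensus specs is a one-liner (constants per the phase 0 spec):

```python
# At most this many validators can be activated (or exited) per epoch.
MIN_PER_EPOCH_CHURN_LIMIT = 4
CHURN_LIMIT_QUOTIENT = 65536  # 2**16

def get_validator_churn_limit(active_validator_count):
    return max(MIN_PER_EPOCH_CHURN_LIMIT,
               active_validator_count // CHURN_LIMIT_QUOTIENT)

print(get_validator_churn_limit(400_000))  # 6: at ~400k active validators
print(get_validator_churn_limit(100_000))  # 4: the floor applies
```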
We estimate that it will take at least 15 months for Ethereum to reach the same staking rate as Polkadot (~52%), and 18 months to reach the same rate as BSC (~81%). Given Ethereum’s several-month-long queue, liquid staking derivatives provide an effective way to skip this queue and get started on earning rewards today. Therefore, we could see staking derivatives like Lido's stETH trading at a premium to spot ETH, instead of at a discount like today.
Transaction fees are made up of a base fee and tips (priority fee). The base fee is calculated by the network based on the demand for block space - if the block size is less than the target block size, the protocol would decrease the base fee.
Each block has a targeted size of 15 million gas and a max size of 30 million. Block size is flexible and will change according to the demand of the network at any given time.
Source: ethereum.org
The base fee is calculated by its underlying formula and increases by up to 12.5% per block when the previous block exceeds its target size. With Ethereum's implementation of EIP-1559, the entire base fee is burned, leaving only the tips for the miners (validators after the merge), putting sustained downward pressure on total supply.
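The 12.5% adjustment comes from the EIP-1559 update rule; a simplified sketch (ignoring edge cases around gas limit changes) looks like this:

```python
# Simplified EIP-1559 base fee update: the fee moves by up to 1/8 (12.5%)
# per block, proportional to the parent block's deviation from the target.
BASE_FEE_MAX_CHANGE_DENOMINATOR = 8
GAS_TARGET = 15_000_000

def next_base_fee(parent_base_fee, parent_gas_used):
    if parent_gas_used == GAS_TARGET:
        return parent_base_fee
    delta = abs(parent_gas_used - GAS_TARGET)
    change = parent_base_fee * delta // GAS_TARGET // BASE_FEE_MAX_CHANGE_DENOMINATOR
    if parent_gas_used > GAS_TARGET:
        return parent_base_fee + max(change, 1)
    return parent_base_fee - change

print(next_base_fee(100, 30_000_000))  # 112: full block, +12.5%
print(next_base_fee(100, 0))           # 88:  empty block, -12.5%
```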
Source: Dune
The average burn rate over the past 90 days has been around 88%; we assume this rate stays fixed post-merge, with only the total revenue (gas price * gas used) per day changing in accordance with network activity.
We set the total bear case revenue each month as the average fee during the period between Q3/2018 - Q2/2019, whereas for the optimistic case, we used the average fee amount during Q3/2020 - Q2/2021. We set the current revenue to be the average for the last 90 days.
So far, EIP-1559 has burned ~57.6% of total ETH issuance since the London hard fork. In normal circumstances, the burned ETH reduces the network's daily net supply, and during certain periods (e.g. 1/10/2021 - 1/13/2021) high demand for network transactions drove up both the base fee and tips, leading to a net negative supply of ETH, as shown in the graph. With only around 3k ETH burned daily at present, we strongly anticipate this bearish market sentiment will persist at least until the merge.
Since blocks are not generated at a set rate, and the uncle block rewards are highly network-dependent, it is hard to accurately estimate the daily issuance of ETH under the current POW model. Therefore we use the average daily issuance data from Glassnode (~13,245 ETH/day).
After the merge, daily issuance will be based on the functions derived from the Beacon Chain Consensus-specs. Below, we will estimate the approximate issuance post-merge using the following assumptions:
all validators are active and online
no penalties or slashing
no inclusion delay
each validator's deposit is exactly 32 ETH
the constant parameters are fixed, e.g. base_reward_factor = 64, base_rewards_per_epoch = 4
import math

# This function estimates the annual ETH issuance after the merge,
# given the total stake in ETH.
def get_issuance(total_at_stake):
    # set constants (see assumptions above)
    base_reward_factor = 64
    base_rewards_per_epoch = 4
    deposit = 32              # ETH per validator
    epochs_per_year = 82125   # 225 epochs/day * 365 days

    total_validators = total_at_stake / deposit
    # per-epoch base reward in Gwei (effective balance fixed at 32 ETH = 32e9 Gwei)
    base_rewards = 32e9 * base_reward_factor / math.sqrt(total_at_stake * 1e9) / base_rewards_per_epoch
    # three attestation rewards plus one more base reward per validator per epoch
    validator_rewards = 3 * base_rewards + base_rewards
    # annual issuance across all validators, converted from Gwei to ETH
    issuance = validator_rewards * epochs_per_year * total_validators / 1e9
    return issuance
As shown below, issuance grows sublinearly with the total staking rate. At a 10% staking rate, issuance would be ~0.6 million ETH per year, almost 10x less than the current issuance rate.
Post-merge, validators will earn three types of rewards: tips, block rewards, and MEV. Miners today are forced to sell a large portion of their rewards to pay for hardware costs, whereas stakers, with much lower maintenance costs, face less forced selling pressure. We therefore assume the monthly selling pressure in the base case decreases from 80% to 20% after the merge, with the bull and bear cases adjusted slightly from that ratio.
Based on data from Flashbots, we estimated the MEV improvement to validator rewards post-merge. Currently, there are around 0.4 million validators on the Beacon chain, indicating that each validator could gain an additional ~2% reward from MEV. This would potentially decrease to ~1.7% if total validators in the network reach 0.5 million.
In the base case, the annual block rewards contribute the most (~4.6%), followed by MEV (~2.2%) and then tips (~1.41%). In the bear case, fewer tips would be given to validators (~0.15%), in contrast to the bull case (~2.87%).
With an approximate 8% annual reward, ETH will become a deflationary asset with real yield post-merge.
The merge is just one step towards the rollup-centric future of Ethereum, not the end. Proof-of-stake will make it the perfect consensus and settlement layer for rollups, while the 64 data shards will offer massive data availability for rollups. This rollup-centric roadmap could eventually reach 100,000 TPS. Right now, however, the merge and data sharding are more urgent than improvements to systemic robustness, e.g. DAS.
Ethereum's original idea of sharding differs a little from other chains'. In Polkadot, for example, each shard not only provides data availability but also functions as an execution layer with its own execution logic. In Ethereum, by contrast, execution sharding is unnecessary: that role has been taken by the rollups themselves.
In Ethereum, there will be 64 shards in total, each with an individual proposer and a committee rotating through the validator set. Shard blocks can be built by their own validators running on the shard chain without interacting with the validators of the Beacon chain. However, a committee in each slot of the Beacon chain is still needed to integrate all shard blocks into a beacon block, by crosslinking each shard block's head, block by block.
Drawbacks
That being said, splitting the validators across shards weakens the security of each shard. Given that each committee should contain at least 128 validators, covering 64 shards over 32 slots requires at least 262,144 active validators in the network. Compared with a design where all validators together secure both the beacon chain and all shards, this original solution requires a stronger honest-majority assumption.
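The 262,144 figure follows directly from the committee parameters:

```python
# Minimum active validators implied by the original sharding design:
# at least 128 validators per shard committee, 64 shards, 32 slots per epoch.
MIN_COMMITTEE_SIZE = 128
SHARDS = 64
SLOTS_PER_EPOCH = 32

committees_per_epoch = SHARDS * SLOTS_PER_EPOCH            # 2048 committees
min_validators = MIN_COMMITTEE_SIZE * committees_per_epoch
print(min_validators)  # 262144
```

The 2,048 committees per epoch are also where the 1/2048 fraction discussed below comes from.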
It is also relatively hard to guarantee that every shard block is synchronized to the beacon chain without weakening the asynchrony assumption. And because a shard block is not guaranteed to be confirmed within a single slot, rollup data is not immediately visible to the management contract and has to wait until it is confirmed.
In a nutshell, the biggest innovation introduced by Danksharding is that instead of splitting the committee into different groups, there is only one committee, one builder, and one proposer that is responsible for building and validating execution blocks (Beacon Chain blocks) and shard blocks together.
Clearly, Danksharding's design builds on PBS (proposer/builder separation). A proposer is pseudorandomly selected to propose a block, while the rest of the active validators are grouped into separate committees that attest to the blocks in each slot.
PBS splits the role of the proposer up — it creates a new builder role to handle the block, while the proposer now only selects the winning header to be committed by the builder. Then the committee of attestors would vote to confirm the header and the block body.
Source: ethresear.ch
Danksharding extends this idea of separation: what if we had a super-builder that builds a block containing not only the execution block (Beacon chain block) but all 64 shard blocks as well, with one proposer selecting the header, which is then confirmed by one massive committee containing all the validators in the network? This is what Vitalik pointed out in his "Endgame" piece: centralized block production, decentralized trustless block validation, and censorship resistance.
This design can introduce a lot of new features:
Increased bribery resistance: In the original sharding design, the committee in each shard holds only 1/2048 of the total validator set. With only one committee per slot, 1/32 of the validator set attests to each block, yielding higher safety as well as decentralization.
Simplified sharding process: If each shard chain wants to resist MEV stealing, PBS must be introduced into every shard, a rather sophisticated design. In Danksharding, only one PBS instance is needed, with no shard committee infrastructure and no additional builder infrastructure, making the data sharding future more feasible.
Tight coupling between the beacon chain and shard chains: Unlike the separate-committee design, where a shard block is not guaranteed to be confirmed within a slot, Danksharding builds and confirms the shard and execution blocks together in each slot. There is therefore no need for a system to track shard block confirmations, and synchronous calls between the Beacon chain and rollups become possible.
However, with only one super-builder per slot, that builder gains significant power, making it essential to introduce additional censorship resistance mechanisms.
The crList (Censorship Resistance List) puts a check on the builder's censorship power. In a nutshell, it allows proposers to specify a list of transactions that the builder must include. Because the proposer sees only the header when it accepts the builder's bid, the builder must provide proof that all transactions from the crList are included, along with the block body; otherwise, the block won't be accepted by the committee.
However, solving the censorship problem is not the end. Data shards provide data availability to rollups, so nodes must be able to verify that the data in each shard is available. Data availability sampling (DAS) provides this by requiring each node to download only some randomly selected chunks of data rather than the full set.
With erasure coding, anyone who holds 50% of the erasure-coded data can restore the entire block. Therefore, if we sample enough times (say 20), the probability of an attacker tricking the nodes is less than 2^-20. This means that with enough sampling, and a minimum number of active nodes in the network, data availability can always be correctly verified.
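The sampling bound can be made concrete. If fewer than 50% of the erasure-coded chunks are available, the block is unrecoverable, and each uniformly random sample lands on an available chunk with probability below 1/2:

```python
# Upper bound on the chance that k independent random samples all succeed
# against an unavailable (withheld) block: below 2^-k.
def max_fooling_probability(k):
    return 0.5 ** k

print(max_fooling_probability(20))  # 2^-20, roughly 9.5e-7
```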
Nevertheless, if the block builder constructs a block with incorrect erasure coding, no node could recognize the attack from sampling alone.
So far there are two major ways to handle this:
Fraud proofs: As in the Celestia network, watch nodes could provide a fraud proof if they find a maliciously built block with an incorrect erasure code. Like the fraud proofs used in optimistic rollups, this requires some nodes to fully download all the data in order to validate it, plus a synchrony assumption that the fraud proof arrives within a finite amount of time.
KZG commitments: The original data and its extension are committed to as a polynomial; a prover computes a proof based on the polynomial, which is then confirmed by the verifier. There is no need for a synchrony assumption or a node that downloads the full data, giving lower latency than the fraud-proof method.
However, in the 1D KZG commitment scheme, each of the 64 shards would require k samples in every slot. If the sample size is 512B and k = 30, full nodes would need to download around 1MB of data every block time.
With the 2D KZG commitments used by Danksharding, only 75 samples per block are needed to reach the same probabilistic guarantee as the 1D scheme. Only around 37KB of data needs to be downloaded each block time, roughly 25x less than in the 1D case.
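These bandwidth figures can be checked with quick arithmetic, using the sample size and counts quoted above:

```python
# Per-block-time download for DA sampling: 512-byte samples,
# 30 samples per shard across 64 shards (1D) vs 75 samples total (2D).
SAMPLE_SIZE = 512  # bytes

bytes_1d = 64 * 30 * SAMPLE_SIZE  # 983,040 B, roughly 1 MB
bytes_2d = 75 * SAMPLE_SIZE       # 38,400 B, roughly 37.5 KB
print(bytes_1d, bytes_2d, round(bytes_1d / bytes_2d, 1))  # ratio ~25.6x
```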
Disadvantages
Increased requirements for builders: Danksharding resembles a big-block solution in that it requires one super-builder to handle the building process. Both the computation of the 2D KZG proofs and the distribution of the rather large block within the builder deadline (8 s) impose high hardware and bandwidth requirements.
Excessive power is given to builders: So far, the most feasible way to resist censorship is to integrate the crList.
Although Danksharding promises a great scaling future, it will still take a considerable amount of time (we anticipate 2023 at the earliest), and a lot of infrastructure must pave the road before actual implementation. EIP-4844 (aka Proto-danksharding) is a proposal to implement most of the logic and rules that Danksharding will use in the future. It is not a sharding implementation, since all validators still have to download and validate the entire set of data.
The main feature it introduces is a new transaction type: blob-carrying transactions. These are similar to normal Ethereum transactions, except that each carries an extra chunk of data called a blob.
Blobs are specially designed for rollups as an alternative to the calldata that rollups currently use for data storage. A blob is cheaper and larger (~125 kB) than typical calldata usage (2-10 kB), but cannot be interpreted by EVM execution.
With a new transaction type, it makes sense to redesign the EIP-1559 mechanism to account not only for the gas in each block but also for the blobs. This is known as exponential EIP-1559.
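A sketch of the exponential pricing idea, with hypothetical placeholder constants (the final EIP-4844 parameters were still being debated at the time of writing): the blob fee grows exponentially in the excess blob usage accumulated so far, so sustained above-target demand compounds multiplicatively.

```python
import math

# Hypothetical exponential blob fee: fee = MIN * exp(excess / UPDATE_FRACTION).
MIN_BLOB_BASE_FEE = 1
UPDATE_FRACTION = 8  # placeholder: controls how fast the fee reacts

def blob_base_fee(excess_blobs):
    return MIN_BLOB_BASE_FEE * math.exp(excess_blobs / UPDATE_FRACTION)

# Each additional UPDATE_FRACTION units of excess multiplies the fee by e.
print(blob_base_fee(8) / blob_base_fee(0))  # e, about 2.718
```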
The average block size under this design would be ~1MB (max ~2MB). Compared with the 2-10KB of calldata rollups use now, this achieves up to 100x the capacity (very optimistic), potentially reducing rollup fees ~100x and providing temporary scaling relief by allowing rollups to scale to 2MB per slot after the merge.
However, the enlarged blocks would worsen the state growth problem: ~1MB of blob data produced every 12s could easily add up to 2.5TB per year. As a result, a pruning strategy is introduced in which blobs are deleted after about a month to keep disk use manageable.
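The yearly figure is a straightforward back-of-the-envelope calculation:

```python
# ~1 MB of blob data per 12-second slot, accumulated over a year.
SLOT_SECONDS = 12
BLOB_MB_PER_SLOT = 1

slots_per_year = 365 * 24 * 3600 // SLOT_SECONDS  # 2,628,000 slots
mb_per_year = slots_per_year * BLOB_MB_PER_SLOT
print(mb_per_year / 1e6)  # about 2.6 TB per year
```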
The work that is already done in EIP-4844 includes:
A new transaction type of the exact same format that will need to exist in “full sharding”
All of the execution-layer logic required for full sharding
All of the execution/consensus cross-verification logic required for full sharding
Layer separation between Beaconblock verification and data availability sampling blobs
Most of the Beaconblock logic required for full sharding
A self-adjusting independent gasprice for blobs (multidimensional EIP 1559 with an exponential pricing rule)
So far, the execution-layer work has been finished, leaving only the consensus-layer part. On 6/17/2022, proto-danksharding was successfully demonstrated on a post-merge 4844 devnet. Given the open issues in EIP-4844, we anticipate it will most likely be deployed in 2023.
In summary, proto-danksharding has the following features:
Forward compatibility: It implements some of the logic and rules that Danksharding will use in the future.
Decreased gas fees: By introducing blob-carrying transactions, it extends blocks to a max of ~2MB, potentially lowering rollup fees ~100x compared to today.
Shortened history: A pruning strategy keeps disk use manageable.
EIP-4488 presents a quick-to-implement way to decrease the gas cost of calldata and cap the total calldata per block. It reduces the gas cost of calldata to 3 gas per byte and limits block size to ~1MB, plus an extra 300 bytes per transaction. The average cost of calldata would fall to roughly 1/5 of today's, largely reducing the current high rollup gas fees.
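A quick comparison, assuming a hypothetical payload made up entirely of nonzero calldata bytes (today nonzero bytes cost 16 gas and zero bytes 4 gas, so real savings depend on the payload's byte mix):

```python
# Calldata gas cost for a hypothetical 10 kB all-nonzero rollup payload.
GAS_PER_NONZERO_BYTE_TODAY = 16
GAS_PER_BYTE_EIP4488 = 3

payload_bytes = 10_000
print(payload_bytes * GAS_PER_NONZERO_BYTE_TODAY)  # 160000 gas today
print(payload_bytes * GAS_PER_BYTE_EIP4488)        # 30000 gas, about 1/5.3
```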
Although EIP-4488 reduces rollups' high gas fees, it is just a short-term solution that does not help with the future data sharding upgrade. EIP-4844, by contrast, creates a new transaction type that would lower rollup transaction fees immediately once rolled out, while leaving no further execution-layer work needed for future data sharding.
EIP-4444 would need to be implemented alongside EIP-4488 to handle the resulting state growth, where ~3TB of data would be generated every year. EIP-4844, on the other hand, does not need to wait for EIP-4444, since its built-in pruning strategy handles this natively. Still, EIP-4844 and EIP-4488 are not an either-or choice: EIP-4488 could be introduced first to cut rollup gas fees in the short term, followed by EIP-4844 to pave the road for future data sharding.
So far, both EIP-4488 and EIP-4844 are in draft status, meaning they are incomplete, likely to change, and not recommended for general use. Since almost all effort is focused on the merge right now, we strongly anticipate this work will be done after the merge, in 2023 at the earliest.
Special thanks to 0xbitbear and pedro for their feedback and review.
NOTES AND DISCLAIMERS:
This document and the information contained herein are for educational and informational purposes only and do not constitute, and should not be construed as, an offer to sell, or a solicitation of an offer to buy, any securities or related financial instruments. Responses to any inquiry that may involve the rendering of personalized investment advice or effecting or attempting to effect transactions in securities will not be made absent compliance with applicable laws or regulations (including broker dealer, investment adviser or applicable agent or representative registration requirements), or applicable exemptions or exclusions therefrom.
This document, including the information contained herein may not be copied, reproduced, republished, posted, transmitted, distributed, disseminated or disclosed, in whole or in part, to any other person in any way without the prior written consent of Smarti Labs Management, L.P. (together with its affiliates, “Smrti”). By accepting this document, you agree that you will comply with these restrictions and acknowledge that your compliance is a material inducement to Smrti providing this document to you.
This document contains information and views as of the date indicated and such information and views are subject to change without notice. Smrti has no duty or obligation to update the information contained herein. Further, Smrti makes no representation, and it should not be assumed, that past investment performance is an indication of future results. Moreover, wherever there is the potential for profit there is also the possibility of loss.
Certain information contained herein concerning economic trends and performance is based on or derived from information provided by independent third-party sources. Smrti believes that such information is accurate and that the sources from which it has been obtained are reliable; however, it cannot guarantee the accuracy of such information and has not independently verified the accuracy or completeness of such information or the assumptions on which such information is based. Moreover, independent third-party sources cited in these materials are not making any representations or warranties regarding any information attributed to them and shall have no liability in connection with the use of such information in these materials.
©2022 Smarti Labs Management, L.P.