Ethereum's Ultimate Scaling Explanation and Directory

Scalability

Ethereum has been massively successful as an open network and computing platform where software and application developers can collaborate and innovate quickly, easily, and without having to request permission. The Ethereum network has become especially popular for DeFi (decentralized finance), NFTs, DAOs (decentralized autonomous organizations), and trading. But all of this use of the Ethereum network has led to congestion, high user fees, and surging electricity consumption.

Ethereum is a Layer-1 (L1) blockchain currently in the midst of a 5+ year upgrade to satisfy future global demand while also improving security and decentralization. This is the much-anticipated Ethereum 2.0/Eth2 upgrade. However, that terminology is being phased out. Now, (at least for the time being) it’s best to think of Ethereum in two parts: the Ethereum consensus layer and the Ethereum execution layer.

Eth1 → execution layer
Eth2 → consensus layer
Execution layer + consensus layer = Ethereum

Why the change?
Previously, the Ethereum roadmap was planned in sequential stages, which gave rise to names like “Eth1,” “Eth1.x,” and “Eth2.” That plan has since been altered, making the terms Eth1 and Eth2 no longer relevant.

The old naming scheme suggested two misconceptions—namely that “Eth1 comes first, and Eth2 only after” and that “Eth1 will cease to exist once Eth2 exists.” In reality, post-Merge, the chains and their data will be seamlessly joined together. With Ethereum’s next major upgrade, The Merge (~Q2 2022), the consensus layer (previously Eth2) will be merged with the execution layer (previously Eth1), creating just one Ethereum again. Instead of referring to the chains as Eth1 or Eth2, the community has shifted to calling them the “execution” chain and the “consensus” chain, respectively. The execution chain encompasses all the state (data) associated with the user layer (dApps, account balances, tokens, etc.).

Consensus encompasses the Proof of Stake consensus mechanisms. This “base layer” is entirely focused on consensus and data availability. In a post-Merge environment, both of these layers coexist together.

So, what is this big upgrade? What is “The Merge”?
The Merge is the term used for when Ethereum switches from Proof-of-Work (PoW) to Proof-of-Stake (PoS) consensus. This is slated to occur in Q2 2022 and will bring with it many benefits that were not possible with PoW.

PoS removes the energy consumption so often cited in the mainstream media. While PoW is not inherently a bad thing, the world is highly critical of its energy use, and with the transition to PoS, Ethereum will have eliminated this one enormous criticism. Ethereum core developers estimate that Ethereum’s energy use will drop by roughly 99%. Without the need for so much physical mining hardware and infrastructure, Ethereum can become a more energy-efficient, geographically distributed, and nimble blockchain.

Post-Merge, Ethereum’s consensus layer will get rid of energy-intensive miners and replace them with validators that run ordinary software on consumer-grade hardware. Rather than using energy-intensive computation to introduce randomness into the network (PoW), PoS selects validators pseudo-randomly, allowing it to operate using far less energy.
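
To make the contrast concrete, here is a minimal, illustrative Python sketch of seed-based proposer selection—the kind of cheap pseudo-random draw PoS relies on instead of a hashing race. The function names and seed are hypothetical; real Ethereum derives its randomness from RANDAO and uses a more elaborate shuffling algorithm.

```python
import hashlib

# Illustrative sketch only: pick a block proposer pseudo-randomly from a
# validator set using a shared seed, instead of a PoW hashing race.
# Real Ethereum uses RANDAO-derived randomness and per-epoch shuffling;
# the names and seed below are hypothetical.

def select_proposer(validators: list[str], seed: bytes, slot: int) -> str:
    # Mix the seed with the slot number so each slot gets a fresh draw.
    digest = hashlib.sha256(seed + slot.to_bytes(8, "big")).digest()
    return validators[int.from_bytes(digest, "big") % len(validators)]

validators = [f"validator_{i}" for i in range(16)]
seed = hashlib.sha256(b"randao_mix_example").digest()
for slot in range(3):
    print(slot, select_proposer(validators, seed, slot))
```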

Additionally, PoS is a prerequisite for sharding, another critical Ethereum protocol change that will separate the chain into many concurrent threads (discussed more below).

Finally, the PoS upgrade will reduce Ethereum’s inflation rate from ~3.5% to near zero. Thanks to the implementation of EIP-1559 and its fee-burning mechanism in mid-2021, Ethereum’s net issuance is expected to become deflationary once the Merge is released. EIP-1559 changed Ethereum’s fee structure so that transaction fees now consist of a base fee and a tip. The base fee is set by the protocol and adjusts every block based on network activity; it no longer goes to miners but is instead burned. The tip is set by the market (and can be zero in times of little congestion) and goes to the miners. In the ~6 months since being implemented, Ethereum’s burn mechanism has removed ~1.25M ETH, or ~$5B!
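
As a rough sketch of the mechanics just described, the snippet below models EIP-1559’s base-fee update rule (the base fee moves by at most 1/8, i.e. 12.5%, per block depending on how full the previous block was relative to its gas target) and the burn/tip split. This is a simplified illustration that omits edge cases in the actual specification.

```python
# Simplified model of EIP-1559 fee mechanics (illustration only).

BASE_FEE_MAX_CHANGE_DENOMINATOR = 8  # caps base-fee movement at 12.5%/block

def next_base_fee(base_fee: int, gas_used: int, gas_target: int) -> int:
    # Full blocks push the base fee up; empty blocks push it down.
    delta = base_fee * (gas_used - gas_target) // (
        gas_target * BASE_FEE_MAX_CHANGE_DENOMINATOR
    )
    return base_fee + delta

def fee_split(gas_used: int, base_fee: int, tip: int) -> tuple[int, int]:
    burned = gas_used * base_fee  # removed from circulation forever
    to_producer = gas_used * tip  # the only part the miner/validator keeps
    return burned, to_producer

base_fee = 100 * 10**9  # 100 gwei, in wei per gas
print(next_base_fee(base_fee, gas_used=30_000_000, gas_target=15_000_000))  # +12.5%
print(next_base_fee(base_fee, gas_used=0, gas_target=15_000_000))           # -12.5%
print(fee_split(21_000, base_fee, tip=2 * 10**9))  # simple ETH transfer
```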

The base-fee burn, coupled with lower validator rewards in PoS and ETH locked up in staking, should result in net-negative issuance and a shrinking circulating supply. Researchers estimate the supply equilibrium will eventually be between ~27-50 million ETH.

All of these big changes are being made in an effort to provide increased scalability for the Ethereum chain which, since 2020, has regularly experienced periods of congestion and high network fees. The Merge, although important, is just one step in an enormous transformation for Ethereum. Below is the latest update to the roadmap (as of Q4 2021).

Proof of Stake

Under the coming Proof-of-Stake Ethereum (part of the upgrade formerly branded Ethereum 2.0), the computers doing the confirmation work are called “validators” rather than “miners.” Anyone is eligible to become a validator after acquiring 32 ether (ETH). A validator puts those ETH at risk (or “stakes” them) as a guarantee of good behavior, as it were. Qualified validators (those that have “staked” their 32 ETH) are then chosen (pseudo)randomly to confirm transactions. Note that it is also possible to stake with less than 32 ETH through third-party pools and service providers, which reduces the barrier to participating in the network and earning rewards.

In the staking model, there is no advantage to having more computational or electric power because validators are chosen randomly. Therefore, Proof of Stake eliminates the Proof-of-Work arms race for more electricity and computing power. 

But what compels a Proof of Stake validator to do their job correctly? If a chosen validator erroneously confirms a transaction or colludes with other validators to confirm transactions falsely, their staked ETH will be taken (“slashed”) and their validator reputation tarnished. If a validator confirms transactions correctly (along with other validators, until a consensus threshold is reached), they are then rewarded with more ETH. Good behavior rewarded, bad behavior decisively punished. 

The Beacon Chain, which launched in December 2020, is the center of Ethereum’s new PoS consensus mechanism. As the focal point of the PoS network, it’s responsible for the liveness, veracity, and consensus of the Ethereum network. Future shards (discussed below) will all connect back to the Beacon Chain, beginning with just four shards and possibly growing to 1,000+. The Beacon Chain will provide the foundation for hundreds of thousands of validators distributed across thousands of nodes globally. It’ll organize validators into committees and apply the consensus rules that govern the network.

How will all of this play out? If the dramatic drop in energy consumption emerges with Proof of Stake, then Ethereum should be immune from a criticism that will likely continue to be leveled at Bitcoin and its Proof-of-Work system.

Sustainable Scaling and Growth

Blockchains like Bitcoin and Ethereum strive for maximum decentralization and censorship-resistance while remaining totally open and inclusive networks. However, they also want to scale to accommodate billions of users. As they stand right now, their limited capacity to process transactions at the base layer (~7 and ~20 TPS, respectively) is in direct opposition to achieving that goal.

The question is: “What is the best method of scaling a blockchain?” Nearly every new “next generation” blockchain since 2016 has boasted sky-high transactions per second (TPS) as a selling point. However, TPS is not the sole metric by which to compare blockchain scaling. Generally, the higher the TPS, the higher the cost (financial and computational) to run the network. Given this, the question arises: are these new “next-generation” blockchains actually scaling, or simply increasing TPS while shrinking the network in other regards?

The primary means by which to accomplish sustainable scaling are minimizing the hardware requirements needed to participate in the network and, also, ensuring the state of the network (data) does not balloon to unsustainable levels.

Network nodes are what enforce the rules of the chain and ensure no one is cheating the system. Therefore, having a robust, geographically dispersed, and anti-fragile network of nodes is ideal for the decentralization and security of the network. To attain this, the costs to run a node (hardware, bandwidth, energy, and storage) should be as low as possible. This gives the greatest number of people the option to join the network if they so choose. Keeping costs low ensures no one is priced out and the network is not controlled solely by a wealthy elite few.

The other variable to consider is state growth, i.e. how quickly the blockchain’s data and computational load grow. Full nodes store the network’s entire history from genesis and must be able to validate the entirety of the network’s state. Blockchains that scale by simply increasing block space and throughput per unit of time (e.g. Binance Smart Chain and EOS) also greatly increase their state growth. Those chains are short-term solutions that lead to long-term unsustainable networks.

Blockchains like Solana, which are designed for greater TPS via specialized hardware, also run into state-growth and centralization issues. To be fair, Solana did introduce some new technological innovations to improve sequencing, like Proof of History and a parallel execution environment. However, like the “Ethereum killers” of the 2017 era, this design is not scalable or sustainable long-term. Solana already boasts some of the most expensive and specialized hardware requirements of any top-20 cryptocurrency, and as Solana transactions and price increase, the hardware costs to run a node, be a validator, and process transactions also increase.


Hardware requirements

  • Bitcoin¹: 350GB HDD disk space, 5 Mbit/s connection, 1GB RAM, CPU >1 Ghz. Number of nodes: ~10,000
  • Ethereum²: 500GB+ SSD disk space, 25 Mbit/s connection, 4–8GB RAM, CPU 2–4 cores. Number of nodes: ~6,000
  • Solana³: 1.5TB+ SSD disk space, 300 Mbit/s connection, 128GB RAM, CPU 12+ cores. Number of nodes: ~1,200

Below is empirical data from cryptocurrency and cybersecurity expert Jameson Lopp’s 2020 and 2021 node sync tests. The table compares the time it takes to sync a full node of Bitcoin vs. Ethereum vs. Solana on an average consumer-grade PC.

Table 1. Blockchain throughput and node-sync comparison

In the Ethereum ecosystem, serving as a validator on the Beacon Chain requires staking 32 ETH (~$120,000 in Q4 2021). While this sounds quite expensive and exclusionary on the surface, it removes the economies of scale that exist in PoW chains and in other top blockchains like Bitcoin, Avalanche, Solana, Binance Smart Chain, and Ripple; and with liquid staking services like Lido and RocketPool, users can participate with less than 32 ETH. Additionally, by replacing hash power with randomness and a capped gas limit/block size, Ethereum enables any user with average hardware to profitably run an Ethereum validator.

Ethereum’s state-growth situation is also better than most chains’ (thanks to its lower gas limit) but could become problematic given enough time. As time passes and Ethereum adoption increases, the state grows in size and complexity, which ultimately increases the total time it takes for a full node to sync and the hardware requirements needed to run one. Fortunately, Ethereum has been designed to scale with rollups (discussed below), which help reduce this state-growth issue. As discussed at length below, rollups handle enormous amounts of computation and transactions off-chain while submitting only a tiny “fingerprint” (proof) to the mainnet. This, coupled with sharding, enables exponential room for growth in a sustainable manner.

Scaling on-chain vs. off-chain

There are two ways a monolithic blockchain (a blockchain that provides its own security, executes its own transactions, and maintains its own data availability) can scale: increase capacity at the base layer (on-chain), or move transactions to a second layer (off-chain).

On-chain scaling techniques are upgrades made to a blockchain’s base layer to improve scalability. Ethereum’s long-term, on-chain scaling solution is called sharding, which splits the base layer into 64 chains whose shared security is ensured by the Beacon Chain.

Off-chain scaling refers to scaling solutions that use external execution layers (Layer 2s) rather than the base layer. These Layer 2s, or “L2s” are secondary layers that sit on top of the base layer, inheriting the mainchain’s security while providing more transactional capacity for the blockchain overall.

Ethereum is pursuing both off-chain and on-chain scaling strategies.

“Ethereum 2.0” (remember, this term is deprecated!) is the response to these challenges. It’s a massive upgrade to the entire Ethereum ecosystem, designed to accommodate continued growth and an increasing workload, consume less electric power in its verification process, and be more secure against attacks. In essence, the Ethereum upgrade will make the network more scalable, sustainable, and secure. That is why Ethereum is changing.

Any human enterprise that is highly successful early on must quickly address how to do more to keep up with demand. This is a good problem to have, but not an easy one to solve, because scaling up often challenges the core values that made the enterprise successful.

Sustainable Scaling

Demand for Ethereum transactions and smart contracts has skyrocketed over the last ~2 years. In 2021 alone, the number of DeFi users increased from ~150k to ~2M, while gas fees grew 16 times faster. As of Q1 2022, the Ethereum mainnet routinely facilitates the transfer of tens of billions of dollars of value daily, with over $100B currently deposited in DeFi smart contracts. And right now, an Ethereum full node’s size is growing by over 50% per year.

There are currently close to 200 million unique Ethereum addresses, but if Ethereum is ever to realize its goal of becoming a truly global, decentralized settlement layer, it must find a way to efficiently store 1,000x the data. Storing data more efficiently lowers compute requirements for nodes; thus, more people can run nodes, reducing centralization risk.

As of Q1 2022, Ethereum can only process ~25 transactions per second (TPS) due to its design that is optimized for decentralization and security. Without the ability to process more transactions, congestion on the network forces users to pay more to have their transactions executed. This has led to extremely high (>$40) transaction fees for users and is due to the high demand for limited block space on the Ethereum blockchain. Essentially, block space is the commodity that users, creators, and builders consume, making it the pulse of all cryptocurrency networks.
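
For intuition, the ~25 TPS figure falls out of simple arithmetic on block parameters. The numbers below are illustrative assumptions (a DeFi-heavy average transaction, not the protocol’s exact constants at any given moment):

```python
# Back-of-the-envelope TPS estimate from assumed block parameters.
gas_limit_per_block = 30_000_000  # assumed block gas limit
avg_gas_per_tx = 100_000          # assumed average tx in a DeFi-heavy mix
block_time_seconds = 13           # assumed average block time

tps = gas_limit_per_block / avg_gas_per_tx / block_time_seconds
print(f"~{tps:.0f} TPS")  # ~23 TPS under these assumptions
```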

High network fees are a product of how blockchains process transactions. There is a cost associated with a global, decentralized, censorship-resistant financial settlement layer! For a transaction to be executed, all of the nodes across the decentralized network must agree. All nodes on the network keep a full copy of the transactions to validate the transactions on the network.

Ethereum’s ability to process transactions is (partially) constrained by the amount of computing power, bandwidth, and storage on the network. The scalability trilemma is a well-known issue among all blockchains.

The scalability trilemma, illustrated. Credits: Vitalik Buterin

A blockchain can achieve two of these traits but at the expense of the third. Many alternative layer 1 (L1) chains have chosen to sacrifice decentralization for scalability and security. However, it’s important to remember why decentralization is important. It provides the chain anti-fragility, robustness, reliability, and censorship resistance.

The goal is to increase the number of transactions while retaining sufficient decentralization. So what decentralization sacrifices (tradeoffs) have other smart-contract L1s made? Typically one of two. The first is to increase the requirements to run a node, relying on fewer, more high-powered machines; this reduces the number of people who can participate in network consensus by pricing them out. Obviously, a network that can only be verified by those with X dollars of computing budget is not an ideal, permissionless system. To use a crude analogy, it would be like making it harder for the average person to vote in an election.

Generally speaking, other blockchains’ efforts (outside of Ethereum) to increase TPS have focused on one or more of the following:

  • Speeding up consensus (allowing nodes to agree on the order of transactions faster),
  • Increasing block sizes (more data per block), and
  • Decreasing block times (more blocks per minute)

Implementing one or more of these has generally been the approach of most next-generation “Ethereum Killers”: Binance Smart Chain, Avalanche, Solana, etc. And while it has improved TPS by nearly 100x, these chains can still (mostly) only achieve TPS in the single-digit thousands (<10k). That simply will not suffice should these projects reach global adoption, meaning that for these platforms to accommodate growth, they will all have to resort to increasing hardware requirements within their systems. The computational “TPS ceiling” of modern monolithic chains is being reached.

“Monolithic” refers to a blockchain in which every node performs all of the blockchain’s functions: execution (computing transactions), consensus (ordering transactions and agreeing on the state), and data availability (guaranteeing blocks are fully published to the network). This design is unpacked further in the section on monolithic vs. modular architecture below.

The other tradeoff normally conceded is using fewer nodes to achieve consensus faster. This makes the chain more vulnerable and centralized: it’s far easier to corrupt or destroy 10 nodes all in one location than 10,000 spread across the globe.

Although often discussed as such, blockchain scalability does not just pertain to TPS. Many L1s, like Binance Smart Chain (BSC), currently boast high TPS numbers but suffer from “chain bloat” and ever-increasing hardware requirements just to keep the chain running. L1s must be able to process more transactions without creating more problems down the road. A node in a technically sustainable blockchain has to do three things:

  • Keep up with the tip of the chain (most recent block) while syncing with other nodes.
  • Be able to sync from genesis in a reasonable time (days as opposed to weeks).
  • Avoid state bloat.

Requirement 1 above is a physical limitation based on computing power (RAM, CPU, etc.) and bandwidth. These are bottlenecks for every node which means there are upper, finite limits to how far you can push the network.

One way for Ethereum to increase its workload could be to increase the size of the computers participating in the Ethereum network (participating computers are called “nodes”). But relying on fewer, larger, more expensive computers in the network is clearly a form of centralization. Having a small number of bigger players involved in maintaining Ethereum is not Ethereum’s goal.

Fewer computers in the network also creates security issues. A hacker attacking just a few computers, or a single central computer will have an easier time than attacking a huge number of computers all in agreement about the data they are using and creating. Just as with Bitcoin, more computers participating in the Ethereum network enhance the security and permanence of the data on the Ethereum blockchain.

Sharding

After the switch to Proof of Stake, sharding is the next significant hard-fork upgrade on Ethereum’s roadmap. Just like the Merge, the sharding plan has evolved over time and may continue to change between now and implementation.

With blockchains, there are two main approaches to scaling:

  • Vertically, i.e. making the network’s nodes more powerful
  • Horizontally, i.e. adding more nodes (with no improvements to their performance)

Because Ethereum prioritizes decentralization and security above all else, it must be designed so that everyone has the option and ability to run their own node. This means the first option, vertical scaling—which typically leads to more expensive and onerous hardware requirements—is not an option. Ethereum must keep the requirements to run a node low so that it’s open to nearly everyone.

Horizontal scaling is where sharding fits in. Sharding is the partitioning of a database (or blockchain) into subsections. Rather than building layers atop one another (e.g. L2s or Bitcoin’s Lightning Network), sharding scales out or horizontally without a hierarchy or layered structure. Doing so does not create more burden for the average user.

Original diagram by Hsiao-wei Wang, design by Quantstamp

Shards will be divided among nodes so that every individual node is doing less work. But collectively, all of the necessary work is getting done—and more quickly. More than one node will process each individual data unit, but no single node has to process all of the data anymore.

In Ethereum’s vision of a sharded chain, committees of validators are (pseudo)randomly selected and assigned to specific shards. This means they are only responsible for processing and validating transactions in those specific shards, not the entirety of the network.

The randomness of the validator selection process ensures it’s (nearly) impossible for a nefarious actor to successfully attack the network.
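
A minimal sketch of the idea, assuming a toy validator set: committees are derived by shuffling with an unpredictable seed, so no validator can choose or predict its shard. (Real Ethereum uses a swap-or-not shuffle over RANDAO output; everything named here is hypothetical.)

```python
import hashlib
import random

def assign_committees(validators: list[str], num_shards: int, seed: bytes):
    # Deterministic shuffle given the seed: every honest node computes
    # the same assignment, but no validator can pick its own shard.
    rng = random.Random(seed)
    shuffled = validators[:]
    rng.shuffle(shuffled)
    committees = {shard: [] for shard in range(num_shards)}
    for i, validator in enumerate(shuffled):
        committees[i % num_shards].append(validator)  # round-robin split
    return committees

validators = [f"validator_{i}" for i in range(12)]
seed = hashlib.sha256(b"epoch_42_randao").digest()
for shard, committee in assign_committees(validators, 4, seed).items():
    print(f"shard {shard}: {committee}")
```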

Initially, prior to the breakthrough in rollups, Ethereum’s plan was to do sharded computation. However, now with rollups providing the much-needed network scalability, sharding will focus on data availability to provide throughput for the rollups. This is because the bottleneck for rollup scalability is data availability capacity rather than execution capacity. Sharding will significantly increase the on-chain data capacity and help create room for even more and even cheaper rollups.

In a sense, shards will serve as data storage “buckets” for new network data storage demand from rollups. This enables tremendous scalability gains on the rollup execution layer. Just as significant, shards will also help avoid putting overly-onerous demand on full nodes, allowing the network to maintain decentralization.

Sharding will be released in a multi-step process to provide immediate data availability for rollups before releasing the ultimate but more complex vision. A small subset of data shards (four) will initially be released to keep complexity low, i.e. a slow, controlled rollout.

Earlier, we outlined one reason why Ethereum transaction fees were so high was due to all nodes in the network having to process all transactions and reach consensus. Sharding is the answer to the question, “What if each node did not have to process every operation at the same time?” What if, instead, the network was divided into subsections (shards), that operated semi-independently until finally reaching consensus through a central hub (Beacon Chain)?

Shard A could process one batch of transactions while Shard B processes another. This would effectively double the transaction throughput of a blockchain, since the limit is now what can be processed by two nodes at the same time. If we can split a blockchain into many different sections, then we can increase its throughput by many multiples.

Ethereum will be split into different shards, each one independently processing transactions. Sharding is often referred to as a Layer 1 scaling solution because it’s implemented at the base-level protocol of Ethereum itself.

Layer 2 Solutions

Source: @yasminekarimi_

Layer 2 is a broad, catch-all term used to describe scaling solutions built on top of an existing L1 blockchain. The primary advantage of using an L2 solution is that the main chain remains untouched and unaffected by what is built atop it. Any issues that happen “up the stack” (e.g. on another layer) will not compromise the base layer; rather, L2s exist as off-chain software that interacts with smart contracts on Ethereum. Because of this, the L1 serves as the security and consensus layer that cryptographically secures and stores transaction data on the immutable blockchain ledger.

Source: Coin98

L2s can extend the utility of Ethereum outwards, letting users have increased scalability off of the blockchain that can still refer back to the main chain if necessary.

Ethereum, as we know it today, won’t scale at the base layer: the Ethereum L1 is designed to remain a highly decentralized, global settlement layer above all else. Instead, Ethereum’s web of L2s will be responsible for scaling Ethereum and serving as its execution layer. These layers will absorb much of the existing value on Ethereum mainnet, plus future inflows as Ethereum adoption grows. It’s important to understand that Ethereum’s web of L2s is a marketplace of independent projects competing with one another to help scale Ethereum.

https://l2beat.com/


Layers built “on top” of Ethereum do not always have the same security guarantee as on-chain operations, but they can still be sufficiently secure to be useful—especially if the user is comfortable with the tradeoff for low-value transactions.

Ethereum L2s allow builders to tailor their tooling to their needs, meaning they can decide for themselves where their product sits on the scalability trilemma. Tradeoffs between speed, finality, and transaction cost can be tuned, just as in competing alternative L1s. For the most valuable transactions, users can choose the main chain, where security and censorship resistance are highest; for low-value transactions, a gaming sidechain may suffice. L2s let users maintain control without compromising the underlying blockchain, preserving decentralization and finality.

Side chains

In the context of Ethereum, sidechains are separate, Ethereum-compatible blockchains. Sidechains can be independent EVM-compatible blockchains, but more often they are application-specific blockchains catering to Ethereum users and use cases, like Polygon or Ronin.

EVM stands for the Ethereum Virtual Machine, the runtime environment that processes every transaction and smart contract on Ethereum. It is a Turing-complete virtual machine that is limited by the amount of gas provided by users.

Sidechains design themselves to be EVM-compatible so developers can essentially copy and paste code and easily interoperate with Ethereum and all of its infrastructure, including wallets, block explorers, and more. Projects like Binance Smart Chain, Avalanche, Tron, Celo, and Fantom are all examples of competing L1 EVM chains, each with its own token.

Users send their L1 funds by way of a cross-chain transfer (bridge) enabled by a two-way-peg (2WP) protocol that locks the assets on the L1 chain, then creates/mints an equal amount on the sidechain. This means that ETH bridged to a sidechain is simply an IOU, and the user no longer has the security guarantees of the Ethereum chain. Sidechains have their own security models, which are often far less mature and far less secure than Ethereum’s. If a user has funds on a sidechain and the network goes down (as has happened on Solana), there is nothing the user can do; their funds are stuck until the chain is brought back online. Additionally, if the bridge between two chains is compromised (again, as on Solana), users could stand to lose funds.
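
A toy sketch of the lock-and-mint flow makes the IOU point concrete. This is wildly simplified and entirely hypothetical: a real bridge involves contracts on both chains plus relayers or a multisig, which is exactly where the extra trust assumptions creep in.

```python
# Toy two-way-peg (lock-and-mint) bridge; illustration only.

class TwoWayPegBridge:
    def __init__(self):
        self.locked_on_l1 = {}    # user -> ETH locked in the L1 contract
        self.minted_on_side = {}  # user -> IOU balance minted on the sidechain

    def deposit(self, user: str, amount: float) -> None:
        # Lock on L1, then mint an equal IOU balance on the sidechain.
        self.locked_on_l1[user] = self.locked_on_l1.get(user, 0) + amount
        self.minted_on_side[user] = self.minted_on_side.get(user, 0) + amount

    def withdraw(self, user: str, amount: float) -> None:
        # Burn the sidechain IOU, then release the locked L1 funds.
        # If the sidechain halts, this step is impossible and the user's
        # L1 funds sit stranded behind the bridge.
        assert self.minted_on_side.get(user, 0) >= amount, "insufficient balance"
        self.minted_on_side[user] -= amount
        self.locked_on_l1[user] -= amount

bridge = TwoWayPegBridge()
bridge.deposit("alice", 1.5)
bridge.withdraw("alice", 0.5)
print(bridge.locked_on_l1, bridge.minted_on_side)  # {'alice': 1.0} {'alice': 1.0}
```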

However, rollups contain immutable “escape hatches” that ensure a user can always exit back to mainnet, even if the rollup network is offline. Users can always manually submit transactions to the rollup’s mainnet Ethereum contract as needed, including exiting the rollup with their funds.

Some sidechains are purposely built to be complementary to Ethereum and offload some specific Ethereum use cases onto themselves. Because of this, sidechains increase the scalability of Ethereum by serving as external execution layers for L1 Ethereum. However, it's important to remember that sidechains do not provide the same amount of security as L1 Ethereum.

Polygon

Technically, Polygon is its own blockchain (with its own token: MATIC) but was built to become Ethereum’s internet of blockchains. Polygon provides the architecture that enables developers to create custom, application-specific chains that leverage Ethereum’s security, similar to the Cosmos hub-and-spoke model. It provides an interoperable layer that can bridge many different projects and scaling solutions, such as ZK-rollups, Optimistic rollups, and sidechains (discussed below), which supports the growth and modular expansion of the Ethereum ecosystem.

Since Polygon is a separate chain, it must be secured by a separate Proof of Stake consensus mechanism in which validators stake MATIC. However, MATIC is staked in smart contracts on the Ethereum main chain. Polygon connects to Ethereum through a bridge using a lock-and-mint mechanism: users deposit funds into the bridge, which locks them in a smart contract on Ethereum and mints the equivalent amount on Polygon. Polygon also maintains a secure relationship with the Ethereum main chain through periodic checkpointing, posting state changes to Ethereum, leading the Polygon team to characterize it as a “commit chain.” To withdraw funds, users must go back through the bridge.

The bridge (and its funds) is secured by a 5-of-8 multisig scheme, making it far more centralized than the Ethereum main chain. This centralization factor should be considered when weighing the cost savings of transacting on such a chain.

Additionally, because Polygon is still a relatively new blockchain, it is not as battle-tested as Ethereum or Bitcoin, and new bugs and issues are not out of the question. Just in December 2021, a whitehat hacker discovered a critical vulnerability in Polygon that left ~9B MATIC at risk. The hacker reported the issue, and it has since been fixed with no loss of funds. However, the incident serves as a reminder of just how irreplaceable the security of Ethereum’s L1 is, and of the risk involved in these nascent scaling solutions.

However, as of Q1 2022, Polygon’s Proof of Stake (PoS) sidechain is an industry leader, with ~$5 billion in total value locked (TVL) across more than 100 DeFi and gaming applications.

https://polygon.leslug.com/dashboard.html

Polygon History and Roadmap

In Q2 2021, Polygon released the Polygon SDK, developer tooling for launching new blockchains as rollups or their own chain, and Avail, a data availability innovation for Polygon chains. They also have a $1 billion fund for ZK-based solutions and research.

In August 2021, Polygon acquired Hermez, a zk-SNARK scaling solution, for $250M. The existing Hermez tokens were rolled into MATIC after the acquisition; MATIC is now used to let network coordinators earn the right to process transactions in a permissionless auction system. Currently, Hermez is not EVM-compatible, but according to its roadmap, a V2 is in the works to make it fully EVM-compatible.

In November 2021, Polygon announced the purchase of another scaling project, Polygon Miden, a ZK-rollup implementation using zk-STARKs. Unlike Hermez, Miden aims to guarantee “Ethereum-compatibility” by directly compiling Solidity-written smart contracts into the Miden VM’s native language. It is led by Bobbin Threadbare, a former core ZK researcher at Facebook. Additionally, in contrast to Starkware’s StarkNet, Miden is fully open-source from the start.

Finally, in December 2021, Polygon proved yet again that it has big plans in the L2 and rollup space. Polygon made another acquisition, this time purchasing the ZK-rollup project Mir Protocol for $400 million and rebranding it Polygon Zero. Polygon claims Mir Protocol contains the “fastest” ZK-proof technology, called plonky2, “a recursive SNARK that is 100x faster than existing alternatives and natively compatible with Ethereum.”

Plasma

A Plasma chain is an L2 scaling solution that utilizes fraud proofs, like Optimistic rollups, yet keeps data availability off-chain (unlike Optimistic rollups). Plasma was one of the earliest areas of L2 research but failed to gain much traction, especially as the advantages of rollups became evident.

Plasma chains and dAppChains are child chains tethered to the Ethereum root chain. Plasma received significant attention following the release of the corresponding paper by Joseph Poon and Vitalik Buterin in August 2017. Nonetheless, the mounting practical challenges of implementing Plasma have become a significant concern.

Plasma enables the creation of an unlimited number of transaction-processing child chains (Ethereum mainchain clones) using smart contracts and Merkle tree technology. It is an attempt to create a more flexible state channel that enables many-to-many asset transfers with complex logic, as opposed to just simple one-to-one transfers. Like state channels, Plasma is completely separate from the Ethereum L1.

One downside of Plasma is the long withdrawal period for users who want to remove their funds from Layer 2. Another is the “data availability problem”: since Plasma child chains are entirely disconnected from the main chain, game-theoretic issues arise when the Plasma chain and the base-layer chain try to sync up on the state of truth. The main chain can never know with 100% certainty the state of any Plasma chain, and thus cannot export its security to any child Plasma chain.

Rollups

Rollups are a relatively new class of L2 solutions being implemented on Ethereum that enable exponential scalability gains without sacrificing security. The primary innovation of rollups is that they move computation off-chain while storing only the bare minimum of transaction data on-chain. The rollup chain handles all of the expensive, computationally dense data processing, enabling exponential growth in Ethereum’s ability to execute transactions. In its simplest form, the rollup chain executes Ethereum transactions on its own chain, “rolls” them up into one batch, and Ethereum receives and stores the results. However, to do so, the Ethereum mainnet needs some way to verify that the transactions that happened off-chain are valid. So how does Ethereum determine that data submitted from a rollup is valid and not submitted by a bad actor?

The answer is cryptographic proofs, like validity proofs for ZK-rollups (ZKRU) and fraud proofs for Optimistic rollups (OR). Each rollup deploys a set of smart contracts on L1 Ethereum that are responsible for processing deposits/withdrawals and verifying the submitted proofs.

In the ZK case, the rollup generates a cryptographic proof (called a “SNARK”) and submits only the proof to the base layer. The “batch” that’s rolled up is periodically posted to mainnet Ethereum and contains the net outcome of many different transactions as they occurred on the rollup layer. This data is verified and updated every time the L2 advances its state; L2 execution and L1 data therefore update in lockstep.

This removes the burden of data from Layer 1 while still making Layer 2 transaction data available on Layer 1 for validation. Rollups move Ethereum from doing everything on-chain (monolithic) to the mainnet serving as the settlement layer for off-chain L2 interactions (modular design).
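
The settlement pattern can be sketched in a few lines. The contract stores one small commitment (a state root) per batch, and `verify_proof` is a stand-in for whatever check the rollup flavor uses (a validity proof for ZK-rollups; the absence of a successful fraud challenge for Optimistic rollups). All names here are hypothetical.

```python
# Minimal sketch of an L1 rollup settlement contract; illustration only.

def verify_proof(old_root: str, new_root: str, batch: bytes, proof: bytes) -> bool:
    # Placeholder: a ZK-rollup verifies a validity proof here; an
    # Optimistic rollup accepts first and relies on fraud proofs instead.
    return True

class RollupContract:
    def __init__(self, genesis_root: str):
        self.state_roots = [genesis_root]  # one tiny commitment per batch

    def submit_batch(self, new_root: str, batch: bytes, proof: bytes) -> None:
        # The heavy execution happened off-chain; the L1 only checks the
        # proof and keeps the compressed batch available as calldata.
        if not verify_proof(self.state_roots[-1], new_root, batch, proof):
            raise ValueError("bad batch rejected")
        self.state_roots.append(new_root)

rollup = RollupContract(genesis_root="0xgenesis")
rollup.submit_batch("0xroot1", batch=b"compressed txs...", proof=b"proof")
print(rollup.state_roots)
```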

Week in Ethereum

A rollup needs orders of magnitude fewer validators than the L1 to maintain its security: as long as a single honest validator does its job, the network remains secure. Rollups can be thought of as branches off the main trunk of Ethereum that increase Ethereum’s computational surface area.

From a purely scalability perspective, ZKRs are more performant than ORs because they compress data more efficiently, meaning they have a smaller “batch size” when submitting to L1. Optimism, an OR implementation, posts data to the Ethereum L1 for every transaction; some ZKR implementations, like dYdX, only post to mainnet to reflect updated account balances. Because of this approach, dYdX interacts with L1 only ~20% as much as Optimism, equating to roughly a 90% reduction in fees.

With rollups, Ethereum can go from ~25 to 3,000+ TPS without sacrificing security. What makes rollups such an attractive scalability technology is the fact that users' funds cannot be stolen or censored (as is the case on some sidechains) and that no one can prevent users from exiting the rollup whenever they wish. Users can always access data on L1 to safely exit the rollup chain with their funds.

In previous sections, EVM-compatible sidechains and their potential benefits to Ethereum were discussed. Similarly, other alternative L1 blockchains like Polkadot, Solana, Cosmos, and NEAR could theoretically become rollups to Ethereum if they created a bridge adhering to the rollup technical design pattern and posted their data to Ethereum. This is a plausible future if alternative L1s fail to distinguish themselves and rollups on Ethereum become cheaper than competing chains.

Despite being extremely nascent, rollups are already significantly reducing fees for many Ethereum users. l2fees.info frequently shows Optimism and Arbitrum providing fees that are ~3-8 times lower than the Ethereum base layer itself, and ZK-rollups, which have better data compression and can avoid including signatures, have fees ~40-100 times lower than the base layer.

https://l2fees.info/

Scalability is improved because rollups rely only minimally on Layer-1 storage: the only limiting factor for a rollup’s scalability is how much data the main chain can hold. This is why shards will complement rollups nicely, as they increase Ethereum’s data availability (think 64 data centers vs. just one). Once sharding is live (2023 or later), there will be an almost 20x increase in capacity, allowing rollups to operate cheaper and faster.

Rollups offer similar capabilities to Plasma but without suffering from the “data availability problem” (discussed in later sections). Layer 2 rollups batch users’ transactions and post them on-chain via calldata. Posting the calldata on-chain is what allows Ethereum and its robust, decentralized network of nodes to “check the work” done off-chain. Instead of redoing the computation, the calldata enables the Ethereum mainnet to quickly and easily verify that everything done off-chain was valid and accept the state changes, i.e. double-check the work. It also enables users to follow their transactions on a block explorer like Etherscan. Additionally, the availability of data on the Ethereum L1 means any computation completed on a rollup can be redone by the Ethereum base layer if needed. Without sufficient data availability, transaction execution becomes an opaque black box that cannot be audited by the L1.

Rollups are already improving!

A new Ethereum upgrade targeting rollups (EIP-4488) is currently being considered by the Ethereum community; it would reduce the cost of posting this calldata to mainnet. Rollups offer many-to-many transactions, smart-contract capabilities, and significantly reduced total L1 block-space requirements, all while extending Ethereum’s security assurances to the L2.
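
To see why cheaper calldata matters so much to rollups, here is a rough, hedged calculation. Calldata currently costs 16 gas per non-zero byte (per EIP-2028), and EIP-4488 proposed cutting that to 3 gas per byte; the per-transaction byte count below is an assumption for a well-compressed rollup transfer, not a measured figure.

```python
# Illustrative L1 data cost per rollup transaction under two calldata prices.
bytes_per_rollup_tx = 12  # assumed size of one compressed rollup transfer
gas_price_gwei = 100      # assumed L1 gas price

for label, gas_per_byte in [("EIP-2028 (today)", 16), ("EIP-4488 (proposed)", 3)]:
    gas = bytes_per_rollup_tx * gas_per_byte
    print(f"{label}: {gas} gas -> {gas * gas_price_gwei} gwei of data cost per tx")
```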

Technically speaking, a rollup is a single contract on the L1 that holds all funds and a cryptographic commitment to a “sidechain” state. The sidechain/rollup is maintained by a small set of operators off-chain without significantly impacting L1 storage. The reason this “small set” does not introduce centralization risk is that if a rollup’s block producer attempts to act nefariously, the Ethereum L1 will simply reject the “bad batch” and financially penalize the bad actor.

Why is Ethereum embracing a “rollup-centric” future?

Sharding, discussed earlier, is L1 scaling and is still years away from being fully implemented. It’s far more complex and risky than rollups because it alters the actual base layer, meaning the ~$400 billion network is exposed to any bugs or miscalculations in the rollout of sharding. Meanwhile, rollups are available now and possibly even more powerful. Optimistic rollups are a promising existing scaling technology that can be incorporated (and expanded upon) quickly. They offer developers an easy way to migrate their existing dApps to the rollup chain with a reasonable security/scalability tradeoff. This alleviates the Ethereum congestion and high fees that already exist.

Additionally, the Ethereum community realized that rollups can provide immediate value now and only improve once sharding is implemented. This means Ethereum scaling development is hyper-focused on rollups (plus some plasma and channels) as a scaling strategy for the near to mid-term future.

In all likelihood, in the future, Ethereum users will primarily conduct their activity on a L2 rather than the L1 due to the cheap transaction fees and security guarantees. Meanwhile, the Ethereum mainnet will become a settlement layer for the L2 ecosystem and major ETH whales.

https://www.theblockcrypto.com/data/scaling-solutions/scaling-overview

Flavors of Rollups

There are two (primary) types of rollups: ZK-rollups (ZKRU) and Optimistic rollups (OR).

ZK-rollups are (theoretically) faster and more efficient than Optimistic rollups, but they suffer from friction and compatibility issues when migrating smart contracts to Layer 2. ZK-rollups submit transactions to the mainnet using a cryptographic verification method called a zero-knowledge proof—more specifically, a zk-SNARK (Zero-Knowledge Succinct Non-interactive Argument of Knowledge). zk-SNARKs allow someone to prove they possess a particular piece of information without actually revealing its contents. Popularized by Zcash for enabling anonymous transactions, zero-knowledge-proof technology provides scaling efficiencies for state transitions on the rollup chain that are then submitted to the main chain. This zk-SNARK approach is also described as using validity proofs, i.e. highly complex cryptography ensuring all L2 transactions are valid and correct. The proof is submitted to and checked by an on-chain L1 contract.

While validity proofs are complex and expensive (relative to Optimistic fraud proofs), verification by the L1 is simple, making them—even still—cheaper than a regular L1 transaction. However, due to the complex computation involved in the validity proofs, special-purpose hardware may be needed to run a node, creating a centralizing effect and less open network.

Optimistic rollups, meanwhile, are not secured by cryptographic zero-knowledge validity proofs. Instead, ORs “optimistically” assume all transactions are valid but use dispute resolution, a withdrawal period, and cryptoeconomic incentives to maintain the integrity of the data. Essentially, it’s an “innocent until proven guilty” model with watchdogs in place.

Anyone may submit a rollup block. However, all other nodes can execute the same transactions, essentially “checking the work” of the submitter. Only one honest actor is needed to submit the fraud proof and challenge any questionable block. This means fraud proofs are not sent with every batch of transactions. Instead, they’re only used when an entity wants to dispute a transaction—e.g. attempt to prove whether there are any fraudulent transactions in a rollup batch.

Optimistic rollups sacrifice some scalability for increased compatibility with the Ethereum Virtual Machine. They run an EVM-compatible virtual machine called the Optimistic Virtual Machine (OVM), which removes the compatibility issues that exist in ZK-rollups and allows developers to easily deploy code and projects onto the secondary chain. This is extremely critical, as composability is paramount in the Ethereum ecosystem, especially in DeFi. On the other hand, there’s no cryptographic proof that the state transition submitted to the main chain is correct.
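
The two trust models can be contrasted in a small, hypothetical sketch: a ZK-rollup batch is final only if its validity proof checks out at submission time, while an Optimistic batch is accepted immediately and becomes final only after surviving the challenge window. The proof-checking functions below are placeholders, not real verifiers.

```python
import time

CHALLENGE_WINDOW = 7 * 24 * 3600  # ~one week, in seconds

def check_validity_proof(batch: bytes, proof: bytes) -> bool:
    return True  # placeholder for SNARK/STARK verification

def check_fraud_proof(batch: bytes, proof: bytes) -> bool:
    return False  # placeholder: True would mean fraud was demonstrated

class ZKRollup:
    def submit_batch(self, batch: bytes, validity_proof: bytes) -> bool:
        # Final immediately, but only if the math checks out.
        return check_validity_proof(batch, validity_proof)

class OptimisticRollup:
    def __init__(self):
        self.batches = []  # (batch, accepted_at)

    def submit_batch(self, batch: bytes) -> None:
        # Accepted "optimistically"; no proof required up front.
        self.batches.append((batch, time.time()))

    def challenge(self, batch: bytes, fraud_proof: bytes) -> bool:
        # Any single honest watcher can revert a bad batch in the window.
        return check_fraud_proof(batch, fraud_proof)

    def is_final(self, accepted_at: float) -> bool:
        return time.time() - accepted_at > CHALLENGE_WINDOW
```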

Monolithic vs Modular blockchain architecture

The traditional monolithic, do-it-all blockchain faces unavoidable limitations due to the inefficient nature of provably secure, decentralized consensus. These limitations lead to higher transaction costs for users as the chain gains adoption and use. Full stop.

This is because blocks and block space on the execution layer of a chain are scarce. There are only so many blocks that can be verified and added to the chain each second/minute. Once demand outstrips this finite resource, the only recourse users have left to ensure their transaction gets into a block (and executed) is to pay more than the market rate for transaction fees.

Monolithic refers to a blockchain in which every node performs all parts of the blockchain: execution, consensus, and data availability. Execution refers to the computation of transactions. It is the user-facing layer where transactions get executed. Consensus refers to ordering transactions and nodes coming to agreement on the state. Data availability guarantees blocks are fully published to the network. The consensus layer plus data availability guarantees all blockchain data is published and accessible for anyone.

Monolithic blockchains—essentially all blockchains pre-2018—tackle all of these functions on their own. This means potential scalability is ultimately constrained by what the weakest node in the system can process. Modular blockchains, by contrast, separate these layers, dividing the total work among different specialized layers/nodes so that, on net, more total throughput is produced than any individual node could have processed, i.e. a divide-and-conquer approach.

Delphi Digital

Because they are limited by node performance, monolithic chains typically scale by moving to more specialized, highly performant nodes. With this approach, however, the performance gains are eventually offset by the shrinking of network participation and governance to a select few capable of buying and running such sophisticated nodes.

Modular chains also promise improvements over monolithic chains in terms of network fees. Within a monolithic chain, all transactions compete for the same blockspace regardless of the rest of the chain’s activity; this has been observed in the past with specific NFT releases clogging the entire Ethereum mainnet. A modular approach, however, can optimize for different applications and thus price resources more efficiently. NFT mints could occur on their own rollup and then be batched to the L1. One live example today (as of Q1 2022) is the DEX dYdX (discussed more below), which runs on Starkware’s StarkEx, handling trades off-chain to increase throughput by ~600x and cut transaction costs by ~80%.

Like everything in computer science and blockchains, every improvement comes with a new tradeoff. Implementing a modular approach introduces a new issue known as data availability: the ability for transaction data to be made available for nodes to download. Remember, rollups execute transactions off-chain, “roll them up,” and submit the batch back to the L1. They can take this scalability approach because they always make the rollup data “available” on-chain (with a proof), inheriting the security of the mainnet. With rollups, then, the issue is no longer how many transactions can be executed, but how to provide sufficient DA on the L1 to ensure the security of the rollups.

Data Availability

Pre-2021 (roughly speaking), data availability (DA) was not a concern for most blockchains for two reasons: one, most blockchains did not have enough usage to warrant concern, and two, the monolithic approach meant that each (full) node downloaded the entire block to check availability. However, as discussed previously, this approach has its limitations, and thus new solutions like light clients, rollups, and the modular approach emerged.

As a reminder, full nodes download and validate every transaction that has ever occurred on the chain since genesis. Light nodes (typically) only check block headers, meaning they are “lightweight” nodes that require fewer computing resources than a full node. This makes them more egalitarian: they are cheaper and typically easier to run, helping to further decentralize the network. However, because light nodes follow whatever the majority commits as a valid transaction (rather than verifying for themselves), they must have a way to ensure that valid blocks are being published.
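
How can a light client trust an inclusion claim while downloading only headers? A Merkle proof: given a block header’s transaction root, a short chain of sibling hashes proves a transaction is in the block. Below is a minimal sketch, using a toy four-leaf tree and SHA-256 in place of Ethereum’s actual hashing and trie structure.

```python
import hashlib

# Sketch of light-client inclusion checking: verify a transaction is in
# a block using only the header's Merkle root plus a short proof.

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def verify_merkle_proof(leaf: bytes, proof: list[tuple[bytes, str]], root: bytes) -> bool:
    node = h(leaf)
    for sibling, side in proof:  # 'side' says which side the sibling is on
        node = h(sibling + node) if side == "left" else h(node + sibling)
    return node == root

# Build a tiny 4-leaf tree to demonstrate.
leaves = [h(tx) for tx in [b"tx_a", b"tx_b", b"tx_c", b"tx_d"]]
l01, l23 = h(leaves[0] + leaves[1]), h(leaves[2] + leaves[3])
root = h(l01 + l23)

# Proof that tx_c is included: sibling leaf d (right), then subtree l01 (left).
proof = [(leaves[3], "right"), (l01, "left")]
print(verify_merkle_proof(b"tx_c", proof, root))  # True
```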

Data availability is critical in this regard: as long as all the execution data is made available on the mainnet, the chain does not require every node to execute every transaction in order to validate transactions and reach consensus.

Because rollups can cryptographically guarantee (via a proof) that their transactions are valid, those transactions can be executed by just a single node and posted to the L1, where they can be cross-checked by L1 nodes. All L1 nodes download the rollup’s data, but only a portion of them execute the transactions and construct the rollup state, thereby reducing overall resource consumption. Additionally, the data within a batch is highly compressed before being submitted to the L1, further decreasing the resource burden. This is how rollups help trustlessly scale a blockchain without requiring an increase in node resources.

However, a rollup’s TPS depends on its L1’s data capacity for throughput. The more data capacity on the L1, the higher the (theoretical) throughput for rollups. Once the L1 runs out of data capacity for the rollup, the limit has been reached and no additional transactions can be processed. The limiting factor for a blockchain’s scalability, therefore, is now its data availability.

To address this issue, several new specialized data availability chains have launched or are being built. These chains serve solely as a DA/shared-security layer for rollups by maximizing DA capacity. Examples such as Celestia and Polygon Avail provide only high data capacity, while the rollups built on top simply optimize execution.

In summary, data availability is extremely important for new modular blockchains for two reasons. First, adequate DA is required to ensure rollup sequencers’ submissions can be cross-checked and challenged if needed. Second, DA is now the bottleneck to a blockchain’s scalability. Maximizing the DA on an L1 is critical for rollups to reach their full potential.

Deep dive into different rollup implementations

Optimistic rollups (OR)

It’s important to remember that while rollup technology can be quite technical, at its core, an Optimistic rollup chain is simply a smart contract on mainnet Ethereum plus some number of block producers that watch for transactions, batch them together into one string of data (the rollup), and post it back to Ethereum mainnet with a signature attesting to its validity.

An optimistic rollup moves the heavy computation and data storage that would be normally executed on L1 Ethereum off-chain to a new rollup network. Only a small portion of each batch of transactions is ultimately recorded on the mainnet, creating a much smaller computational impact on the L1. Since only one small data portion is registered on L1 and the majority of computation is handled off-chain, fees can be greatly reduced (compared to if the entirety of the transactions were executed on L1).

By default, Optimistic rollups “optimistically” assume submissions are valid. However, that’s not always the case. To combat this seemingly reckless optimism, checks and balances are put into place. There’s a period of time after each submission during which anyone can identify and dispute transactions they believe are incorrect or fraudulent. If the whistleblower can mathematically prove that fraud occurred by submitting the correct fraud proof, the rollup will revert the fraudulent transactions, penalize the fraudster, and even reward the watcher.

The ability to post L2 transaction data to the L1 is critical because it enables everyone to reconstruct the current and historical state of the rollup chain. Many other scaling technologies do not have this ability and therefore are less powerful to a user who has been wronged.

The drawback to this system is the delay when users move funds between the rollup and Ethereum and for transactions to be considered final. Because “watchers” need time to detect fraud, users’ funds typically take a week to be withdrawn and available for further use; ORs can only be considered safe with a ~one-week challenge window. These dispute windows are expected to come down over time and, in fact, some third-party solutions already exist to remove this delay entirely. To provide instant withdrawals, a third party that constantly verifies the chain will offer to buy the user’s withdrawal for a small fee and then pay the user on Ethereum L1. In this scenario, the user gets their funds immediately, and the third party earns a fee for waiting out finalization. These solutions are available for most ORs; specific implementations, as well as other bridge solutions, are discussed in later sections.

Unlike the sidechains discussed previously, the breakthrough for rollups is increased scalability without sacrificing user security. OR chains are secured by the Ethereum L1. Users could be inconvenienced if a dispute or fraud situation arises, but their funds are always safe. Sidechains, like Polygon for example, are secured by a separate validator set that may be (and definitely is) less secure than the Ethereum network. Additionally, the bridges that connect sidechains to Ethereum are typically highly centralized around just a few individuals; if fewer than 10 people are compromised, all funds could be vulnerable.

When discussing rollups and any L2, users also have to consider how long it takes for their transaction on the rollup to be submitted and considered final on the L1. This is known as time to finality. When it comes to rollups, ZKRs post very complex proofs that can cost from 500k-5M gas, whereas OR submissions are ~50k gas, or 10-100x smaller. Therefore, ORs can provide faster L1 finality than ZKRUs (for the same cost).

One final advantage of ORs over ZKRUs is the OVM (Optimistic Virtual Machine). The OVM enables (almost) anything that is possible on Ethereum mainnet to be possible in the OR. Smart contracts, and therefore dApps, are easily portable to the OR because the OVM supports code written in Solidity.

In November 2021, Optimism PBC announced “EVM equivalence”: complete compliance with Ethereum’s technical specification. This means everything that currently exists and works on the Ethereum stack can now easily be integrated with Optimism’s OR, which should drive tremendous network effects to Optimism, as it’s now trivial for existing projects to launch on the OR. By reducing this friction, developers and users alike can enjoy the benefits of ORs.

ORs have another advantage—this time over plasma and state channels. ORs have a simpler fraud-proofing procedure, one in which anyone can submit a dispute. All the data needed to submit a fraud proof is available on L1.

Arbitrum (by Offchain Labs) and Optimistic Ethereum (by Optimism) are the two primary OR projects on Ethereum. However, both implementations are still in their very early stages with centralized companies (mostly) responsible for their success or failure. Both have plans to decentralize over time, but any timeline estimate is simply a guess.

Both Arbitrum and Optimism launched in 2021, albeit both with self-imposed limits and restrictions in case any bugs were encountered. Over time, more battle-tested and less constricted versions will be released, further reducing fees for users. Currently, neither Optimism nor Arbitrum One has implemented data compression, which, when fully released, could reduce fees by ~10x. Optimism took a big step forward by launching its latest upgrade, OVM 2.0, and Arbitrum’s next upgrade, ‘Arbitrum Nitro,’ promises to increase speed and reduce costs.

It’s estimated that once mature, optimistic rollups can offer anywhere from a 10–100x improvement in scalability and, at full scale, can possibly reduce Ethereum transaction fees by ~50x.

However, as promising as rollup technology is, it’s still a very new technology not without risk. Arbitrum One, a specific kind of Optimistic rollup discussed later, experienced downtime for around 45 minutes in September 2021 when a bug caused a large burst of transactions to overload the system. Optimism (OΞ), another Optimistic rollup chain, also experienced a temporary outage (~one hour) in November 2021 in which its L2 transactions were halted. No funds were at risk during either issue (the beauty of L2s!), but processing new transactions was not possible, making them useless until the matter was resolved.

One obvious note is that both Optimism and Arbitrum lack native tokens. It’s not public knowledge whether either intends to eventually launch a token, but the general trend in the crypto industry would suggest so. Regardless, both have had to try and bootstrap their rollups without lucrative airdrops or incentive programs (yield farming). In an industry awash with 50%+ APY, five-figure airdrops, and 8-figure incentive programs/funds, rollups, thus far, have chosen to try and grow without a token, making adoption an uphill battle.

Pros/Cons of ORs Generally

Pros:

  • Increase in scalability of ~2,000 TPS, reducing transaction costs by >5x
  • Superior compatibility with Ethereum mainnet, less friction for developers to deploy projects (e.g. EVM equivalence), can create and ship faster than ZKRU
  • Flexibility in generalized computation (Turing-complete / EVM compatible)
  • All data is available on-chain (no need to trust off-chain data providers)
  • Computationally less expensive than ZKRU

Cons:

  • Fewer TPS when compared to ZK-rollups
  • Relies on crypto-economic incentives and “watchers” rather than mathematically-certain security (fraud proof vs validity proof)
  • Users (technically) need to wait 1+ week(s) for dispute period after a withdrawal from the rollup before being able to access funds

Additionally, ORs and their challenge period are susceptible to 51% attacks. In this scenario, the attacker would try to introduce “bad” transaction data into the rollup and then attempt to censor any attempts to challenge it during the challenge period. The attacker is ultimately trying to corrupt the state of the rollup (with fraudulent data for their own self-interest) and stop anyone from challenging the submission.

This is why an adequately lengthy withdrawal/challenge period (one to two weeks) is needed. An attacker may be able to censor or sneak a transaction through if the window were short enough, but the longer the window, the harder it is to fool the rest of the chain.
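A toy model makes the intuition concrete: if an attacker can censor any given L1 block with probability p, the chance that every block in an N-block window is censored (so no fraud proof ever lands) is p^N, which collapses toward zero as the window grows. This is my own simplified model, not a formal security analysis:

```typescript
// Probability that an attacker censors ALL blocks in the challenge window,
// assuming independent per-block censorship with probability p.
function censorSuccessProbability(p: number, windowBlocks: number): number {
  return Math.pow(p, windowBlocks);
}

const p = 0.99; // even an attacker censoring 99% of blocks...
console.log(censorSuccessProbability(p, 100));    // ~0.366 for a short window
console.log(censorSuccessProbability(p, 50_400)); // ~1 week of 12s blocks: ~0
```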

Optimism OΞ

Optimism is a Public Benefit Corporation (PBC) that created Optimistic Ethereum (OΞ), a leading Optimistic rollup on Ethereum. Optimism was formerly known as the Plasma Group but has since changed its name and even raised funds from the likes of a16z. The project aims to create a seamless L1-to-L2 developer experience by enabling (nearly) “copy and paste” code from one layer to the next, thanks to its OVM. OVM stands for Optimistic Virtual Machine and is the virtual machine that executes all transactions in the rollup.

As mentioned, the Optimism team is working towards “EVM equivalence” with the next upgrade, Optimism 2.0, which enables the OVM to be equivalent to the EVM in all technical aspects. Developer tools like smart contract libraries, Hardhat, and Solidity tooling will work natively on OVM 2.0. Additionally, dApps currently live on mainnet Ethereum can be ported over to the L2 with no changes necessary! Even better, Optimism can (eventually) cut the Ethereum L1 gas fees for these very same dApps by a factor of ~100 and increase transactional throughput by ~200x.

Optimism launched with a controlled rollout where a whitelisted group of dApps was approved to launch, most notably Uniswap, Synthetix, and 1inch. This limited release hampered Optimism adoption early on as it had onboarded only 6 dApps compared to ~60 for Arbitrum. However, on December 16th, 2021, the Optimism team removed the developer whitelist in favor of a full, open system which allows all dApps to begin building on Optimism if they so choose. Top projects like Balancer and Tornado Cash have already moved over.

How OΞ works

In the case of Optimism, transactions are sent to the rollup chain where they are received, reviewed, and executed by Sequencers. Sequencers take on this responsibility because they are rewarded for properly executing transactions. However, if a Sequencer acts maliciously or attempts to push through invalid transactions, they will be punished by having their staked funds slashed.

As mentioned previously, ORs rely on fraud proofs, meaning at least one honest actor must identify a fraudulent transaction and challenge it; otherwise, it will be accepted on the chain. If anyone suspects fraud, they challenge it with the adjudicator contract on L1 mainnet. The adjudicator contract can verify the validity of the Sequencer’s results via the Optimistic Virtual Machine (OVM). If, indeed, the Sequencer’s submission is invalid, a fraud proof is generated and the Sequencer’s funds are slashed. A portion of the slashed funds is awarded to the whistleblower. This is what gives users the incentive to monitor transactions and detect potentially fraudulent blocks.

Pros

  • EVM-equivalence and better developer experience (existing tooling and programming languages)
  • Easier dApp migration for existing dApps (L1 to L2), can create and ship faster than ZKRU
  • All data available on-chain
  • Computationally less expensive than ZKRU

Cons

  • Less theoretical/maximal throughput vs ZKRU
  • Centralized sequencer
  • Longer withdrawal period
  • Fraud proof mechanism not published yet

Resources
Block explorer - Optimistic Etherscan
Native bridge - Optimism Gateway
User guide
Live applications portal
Dashboard
ByBit (centralized exchange) bridge/on-ramp
Add Optimistic Ethereum Network via MetaMask

Arbitrum

Arbitrum is an optimistic rollup L2 built by the Offchain Labs team. The currently-live implementation is called Arbitrum One and utilizes fraud proofs, on-chain calldata availability, a ~1 day withdrawal period, and a special type of node called a sequencer. Offchain Labs currently operates Arbitrum's sequencer, which has the ability to control the ordering of transactions. This early-stage centralization was mentioned previously and is not solely applicable to Arbitrum.

Arbitrum boasts a shorter one-day withdrawal period compared to Optimism’s 1-2 weeks, but the tradeoff is that disputes on Arbitrum take longer to resolve. So, the majority of the time, Arbitrum withdrawals are quick and easy, but on the rare occasion that a transaction is challenged, Arbitrum has some added complexities when compared to Optimism. To withdraw from Arbitrum, a user first submits the withdrawal transaction on the rollup. Once the transaction is finalized on L1 (~1-7 days), the user’s funds are free to claim with another L1 transaction (requiring a Merkle proof).
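That final claim step is essentially a Merkle-inclusion check: the user proves their withdrawal is part of a finalized batch’s Merkle root. Here’s a simplified sketch; the hash choice and pairing rule are illustrative, not Arbitrum’s exact scheme:

```typescript
import { createHash } from "crypto";

// Sketch of the Merkle-proof check behind a withdrawal claim: the user shows
// that their withdrawal leaf hashes up to the batch's committed Merkle root.
function sha256(data: string): string {
  return createHash("sha256").update(data).digest("hex");
}

function verifyMerkleProof(leaf: string, proof: string[], root: string): boolean {
  let node = sha256(leaf);
  for (const sibling of proof) {
    // Sort the pair so verification doesn't depend on left/right position.
    node = node < sibling ? sha256(node + sibling) : sha256(sibling + node);
  }
  return node === root;
}
```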

While both are optimistic rollups, Arbitrum has some key differences from its counterpart, Optimism. One critical difference is Optimism OVM 2.0 is EVM-equivalent, running directly inside the EVM, while Arbitrum One is only EVM-compatible. This reduces code complexity and audit surface for Optimism. Arbitrum’s AVM lacks EVM-equivalence because it’s consciously optimized for more compact fraud proofs, but at the expense of implementation complexity.

Both are still far easier for developers to work with than ZK-rollups, but Optimism’s EVM equivalence removes nearly all friction.

Another critical difference is that Arbitrum puts less data on the L1: Optimism requires that a state hash be posted after every transaction, whereas Arbitrum executes several transactions before requiring the state hash to be posted. This can account for up to a ~4x difference in on-chain storage.

Arbitrum One is currently the L2 network with the highest TVL. For an overview of the Arbitrum ecosystem of applications, see the Arbitrum Portal. Binance, Huobi, Crypto.com, and FTX have opened withdrawals to Arbitrum, becoming some of the first exchanges to open an on-ramp to Ethereum’s layer 2. Additionally, Arbitrum has partnered with Chainlink nodes and oracles to provide its validation services. This is a positive as Chainlink is already utilized in hundreds of Ethereum L1 projects and will bring the same security and composability to L2.

While Arbitrum is off to a hot start, it is not without its issues. In January 2022, the Arbitrum rollup network came to a halt. Offchain Labs released a post-mortem explaining the issue was due to the main sequencer experiencing a hardware failure during a software upgrade. This issue cascaded down the system, preventing even the redundancy measures in place from working. Eventually, the issue was corrected and the network returned to full functionality. During the downtime, no funds were at risk (thank you rollup!) but no transactions could be executed, including deposits and withdrawals.

Pros

  • EVM compatibility
  • ~1 day withdrawal period (under normal circumstances)
  • No whitelisted rollout, enabling more dApps to be deployed early on
  • Non-custodial and Ethereum wallet compatible

Cons

  • Currently uses centralized sequencer which carries front-running risk (has priority in submitting transaction batches and ordering transactions).
  • Less composability with EVM than Optimism
  • Complexity switching between rollups and sidechains while guaranteeing high security

Key tools

Other Optimistic Rollups

Boba
Boba is another L2 Ethereum Optimistic Rollup scaling solution built by the OMG Foundation; it originally began as a fork of Optimism and the OVM (Optimistic Virtual Machine). Boba offers fast withdrawals backed by community-owned liquidity pools (similar to other bridge solutions discussed below), reducing the challenge period from ~7 days to minutes, while incentivizing Liquidity Providers (LPs) with yield-farming opportunities. The team plans to completely rewrite the codebase for their upcoming v3, which is set to be rolled out on mainnet in the coming months. Boba is production-ready with a functioning bridge and a native DEX called OolongSwap.

The BOBA token is used for governance of the Boba DAO and protocol staking. In order to participate in governance votes, a user must stake their BOBA, ensuring one vote for every staked token. Token holders can vote themselves or delegate their votes.

In addition to participating in governance, staked BOBA accrues a portion of the transaction fees earned by the network, although the specifics surrounding the actual fee-sharing split are still up for a vote. Until then, staking rewards are being supplemented by the BOBA treasury.

BOBA can be bought and/or traded on several exchanges including FTX, Poloniex, Bitfinex, and others.

https://boba.network/token/

Resources

Metis
Metis is an L2 scaling solution on Ethereum that is best described as an EVM-compatible, sharded Optimistic rollup that features a Dynamic Bond Threshold staking design (a mouthful, right?). It launched its mainnet, Andromeda, in November 2021. The METIS token is used to pay transaction fees on the rollup, to stake to become a sequencer, and to incentivize fraud challenges.

The Dynamic Bond Threshold staking design dictates that Sequencers cannot sequence blocks if their stake is worth less than the amount they are sequencing. Transactions are blocked until an eligible sequencer is found, effectively capping the value any single sequencer can process.
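In code, the rule is a one-line eligibility check. This sketch is purely illustrative of the concept, not Metis’s implementation:

```typescript
// Sketch of the Dynamic Bond Threshold rule described above: a sequencer may
// only sequence a batch if its stake covers the value being sequenced.
interface Sequencer {
  address: string;
  stake: number; // value of the sequencer's bond
}

function canSequence(seq: Sequencer, batchValue: number): boolean {
  return seq.stake >= batchValue;
}

// A batch worth 1M is blocked until a sequencer staking >= 1M is found.
const seqs: Sequencer[] = [
  { address: "0xaa...", stake: 250_000 },
  { address: "0xbb...", stake: 2_000_000 },
];
const eligible = seqs.find((s) => canSequence(s, 1_000_000)); // 0xbb...
```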

The Metis Virtual Machine (MVM) contains various decentralized autonomous companies (DACs) with their own separate, application-specific computational and storage layers. Additionally, a network of sequencers is randomly selected from the DACs to roll up and submit transactions back to the L1. These parallel sequencers enable higher scalability compared to the single-sequencer approach of other ORs.

https://twitter.com/ahboyash/status/1475798380114694146

Despite the separate execution layers, liquidity between the shards can flow frictionlessly due to the MVM cross-layer communication protocol. The goal is to scale horizontally with distinct, application-specific execution layers while also preserving the security of Ethereum via fraud proof submission to mainnet.

Finally, Metis will also implement a fraud detection system called the Ranger system. In essence, the Ranger system is a network of nodes that monitor sequencers for bad behavior. The Rangers constantly check the fraud proofs for validity. This active “watchdog” system results in a shorter challenge period since fraud detection happens continuously rather than submissions simply being assumed valid. The Ranger system is slated to be integrated into the Andromeda network in Q2-Q3 2022.

With the launch of the Ranger System comes the beginning of mining rewards for Rangers. Five million METIS tokens have been reserved for these incentives and are designed to be distributed over nine years.

The token distribution is as follows:

  • Founding Team: 7%
  • MetisLab Foundation: 4%
  • Advisors: 1.5%
  • Investors: 18%
  • Airdrop: 6%
  • Liquidity Reserve: 6%
  • Community Development: 9%

The remaining 47.7% of the lifetime token supply will be minted over the next ten years.
Total supply after 10 years: 10M

https://messari.io/article/optimistic-about-metis

Pros

  • Parallel sequencers
  • Withdrawal period could (theoretically) take minutes (rather than days)
  • Plans to inherit Optimism’s EVM Equivalence

Resources

ZK-rollups (ZKRU)

ZK-rollups (ZKRU) are separate blockchain networks with very few specialized nodes (called Provers). Sounds like other alternative L1 chains, right? However, ZKRUs have a cryptographic proof that links them to Ethereum’s mainnet. This link prevents the rollup from censoring or stealing funds while maintaining the immutable properties of the Ethereum L1. This proof is called a validity proof; it ensures the validity of the off-chain transactions, makes them instantly verifiable, and removes the need for a withdrawal/challenge period.

ZKRUs improve scalability by moving computation and storage off of the L1, where computation is expensive. They separate transaction execution from consensus and data availability. To submit transactions onto the consensus layer, ZKRUs cryptographically prove every batch of executions on the rollup and send only the proof to the L1. Zero-knowledge cryptographic proofs reduce the computing and storage resources needed to validate a block by reducing the amount of data held in a transaction; zero knowledge of the entire data set is needed. However, because every transaction in the rollup still stores its input data (calldata) on the L1, all L1 nodes have the information they need to verify the transactions if needed.

Remember, rollups batch together large amounts of off-chain transactions, compress them into a single transaction, and eventually find their way to the Ethereum L1. Because ZKRUs do not assume all transactions are valid, validity proofs must be sent with every ZK-rollup batch to cryptographically prove the validity of transactions. While a bit more technically cumbersome, this means that transactions are final once they are validated by the settlement layer.

To describe the process in detail (a toy code sketch follows below):

  • A highly-compressed batch of transactions is combined with the current state root
  • Combination is sent to an off-chain Prover
  • Prover computes the transactions, generating a validity proof of the results
  • Prover then sends this to an on-chain Verifier (Ethereum nodes)
  • Verifier verifies the validity proof
  • Smart contract on Ethereum’s L1 that maintains the state of the Rollup is updated to the new state
Source: EatTheBlocks
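Here’s a toy version of that pipeline in TypeScript. The “proof” is just a placeholder string standing in for a SNARK/STARK; the point is who computes what and who verifies what:

```typescript
// Toy end-to-end flow of the steps above (no real cryptography involved).
type Tx = { from: string; to: string; amount: number };

function prover(batch: Tx[], oldStateRoot: string) {
  // Off-chain: execute all transactions, derive the new state root,
  // and produce a validity proof covering the whole batch.
  const newStateRoot = `state(${oldStateRoot}+${batch.length}txs)`;
  const proof = `proof-of(${oldStateRoot}->${newStateRoot})`;
  return { newStateRoot, proof };
}

function verifierContract(oldRoot: string, newRoot: string, proof: string): boolean {
  // On-chain: verifying the proof is cheap compared to re-executing the batch.
  return proof === `proof-of(${oldRoot}->${newRoot})`;
}

let stateRoot = "genesis";
const { newStateRoot, proof } = prover([{ from: "a", to: "b", amount: 1 }], stateRoot);
if (verifierContract(stateRoot, newStateRoot, proof)) stateRoot = newStateRoot;
```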

In traditional L1 blockchains, more transactions lead to more expensive fees as limited block space fills up. However, for a ZKRU, the opposite is true! ZK-rollups benefit from economies of scale, meaning more transactions make the network cheaper to use. This is counterintuitive for a typical blockchain but is possible because the costs are amortized across all participants. Verifying the validity proof on Ethereum has a roughly fixed cost and, as the number of transactions included in a rollup batch grows, the cost to verify grows far more slowly than the number of transactions added. Therefore, the more users, the more the cost is spread around.
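The arithmetic is easy to see with made-up (but order-of-magnitude plausible) numbers: a roughly fixed proof-verification cost divided across the batch, plus a small per-transaction calldata cost:

```typescript
// Illustrative fee amortization: the verification cost is roughly fixed per
// batch, so each user's share shrinks as the batch grows. Numbers invented.
const verifyGas = 500_000;    // fixed cost to verify one batch's proof on L1
const perTxCalldataGas = 300; // marginal calldata cost per rollup transaction

function gasPerUser(batchSize: number): number {
  return verifyGas / batchSize + perTxCalldataGas;
}

console.log(gasPerUser(10));     // 50,300 gas per user
console.log(gasPerUser(1_000));  // 800 gas per user
console.log(gasPerUser(10_000)); // 350 gas per user
```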

On mainnet Ethereum, each transaction is executed by every node. With ZKRU, only one node (a Prover) needs to actually do the computation and then produce a zero-knowledge proof of it. As mentioned prior, Provers are a select set of nodes in charge of computing all of the transactions and aggregating them into a zk-SNARK. Because of the complicated computations involved, Provers run on dedicated hardware, making them more centralized and opaque. The good news is that, because of the validity proof, it’s mathematically impossible for them to submit fraudulent data. The only trust involved is trust in cryptography/mathematics.

Because of this approach, ZK-rollup users can be assured that:

  • Validators cannot alter the state and/or steal user funds.
  • Users’ funds are always available/retrievable from the ZKRU smart contract
  • No one is needed to constantly monitor transactions/blocks in order to prevent fraud.

Then, every other node (Verifier) simply verifies this proof instead of having to do the full computation. The proof allows each node to verify the provided state is valid. Verifying the proof is much less intensive than actually computing it, which is where the scalability improvements are created. Therefore, Verifiers don’t need special high-end hardware to verify the proof. They simply use their existing hardware, creating no new stress or burden for current nodes. Only state transitions and a small amount of calldata need to be processed and stored by the nodes. With this system, nodes can easily agree on a common state and it puts the burden of execution on a single node instead of the whole network.

Beyond simply the scaling benefits, ZKRUs are doubly-impressive due to their economic security guarantees. In a ZKRU, rollup operators must submit a Zero-Knowledge Proof (SNARK) for every state transition that then gets verified on the mainchain. This SNARK proves, by using world-class cryptography and math, that the batch of transactions (and their net state changes) are valid. Thus, it's impossible for the operators to commit an invalid or manipulated state. It is strictly not possible for operators to steal the funds or corrupt the rollup state.

ZKRU relies on the censorship-resistance of L1 only for its liveness, not for its security. There is no need for anyone to monitor the ZKR. After a block is verified, user funds are always guaranteed to be eventually retrievable, even if operators refuse to cooperate.

Thus, ZKRUs embody the original ideals of cryptocurrencies and the cypherpunks that created them. They remove the need for trusted parties and replace them with cryptography and game-theoretical incentive alignment.

Another benefit of ZKRUs is that SNARKs can prove all of the computation is correct without having to actually reveal the details of the transactions! Zero-knowledge technology allows someone to prove something while not revealing the contents of that information. For example, ZK-SNARKs enable Joe to verify Sally’s banking information using a zero-knowledge cryptographic proof instead of Sally revealing the confidential information to Joe.

Pros

  • Even greater scalability and transaction cost reduction benefits compared to Optimistic Rollups

  • Less data contained in each transaction increases throughput and scalability of layer 2

  • Does not require a fraud dispute window like in Optimistic Rollups, reducing withdrawal times from ~2 weeks to a few minutes

  • Enables privacy by default

Cons

  • The difficulty of computing zero-knowledge proofs will require data optimization to get maximum throughput

  • The initial setup of ZK-rollups promotes a centralized scheme

  • More difficult to initially build and integrate into the Ethereum network than Optimistic rollups

Source: Delphi Digital

Rollups you can try today:

The ZK-rollup ecosystem is nascent but growing, with multiple companies working on several implementations. Some prominent companies include Starkware, Matter Labs, Hermez, and Aztec.

Rollups in the near future

Starkware

StarkEx

Starkware is a ZK-rollup company that pioneered zero-knowledge-based rollups in 2018, launched StarkEx in 2020, and recently released StarkNet in November 2021. StarkEx is a ZK-rollup with less functionality than its StarkNet successor. StarkEx supports the ability for smart contracts to run any arbitrary logic for specific use cases like trading and NFTs while StarkNet enables more general use cases.

StarkEx is the first iteration “scalability engine” from Starkware that now supports two versions: a zkRollup mode and a validium mode. Validiums (discussed more in later sections) can be custom-tailored to offer superior performance for specific dApps and use cases like DeversiFi, ImmutableX, and Sorare.

https://starkware.co/

StarkNet

StarkNet is Starkware’s next iteration of a ZK-rollup and is the first ZK-rollup to feature general smart contracts on a fully composable network. Composability refers to the ability for applications to coordinate, build on top of one another, and interconnect—something for which StarkEx is not designed.

As another example of an Ethereum L2 ZKRU, StarkNet uses zero-knowledge proofs to achieve fast transaction times and hyper-scaling without compromising security. An alpha version of StarkNet was launched in November 2021 with limited capabilities to allow developers to begin building on top of the protocol. StarkNet is designed so that it benefits from economies of scale, i.e. the greater the number of transactions in a batch, the less gas each participant in the batch must pay.

Under the hood, StarkNet compresses thousands of transactions into a single validity proof called a ‘STARK’ (Scalable Transparent Argument of Knowledge), co-invented by Starkware President Eli Ben-Sasson, that is submitted to the Ethereum L1. Starkware’s STARK technology has two primary advantages over SNARKs (used by ZKRU competitor zkSync): 1) STARKs don’t require an initial trusted setup (like the one in zkSync v1), and 2) they are ~10x faster to compute than SNARKs. Because the computational effort necessary to verify STARK proofs is significantly less than actually proving the computation, StarkNet can increase Ethereum scalability by orders of magnitude.

https://twitter.com/sdyshi/status/1476041039232126978

StarkNet’s L2 node (sequencer) will execute every transaction and update the state to the Ethereum mainnet periodically. It's important to note that StarkNet transaction finality is tied to L1, meaning the L2 node must validate StarkNet and Ethereum simultaneously. StarkNet introduces a solution involving checkpoints to the Ethereum mainnet, enabling it to achieve effective finality on the rollup side very quickly. Therefore, all L2 nodes incorporate an L1 full node.

Additionally, since the state transitions are “STARK-approved” by the sequencer, it is mathematically/cryptographically impossible for fraudulent transactions to be accepted on mainnet Ethereum. This removes the need for any “challenge” period that exists in ORs. All the data needed to reconstruct the full StarkNet state is published on-chain.

Application deployment on StarkNet in the future will be permissionless so anybody can write smart contracts and publish them on the testnet using Cairo, the native programming language. However, currently, Starkware is in control of the sequencer and all of the transactions are verified by Starkware cloud servers. Thus, StarkNet is not currently a permissionless system. However, Starkware aims to create a decentralized sequencer set in the future.

SNARK Protocols:

  • Loopring
  • Polygon Hermez
  • ZKSync
  • ZKSwap

SNARK Pros

  • Smaller proof size
  • Smaller verification time
  • Bigger developer community and libraries (longer in the game)

SNARK Cons

  • Require trusted setup (honest participants needed)
  • Longer prover time
  • Not quantum-resistant

STARK Protocols:

  • Immutable X (StarkEx) - partnership with GameStop
  • DYDX (StarkEx)
  • Starknet - DEX/AMM StarkSwap Q1 2022 launch
  • Polygon Miden

STARK Pros

  • Quantum resistant
  • No trusted setup required
  • Vocal support from the Ethereum Foundation
  • More scalable in terms of computational speed

STARK Cons

  • Far larger proof size = more gas
  • Smaller developer community due to the nascency of STARKs

Starkware Roadmap

StarkNet utilizes a new programming language developed by the team at Starkware called Cairo. It’s a language for programming STARKs that achieves Turing-completeness. A breakthrough of the Cairo language is that it enables just one verifier to use a single proof to confirm the integrity of many different program executions. This has the effect of amortizing costs across separate dApps, e.g. a single proof that includes both dYdX trades and Sorare transactions.

Starkware has commented, generally, that their plan with the StarkNet rollout will follow a similar path to that of Optimism (OR): Launch the network with a single sequencer and a limited whitelist of dApps early on to control the launch and limit any risks. A list of projects building on StarkNet can be found here. Ultimately, Starkware hopes to grow the ecosystem into a Starknet “Universe” while also decentralizing the network, nodes, and infrastructure.

https://medium.com/starkware/fractal-scaling-from-l2-to-l3-7fe238ecfb4f

The Starkware team has also stated that while they do not currently have a token, it’s their aim to decentralize StarkNet in the future. Launching a governance token similar to many other projects is one way in which they could do so.

As of Q1 2022, despite much fanfare, StarkNet remains in its early alpha phase with much still to prove at scale. Early disruptions and issues with its gradual rollout are likely. Despite this, Starkware and OKEx announced a partnership in December 2021 designed to enable easy onboarding to StarkNet from OKEx sometime in 2022. Additionally, Argent, an Ethereum smart contract wallet, announced ‘Argent X’, the first wallet for StarkNet, in Q4 2021, and Aave currently has a governance proposal under consideration to launch on StarkNet.

Since its creation in 2018, Starkware has raised $111 million across three equity rounds and received a $12 million grant from the Ethereum Foundation. Not only is the project backed by heaps of money, but prominent crypto and financial figures including Pantera Capital, Sequoia, Founders Fund, DCVC, Paradigm, ConsenSys, Multicoin, Polychain, Vitalik Buterin, Naval Ravikant, and others have also invested in Starkware.

Starkware Pros 

  • Increased TPS compared to ORs (~9,000+ TPS on Ropsten testnet) 
  • Faster withdrawals (no challenge period), enabling better capital efficiency and liquidity
  • Volition (discussed below) unlocks even greater scalability gains for those that choose to make the trade-off on security

Starkware Cons

  • Developer UX and porting of dApps from L1 to L2 is more cumbersome and less friendly than OR options
  • Cairo language less popular among developers, meaning less talent pool to build on Starkware
  • With Starkware's Validium option, there's a technical challenge in solving the data availability problem. In particular, there are trade-offs between transaction latency, transaction cost, and making data available on-chain.

Important Links

Validium and Volitions

Validium’s mechanism is nearly identical to a ZK-rollup’s, with the only difference being that data availability in a ZK-rollup is on-chain, while Validium keeps it off-chain. This means ZK-rollups post data on the layer-one blockchain itself, while Validiums post validity proofs on-chain but keep the data on a separate network. This enables Validium to achieve considerably higher throughput than ZKRUs or ORs. By keeping data off-chain rather than on-chain, it reduces the cost of each transaction and increases the transactions per second (TPS).

https://twitter.com/sdyshi/status/1476245389837680640

By keeping data off-chain, Validiums also offer privacy benefits as users’ transaction and balance information is stored with the validium operator instead of publicly on the blockchain. However, because transaction data is not published on-chain, users are forced to trust an operator to make the data available when needed. This key difference makes Validiums more akin to a highly performant, custodial PoA system where Validium operators could freeze, but not steal, users' funds.

The trade-off for storing data off-chain is that it requires trust in a third party who could prevent users from accessing their balances. Starkware aims to solve this with a Data Availability Committee (DAC), a committee of eight independent members that each maintain their own copy of the transaction data. They are also required to keep this data available at all times. If an operator prevents a user from accessing their funds, a committee member can override the operator to confirm the user’s request if it is valid. Examples of where Validium is used: Loopring (LRC) and StarkEx.

Multiple projects launched Validiums using Starkware’s StarkEx platform, including:

Loopring

Loopring is a ZK-rollup Validium that was the first ZKRU DEX to deploy to the Ethereum mainnet in December 2019. As of Q1 2022, Loopring has over $350M in total locked value in the protocol and can achieve throughput ~1000x greater than Ethereum mainnet for trades.

Loopring is the only ZKRU protocol with its own smart wallet, Counterfactual Wallet, with direct fiat on-ramp to its L2. The wallet allows users to store their crypto assets, mint NFTs, trade, and conduct L2 payments. Loopring was also one of the first rollups to launch its own native token (LRC) during a 2018 airdrop.

One disadvantage of Loopring is its use of Groth16 vs zkSync’s PLONK (discussed below). The downside is that the Loopring protocol requires a new trusted setup for certain upgrades and protocol changes. PLONK requires just one trusted setup at genesis, and STARK solutions, like Starkware’s, require no trusted setup at all.

Volitions are a ZK-rollup and Validium hybrid solution that lets users choose data availability either on-chain or off-chain, i.e. either via Ethereum or through validiums.

Source: Starkware

Matter Labs/zkSync

zkSync v1

zkSync is an Ethereum ZK-rollup by Matter Labs, founded in 2018 by Alex Gluchowski. In June 2020, zkSync v1.0 was released on Ethereum mainnet, where users can deposit ETH onto the network and send payments to other zkSync accounts for much lower transaction fees. It is a standard L2 ZK-rollup scaling solution, in the sense that all funds are held by a smart contract on Ethereum mainnet, computation and storage are performed off-chain, and every batched/rollup block generates a zero-knowledge proof which is verified by L1.

However, because zkSync uses SNARKs (PLONKs specifically, discussed more below), it is slower than the STARK counterpart used by Starkware and is reliant on a trusted setup at genesis. That means the entirety of the zkSync ecosystem is dependent upon a trusted ceremony conducted in 2019. The good news is the system is provably secure if even just one participant in the ceremony was honest.

The ceremony included over 200 well-known and public crypto figures including Matter Labs, Vitalik Buterin, the Ethereum Foundation, Consensys, Argent, and many others. The trusted setup and the zkSync v1 protocol are secure if at least one participant was honest. Therefore, it’s likely this trusted setup is not an issue and was not compromised. Additionally, zkSync is unable to move or steal user funds.

At its core, zkSync consists simply of a server, a prover, and a verifier. The server and prover run on the L2 rollup chain while the verifier exists on mainnet Ethereum.

https://cryptoshine.medium.com/zksync-an-introduction-and-how-it-solves-the-ethereum-scaling-problem-b8688ec7eb7b

The server is responsible for collecting new transactions, creating a new block, and sending it to the prover. Next, the prover generates a mathematical (validity) proof for the block of transactions, attesting to the new state of the system.

Then, the validity proof is submitted back to the server, which sends the validity proof AND transaction data to the verifier on mainnet Ethereum. Once the verifier approves of the submission, the new state is committed to the blockchain and finality is achieved.

In zkSync v1, transactions have two cost components (a rough worked example follows this list):

  • Off-chain (storage and prover costs): the cost of the state storage and the proof generation.
  • On-chain gas costs: The validator must pay gas, for every block, to verify the proof and publish the state.
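Here’s how those two components could combine into a per-transaction fee. Every number below is invented for illustration; actual zkSync fees depend on live gas prices and batch sizes:

```typescript
// Hypothetical decomposition of a zkSync v1 fee into its two components:
// amortized on-chain verification/publication gas plus off-chain prover cost.
const onChainGasPerBlock = 600_000; // verify proof + publish state on L1
const gasPriceGwei = 50;
const proverCostPerBlockUSD = 3;    // off-chain storage + proof generation
const txsPerBlock = 2_000;
const ethUSD = 3_000;

const onChainUSD = onChainGasPerBlock * gasPriceGwei * 1e-9 * ethUSD; // ~$90
const feePerTxUSD = (onChainUSD + proverCostPerBlockUSD) / txsPerBlock;
console.log(feePerTxUSD.toFixed(4)); // ~$0.0465 per transaction
```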

In November 2021, Matter Labs raised $50M in new Series B funding to build out the zkSync infrastructure. Leading the round was a16z and included existing investors Placeholder, Dragonfly, and 1kx. A second financing round was closed with strategic partners such as Blockchain.com, Crypto.com, Consensys, ByBit, and Covalent. The raise built on a previous $6M Series A round in Feb. 2021.

In true crypto fashion, in January 2022, BitDAO and Matter Labs established zkDAO, a $200M DAO dedicated to growing the zkSync ecosystem. According to the proposal, 70% of the funds were allocated towards strategic growth, 10% for research and education, 7.5% for grants, 7.5% for security/audits, and 5% for operations.

zkSync 2.0

In April 2021, the zkSync team announced zkSync 2.0 and its zkPorter technology, which aims to provide ~$0.01 transactions by moving transaction data off-chain (Validium style) and offering 20K transactions per second (TPS). It also boasts a sharded infrastructure design and arbitrary smart contract capabilities through the support of both Solidity (via zkEVM) and Zinc, the internal programming language of zkSync.

zkSync 2.0 is another L2 rollup that supports EVM programming languages like Solidity, Yul, and Vyper, and in the future Rust and Zinc (2022). This means developers can easily deploy EVM code (Ethereum L1) onto zkSync 2.0, and for users, zkSync 2.0 offers instant withdrawals and objective finality limited only by batch frequency.

zkPorter will be part of the ultimate zkSync 2.0 vision. With zkSync 2.0, the L2 state will be divided into two distinct options: a zk-Rollup with on-chain data availability, and the zkPorter option with off-chain data availability.

zkPorter is the internal consensus mechanism for data availability within zkSync 2.0, enabling the large TPS numbers. zkSync 2.0 can handle ~1,000 to 5,000 TPS as a standard ZKRU, but with zkPorter, it can accommodate ~20,000 to 100,000 TPS (depending on the complexity of each transaction). However, it should be noted that, when utilizing zkPorter, the user is relying on zkSync’s internal consensus mechanism. This requires the user to place their trust in Matter Labs and rely on a solution that is far less secure and decentralized than a rollup that leverages the L1’s consensus mechanism.

The good news is that users are able to choose either option based on their preferences and the trade-offs presented. Basically, each user is able to choose their own amount of security. zkPorter will offer negligible cost but lower security for trivial transactions and the ZK-rollup mode offers maximum security. Both parts will be composable and interoperable: contracts and accounts on the zk-Rollup side will be able to seamlessly interact with accounts on the zkPorter side and vice versa.

The primary difference between zkPorter and Starkware’s Volition is that a user must choose with each zkPorter account whether to produce transactions with off-chain data availability, while in Volition, a user can choose for each transaction within an account.
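That difference is easy to see at the type level. This is just a conceptual sketch, not either project’s actual data model:

```typescript
// Granularity difference described above: zkPorter fixes data availability
// per *account*; Volition chooses per *transaction*.
type DataAvailability = "on-chain" | "off-chain";

// zkSync 2.0 / zkPorter: the account carries the DA choice.
interface ZkSyncAccount {
  address: string;
  dataAvailability: DataAvailability; // set once for the account
}

// StarkEx Volition: each transaction carries its own DA choice.
interface VolitionTx {
  from: string;
  to: string;
  amount: number;
  dataAvailability: DataAvailability; // chosen per transaction
}
```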

As of Q1 2022, zkSync has processed over 4 million transactions with transfer fees less than ~$1. Despite being fairly new, users began moving funds over to ZK-rollup projects like Loopring and zkSync in 2021, especially in Q4 as the chart below illustrates. By December 2021, unique users increased to ~185,000 while deposits eclipsed ~$115 million. For zkSync, the wave of adoption can be attributed to its top projects ZigZag Exchange and Gitcoin, a crowd-funding platform. According to L2fees, token swaps through ZigZag on zkSync have the lowest fees.

Source: Delphi Digital

In October 2021, a closed testnet for zkSync 2.0 launched with Curve Finance as the only initial dApp being trialed. However, UniSync, a fork of Uniswap v2, is currently on a testnet to trial zero-knowledge, general-purpose, EVM-compatible functionality.

zkSync current centralization and control

zkSync remains dedicated to continuing its product development and hitting future milestones like:

  • V2: A big focus for zkSync is delivering its V2, however, no mainnet launch date has been set.
  • Exchange support: Expect easier onboarding once zkSync 2.0 has launched. OKEx and Huobi have signaled an intent to support direct deposits and withdrawals.
  • Decentralization: Currently there is strong centralization around the security council. Matter Labs has stated it intends to decentralize the security council eventually, perhaps with a future governance token (airdrop). zkSync is on record that there will be a token in the future and that ~67% of the supply will be distributed to the community.

However, it should be noted that the current state of zkSync is highly centralized. Although the zkSync multi-signers have a shared economic interest in the success of the project, contracts can be upgraded anytime via the 9/15 multi-sig. Matter Labs claims “the probability of bugs is significantly higher than a malicious collusion between the Matter Labs team and 9/15 members of the security council”.

https://collectiveshift.io/spotlight_report/zksync/

Additionally, as of Q1 2022, zkSync operates several critically important pieces of infrastructure for the zkSync ecosystem. While the zkEVM is not currently publicly available, it is being worked on and managed by the team. Important elements under the team’s control include:

  • The Prover: generates validity proofs
  • Full Node: constructs blocks and runs the zkEVM via the virtual machine
  • Interactor: connects mainnet Ethereum and zkSync rollup, calculates transaction fees based on L1 gas costs

Roadmap and token
While there is a lot of promise and enthusiasm surrounding zkSync, plenty still remains ahead of the team. Per the zkSync roadmap, Matter Labs is working to decentralize zkSync 2.0 by implementing its own independent Proof of Stake (PoS) consensus mechanism. However, as a reminder, the overall security of zkSync will not be solely reliant on this new consensus mechanism since the final verification of state transition proofs is still done on the L1.

In order to release this new PoS system, Matter Labs must introduce a new zkSync token and two new specialized roles: Validators and Guardians. Validators produce the blocks and generate the proofs while the Guardians’ role is to ensure the rollup remains censorship-resistant.

To do this, Guardians will maintain the state on zkPorter by confirming data availability of zkPorter accounts. If there is any failure of data availability, the Guardians will get slashed (economic penalty). Users in a Guardian-led system can always exit the system with their data, so long as at least 1/3 of participating validators remain honest.

One important feature of the zkSync PoS system is that, unlike in alt-L1s or sidechains, Guardians cannot steal funds, only freeze the zkPorter state. And in doing so, they freeze their own stake. Even if this were to occur, remember, due to the ZKRU design, users would still be able to withdraw their funds. Conversely, ORs that are successfully attacked can lose user funds. This is one big advantage of the zkPorter system.

Pros

  • Less data contained in each transaction increases throughput and decreases fees
  • No withdrawal periods and faster finality 
  • Inherent (and cheap) privacy

Cons

  • Generalized smart contract support (similar to StarkNet) not live or production-ready
  • Initial trusted setup ceremony scares some, introduces trust
  • New, less battle-tested cryptography

Resources

zkSync applications

  • Argent is an Ethereum smart contract and social recovery wallet that allows users to designate “guardians,” other people who can help recover lost keys and ensure access to your wallet (should you need them).
  • ZigZag Exchange is zkSync’s first decentralized exchange (DEX) and offers trades for less than $1.
  • Gitcoin Grants is an Ethereum-centric donation platform where the ETH community can donate and fund open-source Web3 projects. Now, users can donate with minimal fees atop zkSync.
  • LayerSwap is a bridge from centralized exchanges (CEXes) to zkSync. It supports ETH and USDT while bridging from Binance, Coinbase, FTX, Huobi, KuCoin, and OKEx.

Other ZK-Rollups

Polygon Hermez

Polygon Hermez is a ZK-rollup on Ethereum that is the product of Polygon acquiring Hermez and merging it into the Polygon ecosystem. Hermez is an open-source ZKRU designed for token transfers similar to zkSync v1. By utilizing zero-knowledge technology, Hermez claims it is able to increase throughput by ~130x (compared to Ethereum L1) while reducing token transfer costs by ~90%.

The Polygon Hermez protocol has an off-chain prover that validates transactions and generates a SNARK proof which gets submitted to the on-chain verifier, just like other ZKRUs. It is not yet EVM-compatible, differing from most solutions previously discussed. However, Polygon Hermez has announced its plans for full EVM-support (zkEVM) with a mainnet launch anticipated in Q2 2022.

Besides the Hermez project, Polygon also has a PoS Ethereum sidechain (discussed earlier), a plasma chain, and is working on optimistic rollup solutions.

Aztec

Aztec is an early zero-knowledge rollup on Ethereum built with a focus on being fully privacy-preserving. In fact, Aztec is a recursive ZKRU (a zk-zk-rollup) that released its private transfer protocol zk.money on mainnet in 2021, making private transactions 25x cheaper when compared to privacy mixers.

In December 2021, the Aztec team announced the testnet of Aztec Connect, a private bridge to Ethereum. Aztec Connect allows any Ethereum dApp to access strong privacy guarantees and ~50-100x cost savings.

L2 drawbacks

L2s are struggling to claim market share compared to alt-L1s, despite the migration of top DeFi dApps from mainnet Ethereum. Most expected L2s to immediately become a hot-spot for developers and users who were priced out of Ethereum mainnet. But to the detriment of Ethereum, other L1s, especially EVM-compatible chains in which users can easily bridge over their ETH, stole the limelight. Ecosystems like Polygon and Avalanche that dedicated a portion of their token treasuries to user incentives were key to making this happen.

While each L2 does improve performance, the primary drawback to L2 solutions is further fragmentation of the network. With L2s, there is no single, global state that supports composable smart contracts. Cross-L2 transfers currently are not seamless, and sidechains require bridging, which means an overall lack of communication between various L2 projects.

Similar to competing L1 blockchains, rollups are not naturally composable with each other. Rollups break interoperability/composability, meaning there is no seamless, frictionless way to communicate messages across different L2s at the moment. Much of the critical infrastructure currently deployed in live rollups, like sequencers or bridges, consists of centralized, black-box solutions. Without rollups communicating with one another, liquidity is siloed within each rollup (unless it is bridged over). This leads to the fragmentation of liquidity, resulting in a worse user experience for all, i.e. shallow order books, increased slippage on trades, and fewer dApps available.

But there are many live interoperability solutions like Hop, Connext, Li.Finance, LayerSwap, cBridge, dAMM, and more already working to “bridge” liquidity and remedy this issue. In addition, projects are already working on internally-sharded ZK-rollups, a rollup within a rollup. These are mostly theoretical at the moment but could retain full synchronous composability and deliver another ~100x improvement in TPS.

These solutions, known as “bridges,” are systems that transfer data between two or more blockchains or rollups. There are several components to most bridge designs (a minimal sketch follows the list):

  • Monitors: A validator, oracle, or relayer must monitor the state on the chain.
  • Relayer: A relayer needs to relay transaction data/messages from the main chain to the rollup.
  • Consensus: In some models, consensus is required between the actors monitoring the source chain in order to relay that information to the destination chain.
  • Signing: A participant needs to cryptographically sign the data sent to the destination chain.
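Putting those components together, a minimal relayer loop might look like the following. All interfaces here are hypothetical, and consensus among multiple monitors is omitted for brevity:

```typescript
// Sketch of the monitor/relay/sign pipeline above (names hypothetical).
interface BridgeEvent { txHash: string; payload: string }

interface SourceChain { pollEvents(fromBlock: number): BridgeEvent[] }
interface DestChain { submit(payload: string, signature: string): void }
interface Signer { sign(data: string): string }

function runRelayer(src: SourceChain, dst: DestChain, signer: Signer): void {
  let cursor = 0; // naive block cursor, purely for illustration
  setInterval(() => {
    // 1. Monitor: watch the source chain for bridge events.
    for (const ev of src.pollEvents(cursor)) {
      // 2-4. Relay + sign: forward the message with a signature the
      // destination chain's bridge contract can verify.
      dst.submit(ev.payload, signer.sign(ev.payload));
    }
    cursor += 1;
  }, 5_000);
}
```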

Another key obstacle for L2 adoption is the UX and cost for users onboarding to an L2. The obvious solution is fiat and exchange on-ramps directly to an L2. As of Q1 2022, only a select few centralized exchanges support native withdrawals to L2s. This means a user must first withdraw to the L1 and then bridge over to the L2. This is costly and adds friction to the user experience. A current workaround is to use an exchange to withdraw to a sidechain like Polygon PoS, which has sufficient liquidity in cross-chain (centralized) bridges like Hop or Connext.

More on bridges coming soon!

Donations (much appreciated!)

Ethereum address (mainnet): 0x47904DD8aadb2Ec822bDbbe99D5E25077c8c85Bf

Loopring L2 (Counterfactual wallet) address: 0xe03f75859f21bcf048e5391649a5d8fc12825983

zk.money address: @guntag

Argent address: tomtomtom.argent.xyz

Disclaimer: Not financial advice. Informational purposes only. Opinions are my own. However, this piece was an attempt to aggregate and distill (nearly) the entirety of Ethereum’s scaling efforts, the Merge, L2s, and rollups. No easy task. It was written on the shoulders of the entire industry. I did my best to give proper attribution, but I would also like to thank the following:

@Starknet_Intern
@starknetstatus
@Immutable
@zkLend
and many, many more!
