Rollups Are the Final Frontier of Scaling

Introduction

The fundamental limits of L1 and L2 scaling differ, and so do their respective design spaces. Until very recently, most teams focused primarily on pushing the boundaries of L1 scaling. Now, teams like MegaEth are exploring what’s possible beyond those boundaries. Rollups are the final frontier of blockchain scaling: they combine the performance of traditional servers with the blockchain properties they inherit from their base layer.

The Final Frontier

Ethereum launched in 2015 under the codename “Frontier”. Since then, many L1s have launched to explore the boundaries of what’s possible given a global consensus network. This L1 phase of exploration is depicted below, where projects like Solana, and now Monad, are pushing up against L1 scaling limits.

Rollups rely on L1 consensus for blockchain properties such as censorship resistance, finality and safety. This means that rollup builders can make optimizations that don’t make sense at the L1 level; they are not beholden to the fundamental L1 scaling constraints we’ll cover in the next section. This L2 phase of exploration is depicted below, where projects like MegaEth are discovering the boundaries of L2 scaling limits. The line representing L2 scaling limits is dotted to show that it’s actively being explored.

L1 vs L2 Scalability Bottlenecks

There are inherently different bottlenecks at the L1 and L2 levels. L1s are limited by consensus and network level bottlenecks, while L2s are limited by hardware level bottlenecks. Let’s visit each bottleneck and see what the limitations are in detail.

At the consensus level, the overhead is a combination of the number of communication rounds required and the size of the messages exchanged in each round: many consensus algorithms require multiple rounds, and transaction data and consensus messages must be propagated to peers. More efficient consensus algorithms can raise this bottleneck closer to, but not past, the network level bottleneck.

At the network level, the constraint is how much network bandwidth a machine has access to. On home internet connections, a 1 Gbps link is on the high end, and real throughput is normally lower than advertised. Cloud providers like AWS offer higher bandwidth links, from 10 Gbps to hundreds of Gbps.
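To make the bandwidth ceiling concrete, here is a rough back-of-the-envelope sketch. The 200-byte transaction size and 4x gossip replication factor are illustrative assumptions, not figures from any specific chain:

```python
def max_tps(bandwidth_gbps, tx_size_bytes, replication_factor=1.0):
    # Bits sent per transaction, including re-broadcast to peers (assumed factor).
    bits_per_tx = tx_size_bytes * 8 * replication_factor
    # Theoretical ceiling; ignores consensus messages and protocol overhead,
    # so real throughput sits well below this number.
    return bandwidth_gbps * 1e9 / bits_per_tx

print(max_tps(1, 200, replication_factor=4))   # 1 Gbps home link: 156250.0
print(max_tps(10, 200, replication_factor=4))  # 10 Gbps cloud link: 1562500.0
```

Even before consensus overhead is counted, a home link caps out an order of magnitude below a cloud link under the same assumptions.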

AWS manages its “Backbone” network to provide high bandwidth across its data centers, while third party brokers offer solutions between different cloud providers. High bandwidth requirements become a centralizing force: node operators are forced to deploy nodes to specific cloud providers and data centers to ensure sufficient network throughput and latency.

At the hardware level, the constraint is how much raw compute, memory and disk I/O the machine has. For reference, an EC2 instance in AWS can have up to 24 TB of memory, 512 vCPUs and 120 TB of disk with up to 3 million input/output operations per second (IOPS). High throughput storage can achieve 10-50 Gbps in reads and writes.
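A similar sketch shows how the disk I/O figure above bounds throughput. The assumption of 10 I/O operations per transaction is illustrative; real state access patterns vary widely:

```python
def max_tps_disk(iops, io_ops_per_tx):
    # Throughput ceiling when every transaction costs several disk operations
    # (state reads and writes). Caching and batching can push real numbers
    # above or below this naive bound.
    return iops / io_ops_per_tx

# 3 million IOPS instance, assuming ~10 state accesses per transaction:
print(max_tps_disk(3_000_000, 10))  # 300000.0
```

Even a naive bound like this sits far above what consensus and network constraints permit at the L1 level, which is the point of the section: L2s are capped by hardware, not by the network.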

As shown in the diagram below, the L1 scaling range is capped at the network level while the L2 scaling range is capped at the hardware level. Consensus algorithms, network bandwidth and hardware should all improve over time, but the L2 scaling range will always sit above the L1 range.

Counter Arguments

It’s technically possible for an L1 to push its network and consensus bottlenecks up closer to the hardware level. Such an L1 could run three nodes co-located in a single data center, even in the same server blade or rack. However, this sacrifices decentralization for the sake of performance. The setup is closer to a single traditional database with two read replicas than to a blockchain.

Any small change to decentralize the setup dramatically brings down the scalability ceiling of the L1. For example, packet round trip times between North America and Asia are anywhere between 120-300 ms. Splitting the three nodes across geographies instantly pushes the network level bottleneck down.
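The latency penalty can be sketched with simple arithmetic. Assuming each consensus round costs at least one round trip and a two-round protocol (both illustrative assumptions):

```python
def max_decisions_per_second(rtt_ms, rounds_per_decision):
    # Each consensus round costs at least one network round trip,
    # so RTT puts a hard floor on time-to-decision.
    return 1000 / (rtt_ms * rounds_per_decision)

print(max_decisions_per_second(0.5, 2))  # co-located rack (~0.5 ms RTT): 1000.0
print(max_decisions_per_second(200, 2))  # NA <-> Asia (~200 ms RTT): 2.5
```

Moving from a co-located rack to a transpacific deployment cuts the consensus decision rate by roughly two to three orders of magnitude, before any bandwidth effects are counted.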

Unlike an L1, an L2 does not have to bootstrap its own consensus or trust layer and can instead rely on an L1 that is sufficiently decentralized and trusted by market participants. L2 assets issued on the L1 remain secure, while computation over those assets can happen at hardware level speeds, assuming the existence of a native bridge with fraud or validity proofs.

Impact on Interoperability

On the final frontier, rollups will have insanely fast block times and high throughput. What does this type of scale mean for interoperability?

  • L1 hub and spoke protocols do not work.

    • Scalability of an L1 hub is capped in the L1 scaling range.
  • Point to point protocols work for a small number of rollups but scale inefficiently as the number of rollups grows.

    • N^2 connections are created and maintained for N chains.

    • Adding an interop feature such as timeouts increases total costs on the order of N.

    • Verifying safety properties such as validity, DA and ordering increases total costs on the order of N^2.
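The scaling difference in the list above can be sketched numerically, counting directed connections to match the O(N^2) figure:

```python
def point_to_point_links(n):
    # Every ordered pair of rollups maintains its own connection: O(N^2).
    return n * (n - 1)

def hub_and_spoke_links(n):
    # Every rollup connects only to the hub: O(N).
    return n

for n in (10, 100, 1000):
    print(n, point_to_point_links(n), hub_and_spoke_links(n))
```

At 10 rollups the gap is manageable (90 vs 10 links); at 1000 rollups the point to point model requires nearly a million connections against the hub model’s thousand.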

The most efficient and scalable model for rollup interoperability is a rollup hub and spoke protocol. Polymer is building an ecosystem of purpose-built rollups, starting with Polymer hub, a rollup purpose-built for interoperability. The final frontier works in real time, and real time blockchains need real time interoperability.
