EigenDA - An Introduction


Scaling has been a central theme of blockchains and smart contract platforms for almost a decade. More recently, data availability solutions such as Celestia have come to market to increase the throughput of blockchains. We have now announced that we are taking a step further into the data availability sector by participating in EigenLayer’s data availability solution, EigenDA. Let’s first take a look at what data availability is and how it’s helping blockchains scale.

Scalability

The base layer of Ethereum prioritises decentralisation by keeping node hardware requirements low, thus allowing for high participation. Each validator running an execution client and a consensus client must download and re-execute every Ethereum block, and every transaction inside it, to verify that they are valid. After downloading all previous blocks, they must then do the same for each newly created block, every ~12 seconds. On the base layer, all data is made ‘available’ because every validator has to download each block and verify its contents. This is no small feat, especially on the low hardware specifications that Ethereum validators run on. By prioritising decentralisation and security, the Ethereum base layer sacrifices scalability, and therefore processes only around ~14 transactions per second (TPS).

Keeping the base layer maximally decentralised, Ethereum is now embarking on a “rollup-centric” roadmap, whereby rollups act as the scaling solution enabling cheap and fast transactions. Rollups process transactions off the main blockchain (layer 1) before finalising them back on it: they take on execution, while Ethereum handles consensus, settlement and data availability. This effectively expands Ethereum’s processing capacity by shifting transaction handling (i.e. execution) to a secondary offchain virtual machine, while still maintaining the robust security guarantees of the Ethereum network.

But how can we be sure that the transactions occurring on these Optimistic rollups and ZK-rollups are valid? Data availability and cryptographic proofs. Let’s walk through it (a minimal code sketch follows the list):

  1. User transacts on the rollup and this changes the state of the rollup (asset holdings and accounts of each address).

  2. These transactions are sent directly to the rollup’s block builder, i.e. the sequencer in the case of Arbitrum and Optimism.

  3. The sequencer orders the transactions (usually first come, first served) into a block.

  4. The sequencer (or the proposer, in the case of ZK-rollups) then sends all this data to Ethereum / the base chain via calldata.

  5. This means all the data has been made available by posting it back to Ethereum

  6. In the case of Optimistic rollups, fraud proofs can be generated to challenge invalid transactions. With ZK-rollups, a zero-knowledge proof is submitted to prove validity.
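To make steps 1 to 5 concrete, here is a minimal sketch of the flow, assuming a hypothetical first-come-first-served sequencer; the `Sequencer` class and `post_to_l1` function are illustrative names, not any real rollup’s API:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Sequencer:
    pending: List[bytes] = field(default_factory=list)

    def receive(self, tx: bytes) -> None:
        # Steps 1-2: a user transaction arrives at the rollup's block builder
        self.pending.append(tx)

    def build_block(self) -> List[bytes]:
        # Step 3: order the pending transactions first come, first served
        block, self.pending = self.pending, []
        return block

def post_to_l1(block: List[bytes]) -> bytes:
    # Steps 4-5: concatenate the batch and publish it as calldata in an
    # Ethereum transaction, making the data available to anyone
    return b"".join(block)

seq = Sequencer()
seq.receive(b"transfer A->B")
seq.receive(b"swap X for Y")
calldata = post_to_l1(seq.build_block())
```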

The issue here is that posting data back to Ethereum via calldata is very expensive, as the calldata space in each block is limited. Let’s walk through the numbers:

Ethereum targets 15 million gas units per block. One non-zero byte of Ethereum calldata costs 16 gas units, so if rollups filled up 50% of a block (7.5M gas), they would have 468,750 bytes of data per block (7.5M gas / 16 gas) to make transaction data available. Taking zkSync’s average transaction size of 177 bytes, this would mean 2,648 zkSync transactions could be included per block (468,750 bytes per block / 177 bytes per transaction). With Ethereum block times of 12 seconds, this works out to roughly 220 TPS. Not exactly equipped for mass adoption, or even the current number of market participants. This example also assumes that zkSync (or all L2s combined) take up 50% of each Ethereum block, which would of course be extremely expensive, and this cost would flow back to L2 users via higher L2 fees (roughly 90% of L2 fees are the cost of posting data to Ethereum).
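The arithmetic above can be reproduced in a few lines; the 50% block share and the 177-byte average transaction size are the assumptions stated in the text:

```python
GAS_TARGET_PER_BLOCK = 15_000_000   # Ethereum's per-block gas target
GAS_PER_CALLDATA_BYTE = 16          # cost of one non-zero calldata byte
ROLLUP_SHARE = 0.5                  # assume rollups fill half of every block
TX_SIZE_BYTES = 177                 # zkSync's average transaction size
BLOCK_TIME_SECONDS = 12

bytes_per_block = GAS_TARGET_PER_BLOCK * ROLLUP_SHARE / GAS_PER_CALLDATA_BYTE
txs_per_block = bytes_per_block / TX_SIZE_BYTES

print(bytes_per_block)                      # 468750.0 bytes of calldata
print(txs_per_block)                        # ~2648 transactions per block
print(txs_per_block / BLOCK_TIME_SECONDS)   # ~220 TPS
```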

Ethereum’s ultimate answer to this scalability issue is Danksharding; however, Danksharding is at least two years away from going live. In the meantime, the Dencun upgrade will ship Proto-danksharding, which should make rollups approximately 90% cheaper while still facing similar data throughput limits. Third-party data availability solutions have come to market to address this scalability issue. The highest-profile project is Celestia, which boasts 8 MB/s of throughput and scales with the number of light nodes on the network. However, other third-party DA layers such as Near, Avail and now EigenDA have started to offer their services.

EigenDA

EigenDA is a data availability service and a significant addition to the EigenLayer ecosystem. As the inaugural actively validated service (AVS) on the restaking protocol, it introduces a novel approach to data handling in the blockchain space. Rollups can post data to EigenDA and use it as their data availability layer, while leaving consensus and settlement to Ethereum and execution to the rollup, keeping similar trust assumptions while cutting costs significantly.

Through EigenDA, restakers on EigenLayer can allocate their stake to node operators who validate data for EigenDA. Rollups leverage EigenDA by submitting data blobs, including sequencing or state data (the same data that would otherwise be sent to Ethereum as calldata), to a component known as the Disperser. The Disperser breaks the data into smaller segments, which are then distributed to EigenDA nodes for storage.

A distinct advantage of EigenDA lies in its horizontal scaling capability. Via erasure coding, each node is required to download only a fraction of the data (on the order of 1/n, where n is the total number of nodes), leading to less congestion and higher throughput as more nodes join the network. EigenDA distributes small data segments to full nodes, rather than relying on light clients sampling for data availability, primarily because its operators also function as Ethereum validators. This dual responsibility necessitates a lightweight approach, hence the distribution of only portions of the data to each node operator.
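As a toy illustration of the erasure-coding idea (not EigenDA’s actual codec), here is a Reed-Solomon-style sketch over a prime field: k data symbols are extended to n shares, and any k of the shares reconstruct the original data:

```python
P = 2**61 - 1  # a Mersenne prime; all symbols live in the field mod P

def lagrange_at(points, x):
    """Evaluate the unique polynomial through `points` [(xi, yi)] at x, mod P."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P  # Fermat inverse
    return total

def encode(data, n):
    """Extend k data symbols (evaluations at x = 0..k-1) to n shares."""
    pts = list(enumerate(data))
    return [(x, lagrange_at(pts, x)) for x in range(n)]

def recover(shares, k):
    """Rebuild the original k symbols from ANY k of the n shares."""
    return [lagrange_at(shares[:k], x) for x in range(k)]

data = [42, 7, 99]                        # k = 3 symbols
shares = encode(data, n=6)                # 6 chunks, one per operator
assert recover(shares[3:], k=3) == data   # the last 3 chunks alone suffice
```

Each operator holds just one share, so per-node storage stays small while any k shares keep the blob recoverable.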

Unlike Celestia, EigenDA doesn’t depend on fraud proofs, which often require full nodes to download entire blocks for verification. Instead, it uses KZG commitments and proofs. These are prepared by the EigenDA Disperser and allow nodes to verify that a blob was correctly encoded without needing to download the full blob. This method, akin to Ethereum’s Danksharding design, eliminates the need for an honest-minority assumption, a requirement in systems like Celestia that use fraud proofs.
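To show the algebra behind a KZG opening, here is a deliberately insecure toy in which the trusted-setup secret `s` is kept around so the pairing check can be replayed as plain field arithmetic; a real scheme hides `s` inside elliptic-curve points and verifies with a pairing:

```python
P = 2**61 - 1          # prime field for the toy
s = 123456789          # trusted-setup secret (visible here ONLY for illustration)

def poly_eval(coeffs, x):
    """Evaluate a polynomial (lowest-order coefficient first) at x, mod P."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

def commit(coeffs):
    return poly_eval(coeffs, s)          # stands in for the curve point p(s)*G

def open_at(coeffs, z):
    """Return (proof, y): y = p(z) and proof = q(s), where q = (p - y)/(x - z)."""
    y = poly_eval(coeffs, z)
    q, carry = [0] * (len(coeffs) - 1), 0
    for i in range(len(coeffs) - 1, 0, -1):   # synthetic division by (x - z)
        carry = (coeffs[i] + carry * z) % P
        q[i - 1] = carry
    return poly_eval(q, s), y

def verify(commitment, z, y, proof):
    # Real KZG checks e(proof, (s-z)*G2) == e(commitment - y*G, G2);
    # in the exponent that is q(s) * (s - z) == p(s) - y, checked directly here.
    return proof * (s - z) % P == (commitment - y) % P

blob = [7, 1, 3, 9]                  # a blob interpreted as poly coefficients
C = commit(blob)
proof, y = open_at(blob, z=5)
assert verify(C, 5, y, proof)        # one point checked without the full blob
```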

EigenDA also uses a dual quorum feature: data availability must be attested by two separate quorums, enabling involvement from both the rollup’s native token stakers and ETH restakers. In line with Ethereum’s approach, EigenDA employs a proof-of-custody mechanism, ensuring that nodes store data for the designated period and fulfil their duties, with penalties for non-compliance.
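A dual-quorum check might look like the following hypothetical sketch, where a blob only counts as available once both quorums clear their stake thresholds (the names and thresholds here are illustrative, not EigenDA’s actual parameters):

```python
from dataclasses import dataclass

@dataclass
class Quorum:
    total_stake: int      # all stake registered in this quorum
    attested_stake: int   # stake behind the collected availability signatures
    threshold_pct: int    # required percentage of total stake, e.g. 67

    def met(self) -> bool:
        return self.attested_stake * 100 >= self.total_stake * self.threshold_pct

def blob_available(eth_restakers: Quorum, native_stakers: Quorum) -> bool:
    # Dual quorum: BOTH attestations must clear their thresholds independently.
    return eth_restakers.met() and native_stakers.met()

assert blob_available(Quorum(1000, 700, 67), Quorum(500, 400, 67))
assert not blob_available(Quorum(1000, 700, 67), Quorum(500, 100, 67))
```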

EigenDA Mechanics

  1. In the initial stage, the Sequencer of the rollup compiles a block of transactions and initiates a request to distribute the data blob.

  2. Next, the Disperser takes charge. Its role includes breaking down the data blobs into smaller pieces through erasure encoding, creating both a KZG commitment and multiple KZG reveal proofs, and then dispatching these chunks along with their commitments and proofs to the operator nodes within the EigenDA network.

  3. Rollups have the option to either run their own Disperser or use a dispersal service provided by a third party, such as EigenLabs, for improved efficiency and reduced signature verification costs. They can also use a third-party service while keeping their own Disperser as a backup in case the third-party service fails or censors, maintaining a balance between cost efficiency and censorship resistance.

  4. Finally, the EigenDA nodes validate their chunks against the KZG commitment with the help of the multi-reveal proofs. After ensuring the data’s integrity, they store it and send a verification signature back to the Disperser, completing the process (sketched in code below).
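Putting the four steps together, a node’s side of the protocol has roughly the following shape; `verify_chunk`, the storage dict and the hash-based attestation are stand-ins for EigenDA’s real KZG verification, storage and BLS signing:

```python
import hashlib
from dataclasses import dataclass

@dataclass
class Chunk:
    index: int          # which erasure-coded piece this operator holds
    data: bytes
    kzg_proof: bytes    # multi-reveal proof for this chunk

STORAGE: dict = {}

def verify_chunk(commitment: bytes, chunk: Chunk) -> bool:
    # Stand-in: a real node checks the chunk against the KZG commitment
    # using the multi-reveal proof (step 4).
    return len(chunk.kzg_proof) > 0

def handle_dispersal(commitment: bytes, chunk: Chunk) -> bytes:
    if not verify_chunk(commitment, chunk):
        raise ValueError("chunk does not match blob commitment")
    STORAGE[chunk.index] = chunk.data   # hold the data for the custody period
    # Send a verification signature back to the Disperser (stand-in hash).
    return hashlib.sha256(commitment + chunk.index.to_bytes(4, "big")).digest()
```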
