Mantle x EigenLayer — AMA Recap

We’ve been talking about decentralized data availability layers and the support we’ve received from our core technology partner, EigenLayer. But what exactly is data availability, and how are EigenLayer and Mantle both driving change in the Ethereum L2 landscape?

Find the answers in the replay of our highly anticipated AMA, where EigenLayer’s Founder @SreeramKannan and Chief Strategy Officer @CJLIU49, together with Mantle’s Product Lead @jacobc_eth and Product Manager @midroni, dove deep into Mantle’s data availability layer, or catch the summarized bites below.

* Some sentences have been edited for clarity and brevity.

1. Tell us more about EigenLayer and what the team is working on.

C: EigenLayer is a restaking protocol. Ethereum, after the merge, works on a full proof-of-stake model, and there is a lot of ETH that is staked across a lot of validators that are securing the network — all the transactions and economic activity on top of it. The essential question is: Could you reuse or rent out the staked asset(s) or validator set to other networks, services, protocols and applications other than just Ethereum?

If you wanted to build certain types of blockchain-based software like oracles, bridges or data availability solutions, you might need high crypto-economic security to launch one of these systems. With EigenLayer, you can rent it from staked assets that are currently being used to secure Ethereum rather than try to bootstrap security from scratch yourself.

Many years of work went into Ethereum and getting people to trust the network. It's been around a long time, and it has accumulated a ton of trust and security.

Typically, a new network that needs a lot of security is likely to launch its own token and validator set to bootstrap that security base from the ground up. But with EigenLayer, you could let ETH stakers opt in to restake their staked ETH to not only secure the Ethereum network, but also secure whatever network, application or service that you are trying to build as well.

2. How are EigenLayer and Mantle going to work together?

J: EigenLayer has a product called EigenDA, which is a decentralized data availability layer that is secured by Ethereum. I was introduced to some of the talks that Sreeram was giving and was honestly deeply inspired. Sreeram and I talked about how EigenLayer could secure other features of Ethereum that were not built into the core protocol, but which could still have the same shared security and decentralization derived from Ethereum. One of the offerings that EigenLayer has been working on, EigenDA, helps us to dramatically reduce the gas fees of a Layer 2 (L2) beyond what an existing L2 is capable of.

Just to give some context into why this is the case: the vast majority of transaction fees incurred on an L2 go toward paying for data on Layer 1 (L1). L1 was never meant to be used as a data protocol; it’s good at a great many things, but data storage is not really what it was intended for. With EigenDA, we’re able to break the data portion of the transaction out into a separate data availability protocol, and to let restaking incentives ensure that the data protocol is still secured by Ethereum. Ultimately, our goal is to reduce L2 transaction fees by about an additional 80% beyond typical L2 transaction fees today.
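As a rough illustration of that fee math (the numbers below are made up for illustration, not actual Mantle fees), the total L2 fee can be modeled as an execution cost plus an L1 data cost, with an external DA layer discounting only the data portion:

```python
# Toy model of L2 fee composition. Most of an L2 fee pays for publishing
# transaction data to L1; moving that data to a cheaper DA layer shrinks
# the total fee. All numbers are illustrative assumptions.

def l2_fee(execution_fee: float, l1_data_fee: float, da_discount: float = 0.0) -> float:
    """Total L2 fee; `da_discount` is the fraction of the L1 data cost
    avoided by posting data to an external DA layer instead of L1."""
    return execution_fee + l1_data_fee * (1.0 - da_discount)

baseline = l2_fee(execution_fee=0.02, l1_data_fee=0.18)                    # data dominates
with_da  = l2_fee(execution_fee=0.02, l1_data_fee=0.18, da_discount=0.9)   # data offloaded

reduction = 1.0 - with_da / baseline   # ~0.81, i.e. roughly an 80% cut
```

With data making up ~90% of the fee and the DA layer cutting that data cost by ~90%, the total fee drops by about 81%, in line with the "additional 80%" target quoted above.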

3. What’s the relationship between EigenLayer and EigenDA?

S: EigenLayer is a general-purpose platform that supplies decentralized trust from Ethereum network stakers who opt in to any service built on top of it. This could be data availability services, MEV management, bridges, event-driven activation or even whole new chains. When we looked at the most interesting service that could be built on Ethereum, and what is most needed today, we settled on data availability. What EigenDA essentially does is take the Ethereum validator set as a security source and create a new kind of protocol. Today, whenever users want to commit data, they write it directly into Ethereum. Rollups offload computation and just submit proofs, but for somebody else to verify that a proof was executed correctly, or to continue executing proofs afterwards, they need a snapshot of the inputs to that computation, which must be published to Ethereum. However, if you had a separate layer supported by Ethereum’s stakers, this proof of publication, or data availability, could happen at a much lower cost.

Ethereum itself has seen many breakthroughs that actually contribute to improving data availability. The most important one is called danksharding, which is still quite far down the road, and its intermediate version is called proto-danksharding. Both of these are built out of basic primitives like erasure codes and KZG polynomial commitments, which form the core foundations behind some ZK proofs.

EigenDA takes some of these cryptographic elements and builds a new module on top of EigenLayer for providing very cheap data availability. And when we talk about cheap data availability, instead of trying to evaluate the fees, at EigenLayer, we think that we should start evaluating the cost basis of providing data availability. The reason Ethereum’s data availability is expensive is because the total data bandwidth available on Ethereum is limited.

If we can expand the data bandwidth available on our data availability service, then we can significantly reduce the cost, especially if we can accomplish this without increasing node requirements, and that’s exactly what EigenDA does. Just to give a sense of the orders of magnitude here: if you use the Ethereum blockchain purely as a data availability layer (no computation, just writing data), Ethereum can support approximately 80 kilobytes per second. What we’re shooting for in our first version is around 10 megabytes per second. That’s a couple of orders of magnitude of improvement over what Ethereum has today.
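A quick sanity check on those figures (treating the quoted rates as exact):

```python
import math

# Quoted figures: Ethereum-as-pure-DA ~80 KB/s, EigenDA v1 target ~10 MB/s.
eth_da_bandwidth = 80 * 1024           # bytes per second
eigenda_target   = 10 * 1024 * 1024    # bytes per second

speedup = eigenda_target / eth_da_bandwidth   # 128x
orders_of_magnitude = math.log10(speedup)     # ~2.1
```

A 128x jump is just over two orders of magnitude, matching the "couple of orders of magnitude" claim.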

As we get more and more interesting use cases on EigenLayer like Mantle, and as demand for these services goes up, we want to address the bandwidth issue by increasing bandwidth in proportion to demand. We talk a lot about block space in the crypto community, which basically means how much data or computation bandwidth is available. The model we are building for EigenDA is that the capacity of this data cloud storage expands as demand increases, and the underlying technology and distributed systems are already available. When you’re out of space and want to store more data, the cloud stretches to accommodate demand. We just need better engineering to make this feasible.

That's the core underlying promise of EigenDA, and the way it accomplishes what I mentioned previously is by not requiring every node in the network to download all units of data. Every node in the network downloads only a very small portion of the data, but together, these nodes have a complete overview of all the data. So that even if a lot of nodes go offline, you can still reconstruct a complete dataset. That's the core principle of EigenDA, and that's why we can reduce the cost of data availability significantly in comparison to Ethereum.
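The "every node holds a small piece, any sufficiently large subset can rebuild the whole" property comes from erasure coding. Below is a toy polynomial erasure code over a prime field: the data symbols become the coefficients of a polynomial, each node stores one evaluation, and any k of the n shares recover the full data via Lagrange interpolation. EigenDA’s real construction (erasure codes plus KZG commitments) is far more involved; this sketch only shows why no single node needs the whole dataset.

```python
# Toy erasure code: data -> polynomial coefficients -> one evaluation per node.
# Any k of the n shares reconstruct all k data symbols.
P = 2**31 - 1  # a Mersenne prime, our field modulus

def encode(data: list[int], n: int) -> list[tuple[int, int]]:
    """Give node i the evaluation of the data polynomial at x = i + 1."""
    return [(x, sum(c * pow(x, j, P) for j, c in enumerate(data)) % P)
            for x in range(1, n + 1)]

def decode(shares: list[tuple[int, int]], k: int) -> list[int]:
    """Recover the k data symbols from any k shares (Lagrange interpolation)."""
    shares = shares[:k]
    coeffs = [0] * k
    for i, (xi, yi) in enumerate(shares):
        basis, denom = [1], 1                   # expand prod_{j != i} (x - xj)
        for j, (xj, _) in enumerate(shares):
            if i != j:
                basis = [(a * -xj + b) % P for a, b in zip(basis + [0], [0] + basis)]
                denom = denom * (xi - xj) % P
        w = yi * pow(denom, -1, P) % P          # yi / prod (xi - xj)
        for d in range(k):
            coeffs[d] = (coeffs[d] + w * basis[d]) % P
    return coeffs
```

For example, encoding two data symbols into four shares means any two surviving nodes suffice: `decode(encode([5, 7], 4)[2:], 2)` recovers `[5, 7]` even though the first two nodes are offline.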

4. How would you compare it to other blockchain data availability services that are out there?

S: What modularizing the blockchain means is that there are going to be separate layers: for example, one that just provides data availability and another layer that covers computation or other functions. Some of the members from the Ethereum community have gone on to build other blockchains, one of them being Celestia, which is a data availability blockchain. Polygon has also built its own version of a data availability blockchain called Polygon Avail. And of course, I also talked about danksharding and proto-danksharding, which pioneered several basic primitives that are needed to build a scalable data availability system. What is the fundamental difference between these systems and EigenDA?

Firstly, EigenDA is supported by EigenLayer, which is in turn supported by the Ethereum network. Celestia and Polygon had to create new networks to support data availability and its validation, and the reason is that, prior to EigenLayer, whenever you had a new idea for a system providing a certain service, the only way to build it was to create a whole new blockchain network. That means a new set of validators and a new token to facilitate and drive validator participation. You have to have some kind of consensus protocol, which then maintains this new blockchain, and finally data availability, which is built on top.

The luxury we have building on EigenLayer is that a lot of those things have already been taken care of:

  • The validator network is not a new network but a subset of the Ethereum network, and Ethereum stakers can just opt in anytime

  • Because we’re an Ethereum-centered project, we don’t have to build a new consensus protocol or a new layer

  • We don’t have a separate ledger; we just write data availability commitments to Ethereum, and Ethereum itself orders the data

All we're building is one particular unit of a layer that is responsible for just data availability. The separate data availability engine only does this one thing, which is to receive units of data that are then aggregated and ordered on top of Ethereum. You have many degrees of freedom in building the system, and you can optimize the system for latency and throughput in ways that you would not be able to when the system also has to do other things, such as ordering the ledger and more. By adopting the separation principle, we can actually get a series of performance benefits, which is difficult to achieve in these other systems.

Another dimension of EigenDA that we've thought about a lot is pricing. Existing blockchain systems are fundamentally pricing only for congestion, which means, on Ethereum for example, if there is not enough transaction demand to fill up the block space, the pricing protocol will start pushing prices down until there is some demand to fill the block space. This is what we call congestion pricing. But in the cloud, there is no congestion — the idea of EigenDA is to price things so that you can actually reserve a certain amount of data bandwidth.
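Congestion pricing of the kind described here can be sketched with an EIP-1559-style base-fee update, where the price moves up when blocks are fuller than target and down when they are emptier; on an underused chain the price keeps decaying. The function below is a simplified model of that mechanism, not Ethereum’s exact integer arithmetic:

```python
# EIP-1559-style congestion pricing sketch: the base fee adjusts by at most
# +/-12.5% per block depending on how full the last block was vs. its target.
def next_base_fee(base_fee: float, gas_used: int, gas_target: int) -> float:
    delta = (gas_used - gas_target) / gas_target / 8   # 1/8 = 12.5% max swing
    return base_fee * (1 + delta)

fee = 100.0
for _ in range(10):                # ten consecutive empty blocks...
    fee = next_base_fee(fee, gas_used=0, gas_target=15_000_000)
# ...and the fee has decayed by 12.5% per block, toward its floor
```

This is exactly the behavior the speaker points out: with no congestion, the protocol pushes the price down until demand appears.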

The very premise of it is that data bandwidth should be abundant. But if data bandwidth is abundant, there is no congestion. If there is no congestion, there is no price. And of course, a system doesn't work without price because stakers and validators are putting their capital and operational expertise into it, so they need to get paid.

The way we do it is with a completely different method of pricing: reservation pricing. You can reserve a certain amount of bandwidth on EigenDA over a specific period of time, say six months or one year, and during that period, that bandwidth is completely reserved for that particular rollup. It provides price certainty that is simply not available anywhere else, and you can then design the economics around it quite efficiently. Reservation pricing enables rollups like Mantle to create more innovative economic systems that give their users price certainty at a specific level of throughput (TPS).
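A minimal sketch of the contrast (all rates are made-up numbers, not EigenDA pricing): a reservation fixes the bill for the whole window up front, whereas pay-per-use congestion pricing tracks a volatile spot price.

```python
# Reservation pricing vs. congestion pricing, with illustrative numbers.
def reserved_cost(bandwidth_mbps: float, months: int, rate_per_mbps_month: float) -> float:
    """Fixed, known-in-advance cost for the whole reservation window."""
    return bandwidth_mbps * months * rate_per_mbps_month

# A rollup reserving 2 MB/s for 6 months knows its DA bill up front:
bill = reserved_cost(2.0, 6, rate_per_mbps_month=1_000.0)   # 12,000, fixed

# Under congestion pricing, the same 2 MB/s usage rides a volatile spot price:
spot = [800.0, 2_400.0, 600.0, 3_100.0, 900.0, 1_500.0]     # per MB/s, per month
congestion_bill = sum(2.0 * p for p in spot)                # unknown in advance
```

The reserved bill is a single number the rollup can plan its own user-facing fees around; the congestion bill is only known after the fact.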

Moreover, there’s dual staking. To secure something on EigenLayer, each staker is basically underwriting validation, and the only way we can hold them accountable is through a provable on-chain slashing event. If another token community wants to stake and participate in the same type of validation (for example, providing data availability on EigenDA), there is another dimension where that token now acts as both a staking and a validation token. We’re also getting security from Ethereum, so together you get dual security: one group of stakers certifies that the data is available, another group, i.e. stakers specific to Mantle, also certifies that the data is available, and only then will the smart contracts confirm that there is sufficient data availability. This is the design that makes EigenDA customizable.

5. How does Mantle’s use of EigenLayer change the security model of a rollup? What are the implications, if any, for security when using EigenDA with Mantle?

S: Rollups are basically modular chains that are built on top of EigenDA. So, depending on how many Ethereum stakers opt in or are involved, there is a kind of transfer of security, which is different from Ethereum. I would say this is the most important difference in terms of the fundamental security model.

You can also write data just to a special token committee that is dedicated solely to the rollup. In the typical case, the rollup only writes data to a data availability committee maintained by big stakers. Relative to that, the EigenLayer model is uniformly better at providing security, because we are able to get a portion of Ethereum stakers and validators to opt in. This requires nuanced technical discussion, but in general, these are the distinct dimensions along which one can think about security being transferred from Ethereum to EigenDA, and therefore to the rollup: how much is being staked, and how much decentralization is transferred, i.e. how many different node operators or stakers are participating in it.

6. Why would stakers want to restake their tokens with EigenLayer?

C: Fundamentally, the most evident reason is that if you are an ETH staker earning yield, you could expect to receive a portion of the fees. In the EigenDA example, where rollups have to pay restakers on EigenLayer via EigenDA, stakers can increase their yield. And what’s really interesting is that EigenDA is just one service on EigenLayer; over time, let’s say we have 100 different networks on top of EigenLayer. You get access to all of these networks and can choose what you want to do with your staked ETH. As opposed to existing staking models, where one obtains yield in a single way, you’ll be able to stack yield in many different ways. So it’s really about capital efficiency.

7. Could you tell us a bit more about some of those efficiencies? What specifically do you think might arise?

C: When I think about it from a strictly business angle, the reason a lot of the most influential web2 platforms have such outsized influence is that they either control all of the supply or aggregate all supply and demand into a single platform. A lot of what’s interesting about crypto is that you can build systems that perform, but are actually designed as public goods owned by many people instead of a single corporation. There’s still a lot of efficiency you can derive as a platform from aggregating either all the supply or all the demand in a market: if one layer does a good job, it becomes the one place to get that service, and if you’re a validator looking for opportunities to stake and validate, you only need to go to one place to find them. This makes it more efficient for validators to discover opportunities and services, and for services to distribute tasks to validators.

8. You mentioned dual staking. What will happen on the $BIT side for Mantle to enable that to happen?

S: The core idea of dual staking is that there are two different sets of token stakers — with Mantle, you’re looking at $BIT stakers. On EigenLayer, nodes restake ETH and are able to participate as validators in the EigenDA process. In the same way, there’ll be a special contract on which people can stake their $BIT; when they do so, they are participating in the EigenDA protocol and communicating their interest in providing data availability for Mantle. They then download and run the EigenDA software off-chain: either the staker runs it themselves, or they ask a delegate or operator to run it for them. The operator, running EigenDA, downloads portions of data and submits certificates. These certificates are then aggregated by the Mantle rollup sequencer, which puts an aggregate certificate onto the Ethereum contracts. When EigenDA’s contracts verify that both the $BIT token nodes and the Ethereum stakers have received their portions of the data, we get an attestation on Ethereum that the activity happened, and the rollup can move ahead with the new state update.
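The dual-quorum check at the end of that flow can be sketched as follows. This is a toy model, not EigenDA’s actual contract logic: the contract accepts a data availability attestation only if both quorums, ETH restakers and $BIT stakers, have signed for enough stake. The thresholds are illustrative assumptions.

```python
# Toy model of dual staking: data is attested as available only when BOTH
# the ETH-restaker quorum and the $BIT-staker quorum pass their thresholds.
from dataclasses import dataclass

@dataclass
class Quorum:
    total_stake: float
    signed_stake: float   # stake behind validators that certified the data
    threshold: float      # fraction of total stake that must sign

    def satisfied(self) -> bool:
        return self.signed_stake >= self.threshold * self.total_stake

def data_available(eth_quorum: Quorum, bit_quorum: Quorum) -> bool:
    """Both quorums must independently certify availability."""
    return eth_quorum.satisfied() and bit_quorum.satisfied()

# 70% of ETH restake and 80% of $BIT stake signed, against a 67% threshold:
ok = data_available(Quorum(1000.0, 700.0, 0.67), Quorum(500.0, 400.0, 0.67))
```

The AND between the two quorums is what "dual security" means in practice: either community failing to certify is enough to block the state update.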

9. Tell us a bit about some of the use cases that are going to be possible on Mantle with lower costs and this increased bandwidth for data availability?

J: We’re specifically aiming to target use cases that weren’t viable on traditional EVM networks. For example, with gaming, you can bring a lot more on-chain and have much lower transaction fees in the paradigm that we’re working on together. The same is true for social graphs and more. We have some specific features that leverage the Mantle chain that are going to do some pretty outstanding things with NFTs, things really different from what anybody has done historically, and that probably would not be economically viable in the existing paradigm. The gaming and social graph work is really just the beginning, though. There are all kinds of things we want to enable, and with Mantle it’s a pretty high priority to look at some of the best EIPs that haven’t yet been adopted at the L1 level, and to adopt some of them earlier so that we can enable better use cases. When we combine those with EigenDA or other forms of extensibility, we want to be a bleeding-edge EVM chain and L2 for Ethereum. That greatly improves the user experience and expands what blockchains are capable of doing.

10. Could you share a bit about some of the dApps that you're excited about, and what you think that'll do for builders or users working on Mantle?

J: For anybody who’s not aware, we actually ported EIP-3074 to an optimistic rollup framework. EIP-3074 is an EIP that manages transaction and contract capabilities in general, and it predates account abstraction. With it, existing wallets can get access to meta transactions and have their gas paid for them without having to migrate to new accounts. There are multiple existing use cases that currently require contract accounts or require people to migrate to account abstraction; EIP-3074 can enable a lot of those capabilities right away, so we’ve been interested in implementing it on Mantle. We’ve also been interested in adopting EIP-4337, which enables true, long-term account abstraction. It’s going to take longer to migrate existing users to it, but it is insanely powerful and a massive user-experience accomplishment. I think we’re positioned to really be on the bleeding edge of innovation.
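The sponsored "meta transaction" pattern EIP-3074 unlocks can be sketched conceptually like this. This is not EVM semantics: the real mechanism uses AUTH and AUTHCALL opcodes inside an invoker contract, and signature verification is ECDSA against the user’s address. Here an HMAC stands in for the signature, purely to show the flow where the user authorizes off-chain and a sponsor submits and pays.

```python
# Conceptual sketch of a gasless meta transaction for an existing EOA.
# HMAC is a stand-in for ECDSA; names and flow are hypothetical illustrations.
import hashlib
import hmac

USER_KEY = b"user-secret-key"  # stand-in for the user's EOA private key

def user_authorize(call_data: bytes) -> bytes:
    """User signs the intended call off-chain; no ETH needed for gas."""
    return hmac.new(USER_KEY, call_data, hashlib.sha256).digest()

def sponsor_submit(call_data: bytes, auth: bytes) -> str:
    """Sponsor verifies the authorization, pays gas, executes as the user.
    (Stand-in for what an EIP-3074 invoker contract does on-chain.)"""
    expected = hmac.new(USER_KEY, call_data, hashlib.sha256).digest()
    if not hmac.compare_digest(auth, expected):
        raise PermissionError("invalid authorization")
    return "executed as user, gas paid by sponsor"

result = sponsor_submit(b"transfer(...)", user_authorize(b"transfer(...)"))
```

The key property is that the user’s existing account never migrates anywhere: it only signs, while a separate party carries the transaction and its gas cost.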

Join the Mantle community!

