Reimagining Ethereum: the Dencun upgrade brings new choices for L2s

Author: Trustless Labs

On 18 January 2024, Trustless Labs hosted an AMA titled "With the Dencun upgrade, what is the choice of Ethereum L2s?" with ZKFair, EthStorage and Taiko. The speakers discussed three approaches to data availability (DA) after the Dencun upgrade, the differences between already-deployed projects and new projects, the potential challenges each project faces, and much more.

X Space: https://twitter.com/i/spaces/1vAxRvmgwBjxl

Host: Frank Bruno @Co-Founder of Bitboost

Speakers: Crypto White @Founder of Trustless Labs, Ark @Core Contributor of ZKFair, Zhou Qi @Founder of EthStorage, Dave @Head of DevRel of Taiko, Vincent @APAC DevRel of Scroll, David @VP of Polygon

Media Partners: Foresight News, TechFlow, Odaily, BlockBeats

There were over 500 active listeners, and a total of 7,200 listeners tuned in to the AMA. We would like to express massive thanks to everyone who joined. This blog post is a recap of the entire session, capturing its most insightful moments in case you missed it.

Introduction of Projects

Trustless Labs is a cutting-edge, research- and technology-driven incubator committed to advancing technological innovation. With a track record of supporting groundbreaking projects, Trustless Labs aims to be a catalyst for positive innovation in the blockchain and technology sectors, standing alongside innovators to drive progress in these fields. Trustless Labs launched the first phase of a $10 million Bitcoin ecosystem fund to incubate Bitcoin ecosystem projects. (Twitter: @TrustlessLabs)

ZKFair is the first community ZK-L2 based on Polygon CDK and Celestia DA, powered by Lumoz, a ZK-RaaS provider. ZKFair uses the stablecoin USDC as its gas token and delivers 100% EVM compatibility, exceptional performance, minimal fees, and robust security. (Twitter: @ZKFCommunity)

Taiko is building a decentralized, Ethereum-equivalent (Type-1) ZK-EVM and a general-purpose based ZK-Rollup to scale Ethereum in a manner that emulates it as closely as possible, both technologically and ideologically. Because Taiko is Ethereum-equivalent, all existing Ethereum tooling works out of the box and additional audits or code changes become redundant, meaning less overhead for developers. (Twitter: @taikoxyz)

EthStorage is a storage Rollup built on top of Ethereum. It provides a programmable, dynamic key-value store based on Ethereum's data availability, in particular EIP-4844 and Danksharding. EthStorage recently received two grants from the Ethereum Foundation's Ecosystem Support Program for data availability sampling research and layer 2 research. (Twitter: @EthStorage)

Q1 The Dencun upgrade brings three options for the data availability (DA) scheme. Which solution do the guests think should be chosen? The first is an on-chain storage solution such as calldata, which is real-time and secure but high-cost. The second is an off-chain storage solution, such as EthStorage, EigenLayer or Celestia. The third is the Proto-Danksharding introduced by Dencun, where blob data is cached by default for roughly 18 days, although a few nodes may choose to retain the records longer.

Crypto White @Founder of Trustless Labs

I think some Rollup projects have a very close relationship with the Ethereum Foundation, and they will definitely move their DA solution from the traditional on-chain approach, calldata, to Proto-Danksharding. A good example is Optimism: they are very close to the Ethereum Foundation, even though such a transition will require a lot of development work to support Proto-Danksharding. The benefit is also obvious: it will greatly reduce layer 2 gas costs, because Proto-Danksharding is much cheaper than calldata. For newer Rollup projects, the better choice is to use off-chain solutions such as EthStorage, Celestia or EigenLayer. A new project that simply follows Proto-Danksharding may find it hard to establish any distinctiveness or innovation compared to existing projects, making it difficult to compete with them. The market already offers various off-chain data availability solutions, each with its own features, and combining with these DA solutions will lead to new Rollup designs in the future. So for new Rollup projects, I believe an off-chain solution is the more viable option.

Dave @Head of Devrel of Taiko

We are building a Rollup on Ethereum. I think the distinction is a little bit semantic, but basically we are not relying on external DA. That said, I think these are all good paths to explore: any time we can push for more decentralized systems, we can use them to replace parts of the internet we use today. What matters is that the party choosing one of these solutions is the developer, not the user consuming the product. A lot of use cases can tolerate lower security guarantees and still be acceptable, and I think the alternative DA layers are great for that. They do not replace what a Rollup does, though. There are use cases for strict Rollups, which allow you to run a full node, sync the state, generate a Merkle proof, and exit the Rollup entirely on your own, relying only on the base layer. It depends on what the developer is trying to build, so I would not rule any of them out. I think there are probably some really cool applications we can build that post to alternative DA solutions like EthStorage or Celestia.

Zhou Qi @Founder of EthStorage

I think Proto-Danksharding is friendly to existing big projects like Optimism, which claim they can derive their security from Ethereum, because it reduces the cost of uploading data to Ethereum. For example, Optimism has an upgrade plan that already incorporates the EIP-4844 feature; it introduces a new data object called the binary large object (blob), and its implementation is available in their GitHub repository.

A couple of things I would like to remind you of. Proto-Danksharding is supposed to reduce cost compared to calldata, and even though there are different ways to calculate that cost, the underlying technology for uploading the data is still P2P broadcasting, or what we call gossip. The fundamental bandwidth that can be uploaded to the Ethereum network is still limited by the current P2P network. This means that if a lot of projects choose this way to upload data on top of Ethereum, it may still become congested, because right now the bandwidth is actually about the same as calldata. So it is possible that once the data market is full of participants and demand is strong, its cost advantage may gradually diminish and sometimes even approach that of calldata. This is something we want to watch closely, because it is a new experiment on Ethereum, introducing a new data market into the protocol. How it will work and how much its cost can be reduced are open questions, and I am excited to see how it goes.

Other DA solutions, such as Celestia, can already offer significantly lower costs. That might be an attractive option for projects requiring high-bandwidth uploads, such as artificial intelligence projects uploading gigabytes of AI models, which are super expensive to post on-chain. The choice depends on the market and on application requirements, which we anticipate will be sorted out in the future. Full Danksharding is expected to offer a fixed capacity of roughly 32 megabytes per block, translating to a couple of megabytes per second, bringing substantial improvements in underlying technology and performance.
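
To put those bandwidth figures in rough perspective, here is a minimal back-of-envelope sketch. It assumes the mainnet parameters at the time of Dencun (128 KiB blobs, a target of 3 and a maximum of 6 blobs per block, 12-second slots) and the commonly cited full-Danksharding target of about 32 MB per block; the numbers are illustrative rather than spec-authoritative.

```python
# Rough blob throughput estimate under EIP-4844 vs. the full-Danksharding target.
BLOB_SIZE_BYTES = 4096 * 32   # 4096 field elements * 32 bytes = 128 KiB per blob
SLOT_SECONDS = 12             # one block per 12-second slot

def throughput_mb_per_s(blobs_per_block: int) -> float:
    """Sustained data throughput if every slot carries `blobs_per_block` blobs."""
    return blobs_per_block * BLOB_SIZE_BYTES / (1024 * 1024) / SLOT_SECONDS

print(f"EIP-4844 target (3 blobs/block): {throughput_mb_per_s(3):.3f} MB/s")
print(f"EIP-4844 max (6 blobs/block):    {throughput_mb_per_s(6):.3f} MB/s")
print(f"Full Danksharding (~256 blobs):  {throughput_mb_per_s(256):.2f} MB/s")
```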

The second issue with Proto-Danksharding is that it removes the data after a couple of weeks, for example 18 days, or sometimes even more aggressively. Eighteen days in the current spec means that, unlike the current usage of calldata, where Arbitrum and Optimism claim they can fully derive all layer 2 state from layer 1, once the data is discarded after eighteen days, Optimism and these projects will no longer be able to derive the layer 2 state from its genesis using layer 1 alone. They must rely on some third party, either to retrieve the historical blobs or to provide a snapshot of the recent state. This is something we observe and are trying to solve with our research. The basic idea is to build a modular storage layer on top of Ethereum so that we can store those blobs for much longer, whether a couple of months, years, or even longer. In that case, we are helping those layer 2 projects preserve their security guarantees: they can still derive their layer 2 state even though Ethereum nodes may discard data as the protocol allows. In conclusion, the emergence of new technologies offering diverse data options is exciting. There are still a lot of challenges, including the cost of storing historical data, how to verify that storage off-chain, and how to distribute the corresponding incentives to storage providers. In the short term, there will be numerous excellent options for projects to choose from based on their needs.
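
As a concrete illustration of the pruning problem, here is a minimal sketch, in Python with a placeholder beacon node URL, of fetching blob sidecars through the standard Beacon API. Once a node has pruned blobs older than the retention window, the request returns nothing useful, and a rollup node replaying history must fall back to an external archive or a storage layer of the kind described above.

```python
import requests

# Hypothetical local consensus-layer client; replace with your own endpoint.
BEACON_URL = "http://localhost:5052"

def fetch_blob_sidecars(block_id: str):
    """Query the standard Beacon API for the blob sidecars of a block/slot."""
    resp = requests.get(f"{BEACON_URL}/eth/v1/beacon/blob_sidecars/{block_id}")
    if resp.status_code == 404:
        # Block unknown to this node, or its blobs have already been pruned.
        return None
    resp.raise_for_status()
    return resp.json().get("data", [])

blobs = fetch_blob_sidecars("8500000")  # arbitrary example slot
if not blobs:
    print("Blobs not available locally; fall back to an external storage layer")
else:
    print(f"Retrieved {len(blobs)} blob sidecar(s) from the local node")
```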

Ark @Core Contributor of ZKFair

I think it is a trade-off between trust and cost. Calldata is the most trustworthy option we have, but it is also the most expensive. If we move to EIP-4844, it can reduce costs a lot, but it is not as trustworthy because the data is not fully persistent. Beyond that, we can move to third-party DA providers, which offer their own trade-off between trustworthiness and cost depending on the type of project. The choice among these options depends on the specific use case. For emerging or startup projects, cost reduction might be the priority, since their primary concern is survival rather than establishing community trust right away. I will not discuss machine costs or labor expenses but only focus on gas costs: several thousand dollars per day can be expensive for some startup projects, and in the initial stage there will not be many users on the chain, so they can choose off-chain solutions like Celestia or the CDK. As the community grows, there may be an opportunity to enhance the trustworthiness of the DA. That would be the right time to migrate from off-chain data to more reliable solutions. Of course, this transition may involve some technical challenges, such as understanding how to use these solutions and integrating their workflows effectively.
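
For a rough sense of where "several thousand dollars per day" comes from, here is a minimal calldata cost sketch. All inputs (daily data volume, gas price, ETH price) are illustrative assumptions, and it ignores blob pricing, compression, and the zero-byte calldata discount.

```python
# Rough daily calldata cost for a small rollup posting batch data to L1.
NONZERO_BYTE_GAS = 16  # EIP-2028 calldata cost per non-zero byte

def daily_calldata_cost_usd(mb_per_day: float,
                            gas_price_gwei: float,
                            eth_price_usd: float) -> float:
    data_bytes = mb_per_day * 1024 * 1024
    gas_used = data_bytes * NONZERO_BYTE_GAS       # worst case: all non-zero bytes
    eth_spent = gas_used * gas_price_gwei * 1e-9   # gwei -> ETH
    return eth_spent * eth_price_usd

# e.g. 5 MB of batch data per day at 30 gwei and $2,500/ETH: roughly $6,300/day
print(f"${daily_calldata_cost_usd(5, 30, 2500):,.0f} per day")
```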

Q2 If projects choose to migrate their data, what challenges will off-chain storage and the previous on-chain storage face, respectively? How long will the migration take, and how should it be planned?

Crypto White @Founder of Trustless Labs

It is not easy for ZK-Rollups to support Proto-Danksharding, because Proto-Danksharding uses KZG commitments, which are not very compatible with some ZK proof algorithms, such as those in the ZK Stack. I think it is quite compatible with the Plonk algorithm, but for others the KZG commitment may not be friendly to the ZKP algorithm. So some ZK-Rollup projects may not have strong motivation to use Proto-Danksharding, because supporting it would cost a lot of development effort. For Optimistic Rollup projects, I think they only need to make some minor changes to their code to support Proto-Danksharding, so it will not be very hard for them to migrate their DA from calldata to Proto-Danksharding.
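
For context, the relevant primitive is roughly the following. A blob is interpreted as a polynomial p(X), and EIP-4844's point evaluation precompile checks a KZG opening proof that p(z) = y against the blob's commitment. The sketch below uses standard KZG notation and omits the versioned-hash wrapping; proof systems whose native commitments are pairing-based (Plonk-style with KZG) can connect to this check fairly directly, while hash-based systems generally need an additional proof of equivalence.

```latex
% Simplified KZG commitment and point-evaluation check (trusted setup \{[\tau^i]_1, [\tau]_2\}).
\[
C = [p(\tau)]_1, \qquad
q(X) = \frac{p(X) - y}{X - z}, \qquad
\pi = [q(\tau)]_1
\]
\[
e\bigl(\pi,\ [\tau]_2 - [z]_2\bigr) \;=\; e\bigl(C - [y]_1,\ G_2\bigr)
\quad\Longleftrightarrow\quad p(z) = y
\]
```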

Dave @Head of Devrel of Taiko

For us, we do not plan to migrate. We plan to ship with EIP-4844 enabled by default on our mainnet, so once we are able to test it, we should be good to go. I do think some other interesting to-do items were mentioned by Qi as well: we need to find a solution to store at least the latest block data somewhere, so that users can still compute the state back to genesis. That is still a pending item we are working on, particularly for the most recent block data. In the longer term, though, I think it should be a pretty smooth deployment and utilization of EIP-4844.

Zhou Qi @Founder of EthStorage

From my observation of the progress of Optimistic Rollups, Optimism, for example, is running ahead on the integration of EIP-4844, and KZG commitments are generally friendlier to Optimistic fraud-proof systems. For us, we have supported EIP-4844 from the beginning. Initially we were using calldata, but it proved to be extremely expensive, so we decided to support EIP-4844 from the start as the more cost-effective solution. This was implemented when we launched our internal network, and we contributed to the EIP-4844 testnet. Currently we are stress testing the data path, and there are some minor issues that still need to be addressed. For example, the new opcode is not yet supported in Solidity, so we have already released our own library. Even though it is not perfect, it should be useful for any layer 2: if you would like to test EIP-4844 at an early stage, you will need to retrieve the blob hash and verify it either in an optimistic way or with ZK validity proofs, and we have tooling that can help. The basic usage of EIP-4844 is already quite stable, so I am expecting more commitment to it, especially from layer 2 teams and other projects building on this new data feature on top of Ethereum.
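
As a small illustration of what a contract actually sees, EIP-4844 exposes each blob on-chain only as a "versioned hash" of its KZG commitment (retrievable via the new BLOBHASH opcode). The mapping is defined in the EIP as 0x01 followed by the last 31 bytes of the SHA-256 of the commitment; the 48-byte commitment below is a placeholder used only to show the shape of the computation.

```python
import hashlib

VERSIONED_HASH_VERSION_KZG = b"\x01"

def kzg_to_versioned_hash(commitment: bytes) -> bytes:
    """versioned_hash = 0x01 || sha256(commitment)[1:], per the EIP-4844 spec."""
    assert len(commitment) == 48, "KZG commitments are 48-byte BLS12-381 G1 points"
    return VERSIONED_HASH_VERSION_KZG + hashlib.sha256(commitment).digest()[1:]

dummy_commitment = bytes(48)  # placeholder, not a real commitment
print(kzg_to_versioned_hash(dummy_commitment).hex())
```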

Ark @Core Contributor of ZKFair

I think the main challenge developers might face is getting familiar with the lower-level codebase and the new workflow of the DA solution. There are some challenges during migration, because we need to submit the historical data to the new DA layer and also guarantee the data remains stable and correct. We need to study which parts of the code logic should be modified and which parts should be protected, so that the data does not get corrupted. I guess it is not that difficult in terms of structure or design, but it is difficult to keep everything correct and safe, and the migration might take several months. It might be more difficult to migrate from off-chain DA to EIP-4844, because it is new technology and not everyone is very familiar with it yet.

Q3 How should we address the definition of layer 2? From the official Ethereum perspective, projects that choose Proto-Danksharding, or migrate from off-chain DA to Proto-Danksharding, are layer 2 projects, while projects posting data to Celestia are not. How should projects perceive this and choose?

Dave @Head of Devrel of Taiko

I think there are various terms to describe all these different solutions, such as Rollups, Validiums, chains posting to Celestia, and so on. Layer 2 can be a good umbrella term for that, but I think it is still important to make clear that if you use an alternative DA layer, you are not really considered a Rollup. So I think the point is to keep the definitions separate. Whether we classify a Validium as layer 2 or not is fine either way, as long as everyone can distinguish it from a Rollup.

Ark @Core Contributor of ZKFair

I think Ethereum has already given a definition for projects that use off-chain data: they call it a Validium. I think it can still be a kind of layer 2 project, but it may be less trustworthy. It depends on the community and the users, whether they consider it trustworthy or not. For startup projects it is okay to choose off-chain storage, and it is okay for us to call them layer 2 projects, but the Ethereum team may consider them Validium projects.

Zhou Qi @Founder of EthStorage

To answer this question, we need to understand what the definition of layer 2 is. The core of being a Rollup is an unconditional security guarantee, which means relying only on layer 1 to secure layer 2. But even with the current layer 2 definition, there are still some missing pieces. Blob data will be discarded after a couple of weeks, and even calldata is planned to be discarded after about a year under history-expiry proposals. At that point, layer 2s, including Optimism, will lose the ability to derive their state from layer 1 alone. Do we still call them layer 2 in that case? In the strict sense, a layer 2 should derive its security entirely from layer 1; we think these projects can still derive most of their security from Ethereum, so in a more general sense we can still treat them as layer 2. It does mean that the "unconditional" guarantee, in Vitalik's words, now carries assumptions, which introduce additional trust: some party stores the historical blocks, or the layer 2 is assumed to maintain some recent state, so that the recent state plus the recent blobs still stored on the network can be used. For example, being able to recover the latest state within one week greatly enhances the system's security, and this capability is much more effective than combining various proofs, specifically when it comes to ensuring the correctness of layer 2 computations. However, it is important to note that, under the current definition of layer 2, this approach does introduce a very small amount of security or trust risk.

So, I would prefer a more general definition of layer 2, even including Validiums that use third-party DA. Let me pose a question: what if, in an extreme case, a layer 2 built on an external DA layer becomes so large and significant that the security requirements imposed on that DA layer approach those of Ethereum itself? In that sense, the security level Celestia can offer would not be so different. I hope the definition of layer 2 can stay open: there should be different options for layer 2, each identifying clearly what kind of DA solution it uses. As long as projects can serve a large number of users and fully utilize the technology to secure their data, they should be able to demonstrate and verify all security aspects on-chain. That is something about the future of layer 2 that I find really amazing.

