Does MegaETH work?

Introducing MegaETH

* Thank you to @0xsmac, @whoiskevin, and @0xgodking for the feedback. *

I’ve been scrolling twitter too much, falling back into a bad habit of bookmarking everything, opening up 150 tabs, and telling myself I’ll read these things I’m saving to my computer. Of course it never happens.

I figured a better use of my time would be to write about them on Substack and force myself to get some sort of writing done as a warm-up, as these usually help me get in the mindset to finish longer reports for work. I don’t have any particular end goal for these, and I would also like to mention that I was not incentivized in any way to write about MegaETH. I’m doing this entirely for fun, so please enjoy my unbiased writing as I attempt to make sense of this project.

As I’m writing this, there is absolutely zero idea in my head of what I should write about, except for maybe MegaETH. Will I write about other projects? I’m not sure.

The plan was to write about MegaETH in a previous post, but I gave up because I was being lazy, deciding my time was better spent elsewhere until they got a website. But it’s a new day and MegaETH officially has a website, so I guess it’s time for me to look at what they’re doing in more detail.

This will mostly function as my review / personal thoughts on the MegaETH whitepaper and the problem areas it highlights. Regardless of what this writing turns into, hopefully you learn something new today.

via MegaETH website

MegaETH’s website is cool because it has this mechanoid bunny on it and the color scheme is easy on the eyes. Before this, there was only a Github - a website makes things much simpler.

I’d looked through the MegaETH Github and understood they were working on some type of execution layer, but I’ll be honest and say that maybe this assumption was wrong. The truth is that I haven’t looked into MegaETH as much as I should have, and now I hear they were the talk of the town at EthCC.

I need to stay on top of things and make sure I’m looking at the same tech as the cool kids.

The nitty gritty

The MegaETH whitepaper says they’re an EVM-compatible, real-time blockchain built to bring web2-like performance to crypto. Their purpose is to elevate the experience of using an Ethereum L2 by offering stat boosts like over one hundred thousand transactions per second, sub-millisecond block times, and one-cent transaction fees.

Their whitepaper highlights the growing number of L2s (discussed in a previous post of mine, though the count has climbed to well over 50 with many more in “active development”) and their lack of PMF in the crypto world. Ethereum and Solana are the most popular blockchains, with users gravitating to one or the other and only venturing elsewhere if there’s a token to farm.

I don’t think having too many L2s is a bad thing, and even if it isn’t necessarily a good thing either, I do agree that we need to take a step back and evaluate why we as an industry are creating so many of them.

Occam’s razor would say that VCs enjoy the feeling of knowing there’s a real possibility they can kingmake the next L2 (or L1) and get a rush from investing in many of these projects, but there’s a small part of me that believes many of crypto’s developers actually want more L2s. Both sides might be right, though it isn’t important to come to a conclusion on which side is more correct; it’s best to look at the current infra ecosystem objectively and work with what we have.

via MegaETH whitepaper

The L2s currently available to us are performant, but not performant enough. MegaETH’s whitepaper points out that even opBNB’s (comparatively) high 100 MGas/s only translates to something like 650 Uniswap swaps per second, while modern web2 infrastructure is capable of handling on the order of one million transactions per second.
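Just to sanity-check that 650 figure, here’s a quick back-of-the-envelope in Python. The ~150k gas per Uniswap swap is my own ballpark assumption, not a number from the whitepaper.

```python
# Back-of-the-envelope: gas throughput -> swaps per second.
GAS_PER_SECOND = 100_000_000   # opBNB's ~100 MGas/s cited above
GAS_PER_SWAP = 150_000         # assumed average gas cost of one Uniswap swap (my ballpark)

swaps_per_second = GAS_PER_SECOND / GAS_PER_SWAP
print(f"~{swaps_per_second:.0f} swaps per second")   # ~667, same ballpark as the ~650 cited
```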

This isn’t news. We’ve known that despite the strengths that come from decentralization and permissionless payments, crypto is still quite slow. If a game development studio like Blizzard wanted to put Overwatch on-chain, it couldn’t - we’d need far higher tick rates to offer real-time PvP and the other features that web2 games provide without a second thought.

One of MegaETH’s solutions to the L2 conundrum comes from delegating security and censorship resistance to Ethereum and EigenDA respectively, transforming MegaETH into the world’s most performant L2 without any of the tradeoffs.

L1s typically need homogeneous nodes - nodes that perform the same tasks without room for specialization. In this case, specialization refers to jobs like sequencing or proving. L2s get around this and allow for heterogeneous nodes, with tasks separated out to improve scalability or offload some of these burdens. This is seen through the growing popularity of shared sequencers (like Astria or Espresso) and the rise of specialized zk proving services (like Succinct or Axiom).

“Creating a real-time blockchain involves much more than taking an off-the-shelf Ethereum execution client and ramping up the hardware of the sequencers. For instance, our performance experiments show that even with a powerful server equipped with 512GB RAM, Reth can only achieve about 1000 TPS, which translates to roughly 100 MGas/s, in a live sync setup on recent Ethereum blocks.”

MegaETH extends this compartmentalization by abstracting away transaction execution from full nodes, using only one “active” sequencer to remove consensus overhead in typical transaction execution. “Most full nodes receive state diffs from this sequencer via a p2p network and apply the diffs directly to update the local states. Notably, they don't re-execute the transactions; instead, they validate blocks indirectly using proofs provided by the provers.”

I haven’t read much analysis on how MegaETH stands out other than comments that “it’s fast” or “it’s cheap”, so I’ll try to break down its architecture and compare it to other L2s.

MegaETH uses EigenDA to handle data availability, which is pretty standard these days. Rollup-as-a-Service (RaaS) platforms like Conduit let you choose Celestia, EigenDA, or even Ethereum (if you want) as your rollup’s data availability provider. The differences between these DA options are pretty technical and not entirely relevant here; it often seems like the decision to choose one over another is based more on vibes than anything else.

The sequencer orders and eventually executes transactions, but is also responsible for publishing blocks, witnesses, and state diffs. A witness in the context of an L2 is a piece of additional data used by provers to validate a sequencer’s blocks.

State diffs are changes to a blockchain’s state, which could be basically anything that happens on-chain. Since a blockchain’s job is to constantly append and verify new information added to its state, these state diffs are what allow full nodes to confirm transactions without re-executing them.
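To make that a little more concrete, here’s a minimal sketch of what “apply the diff instead of re-executing” could look like. The dict-based state and the field layout are my own simplification, not MegaETH’s actual wire format.

```python
# Minimal sketch: a full node applying a state diff instead of re-executing transactions.
# The structures below are illustrative only, not MegaETH's actual format.

local_state = {
    ("0xalice", "balance"): 100,
    ("0xbob", "balance"): 50,
}

# A "state diff" as published by the sequencer: just the keys that changed and their new values.
state_diff = {
    ("0xalice", "balance"): 90,
    ("0xbob", "balance"): 60,
}

def apply_diff(state: dict, diff: dict) -> None:
    """Overwrite changed keys directly; no transaction re-execution needed."""
    state.update(diff)

apply_diff(local_state, state_diff)
# Validity is checked separately against the prover's proof, not by re-running the transfer.
```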

Provers run on specialized hardware and compute cryptographic proofs that validate a block’s contents, which is what lets full nodes avoid re-execution. There are zero-knowledge proofs and fraud proofs (the optimistic-rollup flavor), but the distinction doesn’t matter right now.

Putting all of this together is the task of the full node network, which acts as a type of aggregator between provers, sequencers, and EigenDA to (hopefully) make the MegaETH magic a reality.

via MegaETH whitepaper

MegaETH’s design pushes back on a fundamental misconception about the EVM. Even though L2s often blame the EVM for their poor performance (throughput), it’s been found that revm can achieve around 14,000 TPS. So if it isn’t the EVM, what’s causing this?

Present issues with scalability

Three of the major EVM inefficiencies leading to performance bottlenecks are a lack of parallel execution, interpreter overhead, and high state access latency.

MegaETH can keep an entire blockchain’s state in RAM thanks to an abundance of it on the sequencer - for Ethereum, the full state is roughly 100GB. “This setup significantly accelerates state access by eliminating SSD read latency. For example, in our historical sync experiment above, the sload operation accounts for only 8.8% of the runtime.”

I don’t know much about SSD read latency, but the general idea seems to be that certain opcodes are more expensive than others, and that cost can largely be eliminated if you throw more RAM at the problem. Does this work at scale? I’m not sure, but I’ll take it as fact for the purpose of this post. It reminds me of the debate over whether you can scale LLMs to superintelligence by simply tossing more compute and more dollars at the problem until something sticks. Is that all it takes? In every industry, is more compute or RAM the bottleneck? I’m not sure.
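Here’s roughly how I think about the RAM argument, as a hedged sketch. The latency and per-transaction numbers are generic hardware ballparks I’m assuming, not MegaETH’s measurements.

```python
# Rough model of state-access cost when state lives in RAM vs. on an SSD.
# Latencies are generic ballparks (assumptions), not numbers from the whitepaper.
RAM_READ_SECONDS = 100e-9      # ~100 ns per in-memory lookup
SSD_READ_SECONDS = 100e-6      # ~100 us per random SSD read

SLOADS_PER_TX = 5              # assumed storage reads per "average" transaction
TARGET_TPS = 100_000

for name, latency in [("RAM", RAM_READ_SECONDS), ("SSD", SSD_READ_SECONDS)]:
    seconds_spent_on_sload = TARGET_TPS * SLOADS_PER_TX * latency
    print(f"{name}: {seconds_spent_on_sload:.2f} s of state reads per 1 s of transactions")
# SSD: 50.00 s of reads needed for every 1 s of transactions -> impossible serially.
# RAM: 0.05 s -> plenty of headroom, which is the whitepaper's point about keeping state in memory.
```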

I’m still skeptical that a chain can nail throughput, transaction cost, and latency simultaneously, but I’m trying to be an active learner on this.

Another thing I should mention is I don’t want to be overly critical in these shorter substack posts. The idea is never to endorse one protocol more than the other or even highlight them in the first place - I only do these so I can get a better understanding and help anyone that reads gain the same understanding simultaneously. My guess is that these posts are somewhat entertaining and worth pursuing, if only because the first few I tried have gotten around twenty likes each. If anything, it’s decent writing practice.

via MegaETH whitepaper

You’re probably familiar with the parallel EVM trend, but there’s supposedly a catch to it. Even though progress has been made in porting the Block-STM algorithm to the EVM, it’s said that “the actual speedup achievable in production is inherently limited by the parallelism available in the workloads.” This means that even if parallel EVMs are released and eventually deployed to mainnet EVM chains, the tech is limited by the underlying reality that many transactions can’t actually be executed in parallel.

If transaction B depends on the outcome of transaction A, you can’t execute both simultaneously. If 50% of a block’s transactions are interdependent like this, then parallel execution isn’t as big an improvement as advertised. This is a bit oversimplified (and maybe even a little incorrect), but I think it gets the point across.
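To illustrate the dependency point, here’s a toy scheduler that packs transactions into parallel rounds. The conflict model is made up for illustration and is much cruder than Block-STM.

```python
# Toy model: parallel speedup is capped by dependency chains between transactions.
# This is a deliberately crude illustration, not how Block-STM actually works.

def parallel_rounds(deps: dict[int, set[int]]) -> int:
    """Greedily schedule txs into rounds; a tx runs only after everything it depends on."""
    scheduled, rounds = set(), 0
    while len(scheduled) < len(deps):
        ready = {tx for tx, d in deps.items() if tx not in scheduled and d <= scheduled}
        scheduled |= ready
        rounds += 1
    return rounds

# 8 transactions: the even ones form a dependency chain (0 -> 2 -> 4 -> 6),
# while the odd ones are fully independent.
deps = {0: set(), 1: set(), 2: {0}, 3: set(), 4: {2}, 5: set(), 6: {4}, 7: set()}
print(parallel_rounds(deps))                          # 4 rounds: the chain sets the floor
print(parallel_rounds({i: set() for i in range(8)}))  # 1 round if nothing conflicts
```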

A couple of other points are highlighted, most notably the gap between revm and native execution: revm is still one to two orders of magnitude slower than native code, so on its own it isn’t enough as a standalone VM environment. The whitepaper also finds that there currently aren’t enough compute-intensive contracts for that gap to be the main bottleneck. “For instance, we profiled the time spent on each opcode during historical sync and discovered that approximately 50% of the time in revm is spent on "host" and "system" opcodes, such as keccak256, sload, and sstore, which are already implemented in Rust.”

Opcodes are kind of boring; I looked them up, as the last time I’d read about them in depth was in this Paradigm research post, but they don’t pique my interest. I’m not much of a programmer, and even when I do write actual code, I’m usually not super interested in understanding how it all works. Opcodes handle data storage and retrieval, arithmetic, function calls, and control flow operations. I don’t know what a control flow operation is either, so let’s move on. The TLDR is that revm is still slower than what’s necessary and the scalability problem has not yet been solved at the VM level. This presents a case for scaling the EVM itself, or that’s how I understood it.

via MegaETH whitepaper

On the state sync side, MegaETH found more problems.

State sync is described in brief as the process that brings full nodes up to speed with the sequencer’s activity, a task that can very quickly eat through the bandwidth of a project like MegaETH. An example is used to illustrate this: if the goal is syncing 100,000 ERC20 transfers per second, that would consume roughly 152.6 Mbps of bandwidth. This 152.6 Mbps is said to exceed the bandwidth MegaETH assumes full nodes will have available, essentially an impossible task as stated.

This only accounts for simple token transfers, leaving out the higher consumption of more complex transactions, a likely scenario given how diverse on-chain activity is in the real world. MegaETH writes that a Uniswap swap modifies eight storage slots (as opposed to an ERC20 transfer, which modifies just three), bringing the total bandwidth consumption to 476.1 Mbps, an even less feasible target.
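Here’s the shape of that bandwidth math as I understand it. The per-transaction diff sizes are assumptions I’m plugging in to land near the whitepaper’s figures, not numbers I’ve verified against its encoding.

```python
# Sketch of the state-sync bandwidth math. The per-transaction diff sizes below are
# assumptions for illustration; the whitepaper's 152.6 / 476.1 Mbps figures come from
# its own encoding, which I haven't reproduced exactly.

def sync_bandwidth_mbps(tps: int, diff_bytes_per_tx: int) -> float:
    """Bandwidth needed to stream state diffs to full nodes, in megabits per second."""
    return tps * diff_bytes_per_tx * 8 / 1_000_000

print(sync_bandwidth_mbps(100_000, 200))   # ~160 Mbps for simple ERC20 transfers (assumed ~200 B/diff)
print(sync_bandwidth_mbps(100_000, 600))   # ~480 Mbps for swap-heavy blocks (assumed ~600 B/diff)
```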

Another problem with building a highly performant, 100k TPS blockchain lies in updating the chain’s state root, which is needed to serve storage proofs to light clients. Even with node specialization, full nodes still have to keep the state root in sync with the sequencer nodes. Using the previous example of syncing 100,000 ERC20 transfers per second, this works out to roughly 300,000 keys updated per second.

Ethereum uses MPT (Merkle Patricia Trie) data structures to compute state after each block. To update 300,000 keys per second, Ethereum (as an example) would need to handle what the whitepaper estimates at 6 million non-cached database reads per second, far beyond the capabilities of any consumer SSD today. MegaETH writes that this estimate doesn’t even include write operations (or heavier transactions like Uniswap swaps), making the challenge more of a Sisyphean effort than the uphill battle many of us might prefer.
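A rough sketch of why that trie math gets ugly: the reads-per-key figure here is backed out of the whitepaper’s totals (6 million reads for 300,000 keys), and the SSD IOPS number is my own optimistic assumption.

```python
# Sketch of why updating the Merkle Patricia Trie becomes the bottleneck.
KEYS_UPDATED_PER_SECOND = 300_000
NON_CACHED_READS_PER_KEY = 20          # trie-node reads per key update, implied by ~6M / 300k
CONSUMER_SSD_IOPS = 1_000_000          # optimistic random-read IOPS for a high-end consumer SSD

reads_needed = KEYS_UPDATED_PER_SECOND * NON_CACHED_READS_PER_KEY
print(reads_needed)                      # 6,000,000 reads per second
print(reads_needed / CONSUMER_SSD_IOPS)  # ~6x beyond that budget, before counting any writes
```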

Tacking on yet another problem, we arrive at the block gas limit. A blockchain’s speed is effectively capped by its block gas limit, a self-imposed barrier designed to increase the chain’s security and reliability. “A rule of thumb in setting the block gas limit is that it must ensure that any block within this limit can be reliably processed within the block time.” The whitepaper describes the block gas limit as a sort of “throttling mechanism” that ensures nodes can reliably keep up, under the assumption that they meet the minimum hardware requirements.
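Put differently, the gas limit and block time together set a hard ceiling on throughput. Here’s that relationship with Ethereum-mainnet-ish numbers plugged in purely for illustration.

```python
# The block gas limit caps throughput: a chain can never exceed
# gas_limit / (gas_per_tx * block_time) transactions per second.
# Numbers below are Ethereum-mainnet ballparks, used purely for illustration.
GAS_LIMIT = 30_000_000        # gas per block
BLOCK_TIME_SECONDS = 12
GAS_PER_TRANSFER = 21_000     # cheapest possible transaction (simple ETH transfer)

max_tps = GAS_LIMIT / (GAS_PER_TRANSFER * BLOCK_TIME_SECONDS)
print(f"~{max_tps:.0f} TPS ceiling")   # ~119 TPS, even for the simplest transactions
```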

It’s also said that the block gas limit is chosen conservatively to protect against worst-case scenarios, another example of security trumping scalability across modern blockchain architectures. The idea that scalability is more important than security falls apart when you consider just how much money is moved across blockchains every day, and the nuclear winter that would (hypothetically) occur if any of this money was lost for the sake of slightly heightened scalability.

Blockchains might not be excellent at attracting quality consumer applications, but they’re great for permissionless peer-to-peer payments. No one wants to mess that up.

It’s then mentioned that parallel EVM speeds are workload-dependent; their performance is bottlenecked by “long dependency chains” that limit how much parallelism can actually speed things up. The proposed fix is multidimensional gas pricing (MegaETH points to Solana’s local fee markets), which is still difficult to implement. I’m not sure if there’s an EIP for this or how it might work on the EVM, but I guess technically it’s a solution.
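I’ll hedge this heavily since I don’t know how MegaETH would actually implement it, but the basic idea of multidimensional gas pricing can be sketched as charging each resource separately, each with its own base fee. Everything below (resource names, prices, usage numbers) is made up for illustration.

```python
# Toy sketch of multidimensional gas pricing: each resource gets its own base fee,
# so a storage-heavy transaction pays more on the storage axis without making pure
# compute more expensive. Resource names and fees are invented for illustration.

base_fee_per_unit = {"compute": 2, "storage": 10, "bandwidth": 1}   # in some arbitrary fee unit

def tx_fee(usage: dict[str, int]) -> int:
    """Charge each resource dimension independently and sum the result."""
    return sum(usage.get(dim, 0) * price for dim, price in base_fee_per_unit.items())

swap_like_tx = {"compute": 50_000, "storage": 8, "bandwidth": 600}
transfer_like_tx = {"compute": 21_000, "storage": 3, "bandwidth": 200}

print(tx_fee(swap_like_tx))       # storage-heavy txs pay more on the storage axis
print(tx_fee(transfer_like_tx))
```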

“Finally, users don't interact directly with the sequencer nodes, and most do not run full nodes at home. Thus, the actual user experience of a blockchain largely depend on its supporting infrastructure, such as RPC nodes and indexers. No matter how fast a real-time blockchain runs, it won't matter if RPC nodes cannot efficiently handle large volumes of read requests during peak times, quickly propagate transactions to sequencer nodes, or if indexers cannot update application views fast enough to keep up with the chain.”

That’s a large wall of text, but a very important one. We’re all dependent on Infura, Alchemy, QuickNode - you name ’em, they’re more than likely running the infrastructure powering all of our transactions. The easiest explanation of this dependency comes from experience: if you’ve ever tried to claim an L2 airdrop within the first 2-3 hours of launch, you’ll know how hard it is for an RPC to handle that congestion.

Closing thoughts

That was a lot of words just to say that a project like MegaETH needs to jump through many, many hoops to get where it wants to be. A Substack post from the team says they’ve achieved a highly performant devnet through a heterogeneous blockchain architecture and a hyper-optimized EVM execution environment. “Today, MegaETH has a live and performant devnet and is making steady progress towards becoming the fastest possible blockchain, limited only by hardware.”

MegaETH’s Github lists a few of these major improvements, including but not limited to: an EVM bytecode → native code compiler, an execution engine for large-RAM sequencer nodes, and efficient concurrency control protocols for parallel EVMs. One component that’s already publicly available is evmone, and while I’m not proficient enough at coding to know how it works at its core, I’ve tried my best to figure it out.

evmone is a C++ implementation of the EVM that plugs into the EVMC API so it can be used as an execution module by Ethereum clients. Its docs mention some other features (that I don’t fully understand) like a dual interpreter approach (baseline and advanced), as well as the intx and ethash libraries. Altogether, evmone opens up the possibility of faster transaction processing (via faster smart contract execution), greater development flexibility, and better scalability (assuming a faster EVM implementation can process more transactions per block).

There are some other repositories included but most of these are pretty standard and not specifically relevant to MegaETH (reth, geth). I think I’ve done a decent job of working through the whitepaper, so now I leave it up to anyone that reads this - what are the next steps for MegaETH? Is it really possible to scale that efficiently? How soon will any of this be possible?

As a blockchain user I’m excited to see if this works out, though I’m not holding my breath. I’ve spent too much money on mainnet transaction fees and it’s time for a change, but this change still feels increasingly difficult and unlikely to occur anytime soon.

Despite all of this talk of architectural improvements and scalability, there’s still a massive need for inter-rollup shared liquidity and cross-chain tooling to make the experience on rollup A equivalent to that of rollup B. We’re not there yet, but maybe by 2037 everyone will be sitting back and reminiscing on when we were so obsessed with “fixing” the scalability problem.

Hopefully this gave you some context on what MegaETH is attempting to solve and how they’ll be going about it. Thanks for reading.
