Disaggregation--The First Principle of Blockchain Optimization

Over the past two weeks, I've spent a lot of time studying the latest chain-level innovations, and ‘Disaggregation’ is the first principle guiding them.

What is Disaggregation?

The term comes from ‘modular blockchain’ (my last post explains why I say disaggregation instead of modular blockchain), but it differs from its origin in a few ways:

  • Modular describes a blockchain that outsources at least one of the three core components (execution, settlement, or data availability) to another layer of the stack.
  • Disaggregation can happen even within a single layer: for example, executing transactions according to their type or category, even though they all remain in the same layer.

The logic behind it is easy to understand: maximize the efficiency of each unit by letting each unit perform the simplest possible operation.

Disaggregation in the latest chain-level innovations

Ethereum--Danksharding

During the 8th EF Research AMA, Justin said ‘Ethereum is becoming increasingly modular’.

Screenshot of Justin's explanation on Reddit

But notice that Justin's ‘modular’ goes well beyond the ‘modular blockchain’ most people are discussing today:

  • ‘Proposer versus Builder’ separation is not on the roadmap of any project other than Danksharding;
  • ‘Prover versus Verifier’ separation is also not on the roadmap of any project other than Danksharding, even though this design would be easier for other projects to adopt.

The above two points fit perfectly with the definition of ‘Disaggregation’.
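To make the ‘Proposer versus Builder’ point concrete, here is a hypothetical sketch (not Ethereum's actual PBS design; `Bid`, `choose_bid`, and the sample values are invented): builders compete on full block contents, while the proposer's work is disaggregated down to one simple operation, picking the most valuable commitment.

```python
# Hypothetical sketch of proposer/builder separation, not Ethereum's actual PBS spec.
# Names (Bid, choose_bid) and the sample values are invented for illustration.
from dataclasses import dataclass

@dataclass
class Bid:
    builder: str       # who built the block
    header_hash: str   # commitment to the block body; the proposer never sees the contents
    value_wei: int     # payment offered to the proposer

def choose_bid(bids: list[Bid]) -> Bid:
    # The proposer's job is disaggregated down to one simple operation:
    # pick the most valuable commitment, without ordering or executing transactions.
    return max(bids, key=lambda b: b.value_wei)

bids = [
    Bid("builder_a", "0xaaa", 40_000_000_000_000_000),
    Bid("builder_b", "0xbbb", 55_000_000_000_000_000),
]
winner = choose_bid(bids)
print(f"proposer signs header {winner.header_hash} from {winner.builder}")
```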

Aptos, Sui

Parallel computing is the core thesis of scaling innovations behind Aptos and Sui.

Parallel computing is a scaling technique proven in Web2, but it has preconditions that naturally mismatch today's mainstream monolithic blockchains: transactions processed in parallel must not depend on each other, yet transactions within a single block have many dependencies.

Aptos and Sui plan to do some pre-processing to resolve this mismatch:

  • An academic approach pioneered by Software Transactional Memory (STM) libraries is to instrument memory accesses to detect and manage conflicts. STM libraries with optimistic concurrency control record memory accesses during execution, validate every transaction post-execution, and abort and re-execute transactions when validation surfaces a conflict (sketched below).
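To make that loop concrete, here is a toy, sequential simulation (all names and numbers are mine, and this is not Block-STM or Aptos' actual scheduler): every transaction speculates against the same snapshot, reads are validated against earlier commits, and a conflicting transaction is simply re-executed on fresh state.

```python
# Toy simulation of the optimistic-concurrency loop described above
# (record accesses, validate after execution, abort and re-execute on conflict).
# Not Block-STM or Aptos' actual scheduler; names and data are invented.

def run_optimistically(txs, state):
    """Each tx is a function: snapshot -> (read_keys, writes_dict)."""
    snapshot = dict(state)                     # every tx speculates against the same snapshot
    results = [tx(snapshot) for tx in txs]     # speculative ("parallel") execution

    committed_writes = set()
    for tx, (reads, writes) in zip(txs, results):
        if reads & committed_writes:           # validation: did an earlier tx write what we read?
            reads, writes = tx(dict(state))    # conflict -> abort and re-execute on fresh state
        state.update(writes)                   # commit this transaction's writes
        committed_writes.update(writes)        # remember which keys are now dirty
    return state

# Two transfers read the same account "A"; the second one conflicts and is re-executed.
def pay_a_to_b(s): return ({"A"}, {"A": s["A"] - 10, "B": s.get("B", 0) + 10})
def pay_a_to_c(s): return ({"A"}, {"A": s["A"] - 5,  "C": s.get("C", 0) + 5})

print(run_optimistically([pay_a_to_b, pay_a_to_c], {"A": 100, "B": 0, "C": 0}))
# -> {'A': 85, 'B': 10, 'C': 5}
```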

And I find this pre-processing very much in line with the concept of ‘Disaggregation’:

  • Categorize all transactions coming from the P2P network so that each unit (in this case, each CPU core) only needs to execute one category of transactions; see the grouping sketch below.
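A rough sketch of that categorization step, under the simplifying assumption that each transaction touches exactly one object (the names `group_by_touched_state` and `execute_group` are invented): dependent transactions land in the same group and run sequentially, while independent groups can be farmed out to different cores.

```python
# Illustrative sketch only: group pending transactions by the state they touch,
# so each core processes one independent group with no cross-group conflicts.
from collections import defaultdict
from concurrent.futures import ThreadPoolExecutor

def group_by_touched_state(txs):
    groups = defaultdict(list)
    for tx in txs:
        groups[tx["touches"]].append(tx)   # simplification: one touched object per tx
    return list(groups.values())

def execute_group(group):
    # Transactions inside a group are dependent, so they run sequentially;
    # different groups never share state, so groups can run on different cores.
    return [f"executed tx {tx['id']} on {tx['touches']}" for tx in group]

txs = [
    {"id": 1, "touches": "nft_collection_x"},
    {"id": 2, "touches": "dex_pool_y"},
    {"id": 3, "touches": "nft_collection_x"},
]
with ThreadPoolExecutor() as pool:
    for result in pool.map(execute_group, group_by_touched_state(txs)):
        print(result)
```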

Altlayer

Altlayer is a system of highly scalable application-dedicated & pluggable execution layers that derive security from an underlying L1/L2.

If you ask me how Altlayer differs from other mainstream L2s, I would say: Altlayer is an autoscaling solution, while the others are more ‘immutable’ in capacity.

Let’s imagine the following case:

  • Many NFT projects do not need dedicated blockspace for eternity, only for the short period when mint demand is expected to shoot up >>> autoscaling can fix this problem perfectly.

The core thesis of Altlayer also fits perfectly with the definition of ‘Disaggregation’:

  • Autoscaling is a very common method in the traditional cloud market for handling sudden demand >>> sudden demand is not regular, steady demand, so we need some unconventional, ‘disaggregated’ designs to meet it (a minimal sketch follows below).
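As a rough illustration of how such a rule could look (the thresholds and function names are invented, not Altlayer's actual mechanism): a dedicated execution layer is provisioned only while demand exceeds a threshold and is torn down afterwards.

```python
# Hypothetical sketch of the autoscaling idea: spin up a dedicated execution layer
# only while demand (e.g. pending mint transactions) exceeds a threshold, then tear
# it down. Thresholds and names are invented for illustration.

SCALE_UP_THRESHOLD = 1_000     # pending txs that justify a dedicated flash layer
SCALE_DOWN_THRESHOLD = 50      # below this, fall back to the shared layer

def autoscale(pending_txs: int, flash_layer_active: bool) -> bool:
    """Return whether a dedicated execution layer should be running."""
    if not flash_layer_active and pending_txs > SCALE_UP_THRESHOLD:
        return True            # mint rush detected: provision an ephemeral rollup
    if flash_layer_active and pending_txs < SCALE_DOWN_THRESHOLD:
        return False           # rush is over: settle back to the underlying L1/L2
    return flash_layer_active  # otherwise keep the current state

state = False
for load in [10, 2_500, 1_800, 30]:
    state = autoscale(load, state)
    print(f"pending={load:>5} -> dedicated layer active: {state}")
```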

Short summary

The general framework of the right blockchain is becoming clearer, but we still need more optimizations to make blockchains better. How?

  • One of the best optimizations is to split the blockchain into layers and push each single layer to its extreme (I call this disaggregation).
  • The other is next-generation technology, e.g. ZKPs, 10x bandwidth, 10x SSDs, etc. (in case you ask me what else matters besides disaggregation :D)


Pay attention: disaggregated components can be reorganized for further optimization!

Next Ethereum killer?

There has been a lot of talk recently about who could become Ethereum's biggest competitor in the next bull market: Celestia, or Aptos & Sui?

I prefer to say: none of them alone, but a design that includes both of them.

  • Four layers: disaggregating the monolithic blockchain into four layers has proven its value. (We could divide it into more layers, but to keep things understandable, I use four here.)
  • Celestia's design with DAS (data availability sampling) is, theoretically, the safest choice for the DA layer among the options mentioned above; see the DAS sketch after this list.
  • Parallel computing is also valuable and theoretically feasible for further optimization of the Tx Consensus layer. And Celestia is the only choice for Txs Sequence Consensus (currently, a design in which one group of nodes handles DA and Txs Sequence Consensus at the same time is the most efficient).
  • The Aptos & Sui design has the best performance on Global State Consensus, thanks to parallel computing and unique P2P messaging.
  • Developers can unlock more possibilities in the Execution layer, and autoscaling should be a good addition.
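As a back-of-the-envelope illustration of why DAS scales, assume (illustratively, not Celestia's exact parameters) that an attacker must withhold at least 25% of the erasure-coded shares to make a block unrecoverable; then each random sample misses the withheld data with probability 0.75, and k independent samples are all fooled with probability 0.75^k:

```python
# Worked sketch of the statistics behind data availability sampling (DAS).
# Assumption (illustrative, not Celestia's exact parameters): with erasure coding,
# an attacker must withhold at least ~25% of shares to make a block unrecoverable.
WITHHELD_FRACTION = 0.25

def miss_probability(samples: int) -> float:
    """Probability that `samples` independent random queries all land on available
    shares even though the block is actually unrecoverable."""
    return (1 - WITHHELD_FRACTION) ** samples

for k in (10, 20, 30, 50):
    print(f"{k:>2} samples -> chance of being fooled: {miss_probability(k):.2e}")
# Each light node does only a constant amount of sampling work, yet collectively
# the network checks the availability of arbitrarily large blocks.
```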

How does Ethereum compare with this ‘Optimum Optimization’?

Ethereum is the more secure choice, but less efficient, when compared with this ‘Optimum Optimization’ -- my personal and current opinion.

  • The core innovation of Aptos & Sui (parallel computing) must involve designs in the Global State Consensus layer, which means it can't fit into Ethereum naturally >>> Ethereum can't leverage those great innovations, at least not for quite a while.

Will these innovations put too much complexity on blockchains?

From Vitalik's joke at EthCC (‘shall we cancel sharding’) to @Polynya's latest ‘4844 and Done’, there is a lot of discussion about reducing the complexity of the blockchain.

My point is more like ‘it is still not the time for simplification’: if we look back over the history of the cloud industry, cloud services in the early 5~10 years became more and more useful but also more complex. Only in recent years have we seen more simplification, e.g. serverless and IaC. >>>

  • New tech always follows a similar route: ‘find a standard, meaningful framework’ ➡️ ‘become good to use but complex’ ➡️ ‘simplify to be easier to use and less costly’

  • Simplification assumes things are already good enough, but blockchain infra is clearly not good enough yet. We are still in the first half of the game.

I still believe we will see more disaggregation innovations in this bear market, and I will update my thinking when I come across them :D
