Sei Twin-Turbo, A Consensus Engine Review

“Seilor rabbit-hole series”, Part 3

The “Seilor rabbit-hole series” aims to explain Sei’s major innovation breakthroughs.

Sei is the first sector-specific L1 blockchain, optimized for DeFi applications. After covering its native order-matching engine and market-based parallelization, we now look at what the Twin-Turbo consensus is all about.

Sei and its Twin-Turbo Consensus engine. Image credits: corrupttmustang.

This third article of the series answers how Sei’s consensus engine is capable of reaching hyperspeed thanks to optimistic block processing and intelligent block propagation.

Consensus is the foundation of all trustless interactions between users: it ensures that all nodes in the network agree on the current state of the blockchain, and therefore on the information recorded on it.

To secure a distributed network, the consensus must be Byzantine Fault Tolerant (BFT): a supermajority of nodes must agree on the validity of transactions, and the network must remain resistant to faults caused by incomplete information or by malicious actors acting as validators.

The ABC(i) of Sei

Sei’s consensus is a system that achieves fault tolerance. In essence, it’s a big piece of distributed software that shows everyone the same state at the same time, and it is made of two main components:

  1. The consensus engine, called Twin-Turbo, ensures that the same transactions are recorded on every node in the same order. More about it later.

  2. The interface operating between the consensus and the applications, called ABCI++ (Application Blockchain Interface), enables transactions to be processed in any programming language and allows applications to reorder, modify, delay, or add transactions.
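To make the second component concrete, here is a minimal sketch of an ABCI++-style interface. The hook names (`prepare_proposal`, `process_proposal`) and the price-time ordering rule are illustrative assumptions, not Sei’s actual API; the point is that the application, not the consensus engine, decides transaction ordering.

```python
# Sketch of an ABCI++-style application hook: the consensus engine hands the
# app the mempool, and the app reorders/filters transactions before they
# enter a block. Names and fields are illustrative, not Sei's real API.

class OrderBookApp:
    def prepare_proposal(self, mempool_txs, max_bytes):
        # Reorder transactions (here: price-time priority) before the
        # proposer packs them into a block, respecting a size budget.
        ordered = sorted(mempool_txs, key=lambda tx: (tx["price"], tx["ts"]))
        block, size = [], 0
        for tx in ordered:
            if size + tx["bytes"] > max_bytes:
                break
            block.append(tx)
            size += tx["bytes"]
        return block

    def process_proposal(self, block):
        # Each validator can independently re-check the proposer's ordering.
        keys = [(tx["price"], tx["ts"]) for tx in block]
        return keys == sorted(keys)

app = OrderBookApp()
mempool = [
    {"price": 101, "ts": 2, "bytes": 100},
    {"price": 100, "ts": 1, "bytes": 100},
    {"price": 100, "ts": 3, "bytes": 100},
]
block = app.prepare_proposal(mempool, max_bytes=250)
print([tx["price"] for tx in block])   # [100, 100]
print(app.process_proposal(block))     # True
```

Because the reordering logic lives behind the interface, it can be written in any language that speaks the protocol, which is exactly what makes application-level order matching possible.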

Contrary to a quorum-based consensus engine like Avalanche, where repeated sub-sampling of votes is taken among the validator nodes, Twin-Turbo uses a Practical Byzantine Fault Tolerance (pBFT) mechanism, which requires all participating nodes to talk to each other, so the network validates transactions with absolute certainty.

Find more about pBFT here, and for a consensus comparison between Polkadot, Avalanche, and Cosmos, go here.

Sei Innovation

Ferrari F154 is a family of modular twin-turbocharged engines.

Sei is based on a radically optimized version of Tendermint Core.

These major changes can be split into two parts, hence the Twin-Turbo: intelligent block propagation and optimistic block processing.

Tendermint provides fast finality, but since each node has to communicate with every other node, it has quadratic messaging complexity and can only finalize one block at a time.

Quadratic messaging complexity is a fancy way of saying that the number of messages grows with the square of the number of nodes: to submit a block proposal and validate blocks, every node has to message every other node, so with n validators roughly n² messages are exchanged per round.
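A toy calculation makes the quadratic growth obvious: in an all-to-all voting round, each of the n validators sends its vote to every other validator.

```python
# Quadratic messaging complexity: in an all-to-all gossip round, each of the
# n validators messages every other validator, so traffic grows as
# n * (n - 1), i.e. roughly n^2.

def messages_per_round(n_validators: int) -> int:
    return n_validators * (n_validators - 1)

for n in (4, 40, 400):
    print(f"{n} validators -> {messages_per_round(n)} messages per round")
# 4 validators   -> 12 messages
# 40 validators  -> 1,560 messages
# 400 validators -> 159,600 messages
```

Growing the validator set 10× multiplies per-round traffic by roughly 100×, which is why naive all-to-all propagation becomes the bottleneck.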

Consensus Overview - docs.tendermint

In the Tendermint consensus, a block is committed when more than 2/3 of validators pre-commit to the same block in the same round. The pBFT model relies on the core assumption that malicious nodes make up less than one-third of the total nodes in the system.
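The 2/3 commit rule can be sketched in a few lines. This is an illustrative tally over (block ID, voting power) pairs, not Tendermint’s actual implementation; the integer comparison `3 * p > 2 * total` encodes “strictly more than two-thirds” without floating-point rounding.

```python
from collections import Counter

# Sketch of Tendermint's commit rule: a block commits once strictly more
# than 2/3 of the total voting power pre-commits it in the same round.

def committed_block(precommits, total_power):
    # precommits: list of (block_id, voting_power) pairs
    power = Counter()
    for block_id, vote_power in precommits:
        power[block_id] += vote_power
    for block_id, p in power.items():
        if 3 * p > 2 * total_power:   # p > 2/3 of total, integer-safe
            return block_id
    return None                        # no quorum this round

# Four equal-weight validators; three pre-commit block "A".
votes = [("A", 1), ("A", 1), ("A", 1), ("B", 1)]
print(committed_block(votes, total_power=4))   # A
```

With only 2 of 4 validators agreeing (exactly one-third faulty or split), the same function returns `None` and the round fails, matching the less-than-one-third fault assumption.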

Tendermint lacks fast block times and features such as native order matching and market-based parallelization. That’s why Sei stepped in with a major suite of innovations.

To learn more about parallel order execution, check our previous article.

Core Architecture (Twin-Turbo)

Ford Mustang powered by a 4.3L Twin-Turbocharged Ferrari V8 engine

Let’s break down the Twin-Turbo consensus engine:

Turbo 1: Intelligent block propagation

Turbo 2: Optimistic block processing.

A. Intelligent block propagation

Regular Tendermint Block Proposal:

Both the block proposer (Alice) and the validator (Bob) maintain a mempool, with the following block proposal flow:

Validator Bob receives the block proposal from Alice (which contains the hash and block ID) → waits → receives the 1st chunk (part of the block’s transactions) → waits → receives the 2nd chunk.

While transactions A→E are being sent, Bob already has nearly all of the block content (per Sei’s internal research, 99.9% of the transactions are already in the mempool).

This is inefficient since validators have to wait for data that they already have, due to network latency.

Regular Tendermint Block Proposal

Sei Consensus

Sei changes the block proposal for increased throughput:

The first part remains the same (the block proposer Alice looks for transactions in her mempool, forms a block, and proposes it to the network).

The block proposal then contains all the transaction hashes, and the validator already holds these transactions in its mempool. Rather than letting validators wait for the block chunks (subject to network latency), Sei came up with:

  • Optimal solution (99.9% of cases): hashes of transactions A→E are included in the block proposal, so the validator can construct the block locally from its mempool data.

  • In the remaining 0.1% of cases: the validator falls back to the mechanism previously used (as per Tendermint Core).
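The two bullets above boil down to a simple reconstruction loop. This sketch uses assumed names (`tx_hash`, `reconstruct_block`) and SHA-256 as a stand-in for whatever hash the protocol actually uses; it only illustrates the local-reconstruction-with-fallback idea.

```python
import hashlib

# Sketch of intelligent block propagation: the proposal carries only
# transaction hashes; the validator rebuilds the block from its own mempool
# and falls back to fetching full chunks only for hashes it is missing.

def tx_hash(tx: bytes) -> str:
    return hashlib.sha256(tx).hexdigest()

def reconstruct_block(proposal_hashes, mempool):
    # mempool: {hash: raw_tx} the validator already holds locally
    block, missing = [], []
    for h in proposal_hashes:
        if h in mempool:
            block.append(mempool[h])   # local reconstruction, no network wait
        else:
            missing.append(h)          # fallback: fetch these the old way
    return block, missing

txs = [b"tx-A", b"tx-B", b"tx-C"]
mempool = {tx_hash(t): t for t in txs[:2]}      # validator is missing tx-C
proposal = [tx_hash(t) for t in txs]
block, missing = reconstruct_block(proposal, mempool)
print(len(block), len(missing))   # 2 1
```

In the 99.9% case `missing` is empty and the validator never waits on chunk delivery at all; only the rare gap triggers the Tendermint-style fetch.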

Sei Optimal Solution
Block Proposal with Transaction identifiers

⚡ By implementing this optimization, Sei observed nearly a 40% increase in throughput.

B. Optimistic Block processing

Regular Tendermint Block Processing:

Regular Block Processing

Block proposer (Alice) sends the block to the Validator (Bob).

Bob runs some sanity checks on it → votes on the block through pre-vote and pre-commit → then starts processing the block.

The validators hold the block, without doing anything with it, for the entire length of the pre-vote and pre-commit phases (around 150 ms each).

This is inefficient: validators sit through the whole pre-vote and pre-commit before they even start processing the block.

Sei noticed that most of the time the first proposed block is the one that ends up being committed (receiving the most votes).

Sei Consensus

Sei changes the block processing for lower latency and faster block time.

  • The optimal solution here is optimistic processing: during pre-vote and pre-commit, the validator is already processing the block. The block is therefore processed sooner, state updates land faster, and overall latency goes down.

  • To guard against a malicious node or a validator mistake, validators take the proposal and update a candidate state. If the proposal succeeds, they commit that state; if not, the candidate state is discarded and, in the next round at the same height, the block is processed the normal way (after pre-commit completes).
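The candidate-state mechanism above can be sketched in a few lines. This is an illustrative account-balance model, not Sei’s code: the key point is that execution happens on a copy, and the live state only advances if the round succeeds.

```python
# Sketch of optimistic block processing: execute the first proposal into a
# candidate state while voting proceeds; commit the candidate on a
# successful round, discard it otherwise.

def apply_block(state: dict, block) -> dict:
    candidate = dict(state)              # work on a copy, never the live state
    for account, delta in block:
        candidate[account] = candidate.get(account, 0) + delta
    return candidate

state = {"alice": 10, "bob": 5}
block = [("alice", -3), ("bob", 3)]

candidate = apply_block(state, block)    # runs during pre-vote / pre-commit

proposal_accepted = True                 # outcome of the voting rounds
if proposal_accepted:
    state = candidate                    # commit the precomputed state
# else: drop candidate; re-process after pre-commit, as in vanilla Tendermint

print(state)   # {'alice': 7, 'bob': 8}
```

Because the candidate is just discarded on failure, a bad proposal costs some wasted computation but can never corrupt the committed state.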

Sei Optimistic Block Processing

⚡ With optimistic processing, Sei observed nearly a 33% increase in throughput.

⚛️ The final performance achieved is 22,000+ orders per second with 300 ms block times (latest testnet data).


To learn more about the other major innovations brought forward by Sei, check our previous “Seilor rabbit-hole series” articles.

Article by 3Vlabs.io, author Macr0Mark.


References:

Image Credits: corrupttmustang, Voitureblog.


Follow @3vLabs for more #3Vinsights curated by our research collective ranging from early-stage project analysis to brief crypto pills.
