Introduction to Capx Chain Architecture

An overview of Capx Chain architecture

Capx Chain is an EVM-equivalent validium that scales Ethereum: like a ZK-rollup, it enforces the integrity of transactions using validity proofs, but it does not store transaction data on Ethereum mainnet. Technically speaking, Capx Chain is built upon two major pieces. The core piece is the Polygon zkEVM, which is used to prove the correctness of EVM execution on Layer 2. We have been contributing to various zkEVM components and learning from the team at 0xPolygon for some time now. But to turn Capx Chain into a full validium on Ethereum, we also need to implement a complete L2 architecture and data availability solution around it.

In this post, we give an overview of Capx Chain's overall architecture. More specifically, we cover the initial version of Capx Chain, which is composed of a centralised sequencing and aggregator node and a decentralised data availability network. We are committed to decentralising the set of sequencing nodes in the future and will share our design for this in future articles.

The current architecture consists of four infrastructure components (see Figure 1):

  • Capx Validium Node: Constructs Capx Chain batches from user transactions, executes them on Capx Chain, commits the transaction hashes of execution to the Ethereum base layer, and passes messages between Ethereum and Capx Chain.

  • zkProver : Generates the zk validity proofs to prove that transactions are executed correctly.

  • Rollup and Bridge Contracts: Verify zk validity proofs and allow users to move assets between Ethereum and Capx Chain.

  • Data Availability Committee (DAC) Contract & Node: Provides data availability for Capx Chain transactions.

In what follows, we detail the role of each of these components.

Capx App Validium Node

The Capx App Validium node is the main way for applications and users to interact with Capx Chain. It consists of five modules: the Sequencer, Aggregator, Synchronizer, RPC and State.

The Sequencer provides a JSON-RPC interface using the RPC module and accepts Capx Chain transactions. Every few seconds, it retrieves a batch of transactions from the Capx Chain mempool and executes them to generate a new Capx Chain batch and a new state root.

While publishing the series of transaction hashes of the executed transactions on L2, the Sequencer must pay a fee in Capx tokens. This sum varies with the number of pending batches that need to be validated. If a Sequencer shows malicious behaviour, for example by posting invalid transaction hashes or creating batches with just one transaction, the protocol ensures that breaking the chain will be very expensive. This ensures that publishing invalid transactions will result in a loss for the Sequencer.
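To make the batching step concrete, here is a minimal sketch of how a sequencer might drain its mempool into a batch and compute the transaction hashes it would commit. All names and the encoding are illustrative assumptions, not actual Capx interfaces, and a real implementation would use RLP encoding and keccak256 rather than this simplified scheme.

```python
import hashlib

def tx_hash(tx: dict) -> str:
    """Hash a transaction's canonical encoding (illustrative, not RLP/keccak)."""
    encoded = f"{tx['from']}:{tx['to']}:{tx['value']}:{tx['nonce']}"
    return hashlib.sha256(encoded.encode()).hexdigest()

def build_batch(mempool: list, max_batch_size: int):
    """Drain up to max_batch_size transactions from the mempool into a batch
    and compute the list of transaction hashes the Sequencer would commit."""
    batch = mempool[:max_batch_size]
    del mempool[:max_batch_size]
    hashes = [tx_hash(tx) for tx in batch]
    return batch, hashes
```

The resulting list of hashes is what the Sequencer commits to the base layer, while the full transaction data goes to the data availability network instead of Ethereum.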

Once a new batch is generated and posted on Ethereum, the Aggregator is notified and receives the execution trace of this batch from the Sequencer. It then dispatches the execution trace to the prover (zkProver) for proof generation.

The Synchronizer watches the bridge and rollup contracts deployed on both Ethereum and Capx Chain. It has three main responsibilities. First, it monitors the rollup contract to keep track of the status of Capx Chain batches including their data availability and validity proof. Second, it watches the deposit and withdrawal events from the bridge contracts deployed on both Ethereum and Capx Chain and relays the messages from one side to the other. Third, it is also responsible for handling possible reorgs, which will be detected by checking if the last ethBlockNum and the last ethBlockHash are synced.

The State module implements the Merkle tree and connects to the DB backend. It checks integrity at the block level (information related to gas and block size, among others) and some transaction-related information (signatures, sufficient balance). It also stores smart contract code in the Merkle tree and processes transactions using the EVM.
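The commitment the State module maintains can be sketched as a binary Merkle tree over account leaves: any change to an account changes the root, which is what gets committed on-chain. This is a simplified illustration (the account encoding and the use of SHA-256 are assumptions; production zkEVM state trees use sparse Merkle trees with zk-friendly hashes such as Poseidon).

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list) -> bytes:
    """Compute the root of a binary Merkle tree; an odd node is carried up."""
    if not leaves:
        return h(b"")
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        nxt = []
        for i in range(0, len(level), 2):
            if i + 1 < len(level):
                nxt.append(h(level[i] + level[i + 1]))
            else:
                nxt.append(level[i])
        level = nxt
    return level[0]

# Account state encoded as leaves (address, balance, nonce) -- illustrative only
accounts = [b"alice:100:0", b"bob:50:2"]
state_root = merkle_root(accounts)
```

Because the root is a succinct commitment to every account, the Rollup contract only needs to store 32 bytes per state transition.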

zkProver

The zkProver serves as the prover in the network, responsible for generating validity proofs for the validium. All the rules for a transaction to be valid are implemented and enforced in the zkProver. Figure 2 shows how a prover generates the validity proof for each batch. The process consists of the following steps:

  1. Turn the required deterministic computation into a state machine computation.

  2. Describe state transitions in terms of algebraic constraints. These are like rules that every state transition must satisfy.

  3. Use interpolation of state values to build polynomials that describe the state machine.

  4. Define polynomial identities that all state values must satisfy.

  5. Use a specially designed cryptographic proving system (e.g. a STARK, a SNARK, or a combination of the two) to produce a proof that anyone can verify.
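Steps 1 and 2 above can be illustrated with a toy state machine: a two-register machine whose transition rule is a Fibonacci walk over a small prime field. The execution trace is a table of state values, and the algebraic constraints are polynomial relations that every adjacent pair of rows must satisfy. This sketch checks the constraints directly; a real prover would go on to interpolate the columns into polynomials (steps 3-5), and the tiny modulus here is purely illustrative.

```python
# Toy arithmetization: a two-register state machine with transition rule
# (a', b') = (b, a + b), i.e. a Fibonacci walk over a small prime field.
P = 101  # illustrative field modulus; real systems use ~256-bit fields

def trace(steps: int) -> list:
    """Generate the execution trace: one (a, b) row per step."""
    rows, a, b = [], 1, 1
    for _ in range(steps):
        rows.append((a, b))
        a, b = b, (a + b) % P
    return rows

def constraints_hold(rows) -> bool:
    """Every transition must satisfy the algebraic constraints
    a[i+1] - b[i] = 0 and b[i+1] - a[i] - b[i] = 0 (mod P)."""
    return all(
        (rows[i + 1][0] - rows[i][1]) % P == 0
        and (rows[i + 1][1] - rows[i][0] - rows[i][1]) % P == 0
        for i in range(len(rows) - 1)
    )
```

If any single cell of the trace is tampered with, at least one constraint breaks, which is exactly the property the proving system turns into a succinct, verifiable proof.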

Rollup and Bridge Contracts

Capx Chain connects to the base layer of Ethereum through the Rollup and Bridge smart contracts. Together, these ensure proof validation for Capx Chain transactions and allow users to pass assets and messages between Ethereum and Capx Chain.

The Rollup contract receives Capx Chain state roots and executed batch transaction hashes from the Sequencer, and stores them. This provides data validation for Capx Chain blocks and leverages the security of Ethereum. Once a batch proof establishing the validity of a Capx Chain batch submitted by the Aggregator has been verified by the Rollup contract, the corresponding block is considered finalised on Capx Chain.

The Bridge contracts deployed on Ethereum and Capx Chain allow users to pass arbitrary messages between Ethereum and Capx Chain. On top of this message passing protocol, we have also implemented a trustless bridging protocol to allow users to bridge ERC-20 assets in both directions. To send funds from Ethereum to Capx Chain, the following steps are undertaken:

  1. The Bridge function of the Capx Chain Bridge Smart Contract on the Ethereum network is invoked. If the Bridge request is valid, the Bridge Smart Contract appends an exit leaf to the Ethereum Exit Tree and computes the new Ethereum Exit Tree Root.

  2. The Global Exit Root Manager appends the new Ethereum Exit Tree Root to the Global Exit Tree and computes the Global Exit Root.

  3. The Sequencer fetches the latest Global Exit Root from the Global Exit Root Manager.

  4. At the start of the transaction batch, the Sequencer stores the Global Exit Root in special storage slots of the Capx Chain Global Exit Root Manager Smart Contract allowing L2 users to access it.

  5. In order to complete the bridging process, the user calls the Claim function of the Bridge Smart Contract and provides a Merkle proof of the fact that the correct exit leaf was included and represented in the Global Exit Root.

  6. The Bridge Smart Contract obtains the Capx Chain Global Exit Root Manager Smart Contract's Global Exit Root and validates the user's Merkle proof of inclusion. If the Merkle proof is valid, the bridging process succeeds; otherwise, the transaction fails.
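The exit-leaf mechanics in steps 1, 5, and 6 can be sketched as follows: appending a leaf to an exit tree yields a new root, and a Claim later succeeds only if the user's Merkle inclusion proof hashes back up to that root. This is a simplified model (SHA-256 and power-of-two padding are assumptions; the actual bridge uses a fixed-depth tree with keccak256 and also verifies the exit tree root against the Global Exit Root).

```python
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def root_and_proof(leaves: list, index: int):
    """Compute the exit-tree root and a Merkle inclusion proof for one leaf."""
    level = [h(l) for l in leaves]
    # pad to a power of two with empty leaves so every node has a sibling
    while len(level) & (len(level) - 1):
        level.append(h(b""))
    proof, idx = [], index
    while len(level) > 1:
        proof.append(level[idx ^ 1])  # sibling at this level
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        idx //= 2
    return level[0], proof

def claim_valid(leaf: bytes, index: int, proof: list, root: bytes) -> bool:
    """What a Claim-style check does: the exit leaf must hash up to the root."""
    node, idx = h(leaf), index
    for sibling in proof:
        node = h(node + sibling) if idx % 2 == 0 else h(sibling + node)
        idx //= 2
    return node == root
```

The same verification shape applies in both bridging directions; only which chain holds the exit tree and which chain processes the Claim is swapped.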

To send funds from Capx Chain to Ethereum, the following steps are undertaken:

  1. The user calls the Bridge function of the Capx Chain Bridge Smart Contract on Capx Chain. If the Bridge request is valid, the Bridge Smart Contract appends an exit leaf to the Capx Chain Exit Tree and computes the new Capx Chain Exit Tree Root.

  2. The Capx Chain Global Exit Root Manager Smart Contract is called to append the new Capx Chain Exit Tree Root to the Global Exit Tree and compute the Global Exit Root.

  3. The Aggregator generates a ZK-proof attesting to the computational integrity in the execution of sequenced batches (where one of these batches includes the user's bridging transaction).

  4. For verification purposes, the Aggregator sends the ZK-proof, together with all relevant batch information that led to the new Capx Chain Exit Tree Root (computed in step 2 above), to the Consensus Contract.

  5. The Consensus Contract utilises the verifyBatches function to verify the validity of the received ZK-proof. If valid, the Consensus Contract sends the new Capx Chain Exit Tree Root to the Global Exit Root Manager Smart Contract in order to update the Global Exit Tree.

  6. In order to complete the bridging process on the Ethereum network, the user calls the Claim function of the Bridge Smart Contract, and provides a Merkle proof of the fact that the correct exit leaf was included in the computation of the Global Exit Root.

  7. The Bridge Smart Contract retrieves the Global Exit Root from the Ethereum Global Exit Root Manager Smart Contract and verifies the validity of the Merkle proof. If the Merkle proof is valid, the Bridge Smart Contract successfully completes the bridging process. Otherwise, the transaction is reverted.

Data Availability Contract & Node

Data Availability Contract

The Data Availability (DA) smart contract, sometimes referred to as the "Guardian" contract, plays an essential role in ensuring the integrity and accessibility of transaction data that's stored off-chain. Here's what it does:

  1. Data Storage and Retrieval: While zkValidium maintains most of its transaction data off-chain to optimize scalability, this data still needs to be reliably stored and retrievable to maintain the system's integrity. This is where the DA smart contract comes into play. It manages off-chain data, ensuring it's properly stored and can be accessed when necessary.

  2. Guarding Against Data Withholding: One of the potential risks with zkValidium is that the Sequencer could potentially withhold data, as the detailed transaction data is not posted on-chain. The DA smart contract can help mitigate this risk. In some implementations, a group of trusted third-party "committee members" can be selected, who are given the transaction data by the sequencer. These committee members can, in turn, provide this data to users if the sequencer fails to do so, offering a layer of protection against data withholding attacks.

  3. Periodic Checkpoints: The DA contract can also provide a mechanism for creating periodic snapshots or checkpoints of the off-chain data. These checkpoints can be used to track the state of the off-chain data at regular intervals, providing a kind of history or audit trail.

  4. Enforcement of Rules: The contract enforces the rules of data submission by the sequencer to the guardians and maintains the validium's validity even if the sequencer becomes malicious or goes offline.
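The checkpoint role described in point 3 can be sketched as an append-only registry of batch-hash commitments: each recorded hash becomes an auditable point in the history of off-chain data. The class and method names below are illustrative assumptions, not the actual contract interface.

```python
class DACheckpointRegistry:
    """Minimal sketch of the checkpoint/audit-trail role: record each
    committed batch hash in order so the history of off-chain data
    commitments can be audited later."""

    def __init__(self):
        self._checkpoints = []  # append-only list of batch-hash strings

    def record(self, batch_hash: str) -> int:
        """Append a checkpoint and return its index."""
        self._checkpoints.append(batch_hash)
        return len(self._checkpoints) - 1

    def at(self, index: int) -> str:
        """Retrieve the batch hash recorded at a given checkpoint index."""
        return self._checkpoints[index]
```

Because the registry is append-only and ordered, anyone can later ask the DAC for the data behind checkpoint i and verify it hashes to the recorded commitment.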

Data Availability Node

The DAC Node works closely with the Sequencer to ensure secure and efficient data handling. The process can be broken down as follows:

  1. Batch Formation: The Sequencer collects user transactions and organises them into batches.

  2. Batch Authentication: Once the batches are assembled, they are authenticated. The Sequencer forwards the batch data and its corresponding hash to the DAC.

  3. Data Validation and Storage: The DAC nodes each independently validate the batch data. Once validated, the hash is stored in each node's local database for future reference.

  4. Signature Generation: Each DAC node generates a signature for the batch hash. This serves as an endorsement of the batch's integrity and authenticity.

  5. Communication with Ethereum: The Sequencer collects the DAC members' signatures and the original batch hash and submits them to the Ethereum network (DAC CONTRACT) for verification.
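The signature-collection flow above can be sketched as a threshold check: the submission is accepted only if enough committee members have endorsed the batch hash. HMAC here is a stand-in for real member signatures (a production DAC contract would verify ECDSA signatures against registered member addresses), and the member set and threshold are illustrative assumptions.

```python
import hashlib
import hmac

# Stand-in for real DAC keys: each member "signs" the batch hash with an HMAC
# key. Production systems use ECDSA keys registered in the DAC contract.
MEMBER_KEYS = {"dac-1": b"key1", "dac-2": b"key2", "dac-3": b"key3"}
THRESHOLD = 2  # minimum member endorsements required

def sign(member: str, batch_hash: bytes) -> bytes:
    """A DAC member's endorsement of a batch hash."""
    return hmac.new(MEMBER_KEYS[member], batch_hash, hashlib.sha256).digest()

def enough_signatures(batch_hash: bytes, sigs: dict) -> bool:
    """What the on-chain check enforces: at least THRESHOLD valid member
    signatures over the submitted batch hash."""
    valid = sum(
        1 for member, sig in sigs.items()
        if member in MEMBER_KEYS
        and hmac.compare_digest(sig, sign(member, batch_hash))
    )
    return valid >= THRESHOLD
```

A threshold lower than the full committee size keeps the system live when some members are offline, while still requiring multiple independent attestations that the data is stored.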

How does Capx’s Validium work?

Putting these architectural pieces together, we can now explain the workflow of Capx's validium, summarised in the figure below.

L2 batches in Capx Chain are generated, committed to base layer (Ethereum), and finalised in the following sequence of steps:

  • The Sequencer selects transactions from the mempool and generates a sequence of chosen transactions called a batch. For the i-th batch, the Sequencer generates an execution trace T and sends it to the Aggregator and other Capx Chain nodes. It also sends this batch to the data availability node for validation, storage, and issuance of a signature attesting to the batch's authenticity and storage guarantee. Meanwhile, it submits the resulting state roots, commitments, and transaction hashes to the Rollup contract as state.

  • The Aggregator selects a trusted Prover to generate a validity proof for each batch trace.

  • After generating the batch proof P for the i-th batch, the Prover sends it back to the Aggregator.

  • Finally, the Aggregator submits the batch proof P to the Rollup contract to finalise L2 batch i, verifying the proof against the state roots and transaction hash commitments previously submitted to the Rollup contract.
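The finalisation flow in the steps above can be sketched end to end with each actor as a plain function. All names are illustrative, not actual Capx interfaces, and the "proof" is a hash standing in for a real zk validity proof; only the shape of the flow (commit, prove, verify against the commitment) mirrors the protocol.

```python
import hashlib

def sequencer(mempool):
    """Build a batch, its execution trace T, and the on-chain commitment."""
    batch = list(mempool)
    trace = [("exec", tx) for tx in batch]  # execution trace T
    commitment = hashlib.sha256(repr(batch).encode()).hexdigest()
    return batch, trace, commitment

def prover(trace):
    """Stand-in 'proof': in reality a zk validity proof over the trace."""
    return hashlib.sha256(repr(trace).encode()).hexdigest()

def rollup_verify(batch, commitment, proof):
    """The Rollup contract finalises the batch only if both the commitment
    and the proof are consistent with the batch that was executed."""
    ok_commit = commitment == hashlib.sha256(repr(batch).encode()).hexdigest()
    expected = hashlib.sha256(
        repr([("exec", tx) for tx in batch]).encode()
    ).hexdigest()
    return ok_commit and proof == expected

mempool = ["tx1", "tx2"]
batch, trace, commitment = sequencer(mempool)
proof = prover(trace)
finalised = rollup_verify(batch, commitment, proof)
```

The key property shown is that finalisation is gated on consistency between what was committed earlier and what the proof attests to; neither the Sequencer nor the Aggregator can finalise a batch the proof does not cover.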

The following images illustrate how Capx Chain batches are finalised on Ethereum in a multi-step process.

Batch Sequencing:

Batch Aggregation, Proof Generation and verification:

Overall Transaction Journey:

Each Capx Chain batch will progress through the following three stages until it is finalised.

Putting all of these together, Capx Chain is able to execute native EVM bytecode on L2 while inheriting strong security guarantees from the Ethereum base layer. In the next post in this series, we will explain the workflow for how users can interact with the rollup.

We have designed Capx Chain architecture to align with our vision and values and our technical principles. In upcoming articles, we explain how Capx App will use this architecture to provide a more scalable user and developer experience on Ethereum.
