The cornerstone of a cryptocurrency is the database. It records the account balances of all users, smart contract code and state. Any action performed by the user is eventually reflected by executing a transaction and updating the database.
The problem with “Web2” database technology is all the trust that makes it work. It relies upon a trusted third party to maintain and protect the database. If they go offline, then all access to the database is discontinued. If they make a mistake when updating the database, then this mistake can go unnoticed indefinitely. To build public confidence in the database, an auditor can be hired to attest to its integrity by retrospectively checking the validity of updates to the database (a non-trivial task).
The potential to replace a trusted third party with an open-membership group of participants is why cryptocurrencies are leading a paradigm shift in database technology (“Web3”). It allows anyone willing to contribute resources to read, write, audit, and ultimately protect the database’s integrity and content. The group can check updates to the database in real-time which allows them to reject mistakes immediately and detect bugs very shortly after the fact.
The open-membership group is fundamental to the paradigm shift, and participation can be categorised into two roles: proposers, who propose updates to the database, and verifiers, who check those updates before accepting them.
The mental model for participation is similar to a cryptographic protocol. One set of parties (proposers) wants to prove a statement is true, and another set of parties (verifiers) must check that it is true before accepting it. This is an interactive process that repeats continuously for every update to the database.
However, implementing the open-membership group raises several important questions:
The answers to the above questions lie in the system’s architecture and, fundamentally, in what it means for the database to be secure. We consider both layer-1 and layer-2 systems, with the end goal of helping you build a good mental model.
In a layer-1 system, the trusted third party is replaced by public consensus.
The goal is for all participants to agree on an update to the database. This requires a common set of rules (“consensus rules”) that can be applied in an objective way by all parties. The rules are used to verify the validity of an update to the database. One or more proposers can propose competing updates, but eventually all participants will converge on a single update to the database and a single truth about the database’s content.
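As a toy illustration, the consensus rules can be thought of as a pure function that every participant runs independently. This is a minimal sketch with illustrative names and a single made-up rule, not any real protocol's rule set:

```python
# A minimal sketch of objective consensus rules (all names and rules are
# illustrative, not taken from any real protocol).

def is_valid(state: dict, update: dict) -> bool:
    """Rule: a transfer is valid only if the sender can cover the amount."""
    return state.get(update["sender"], 0) >= update["amount"]

def apply_update(state: dict, update: dict) -> dict:
    """Produce the next database state from a valid update."""
    new_state = dict(state)
    new_state[update["sender"]] -= update["amount"]
    new_state[update["receiver"]] = new_state.get(update["receiver"], 0) + update["amount"]
    return new_state

state = {"alice": 10, "bob": 0}
proposed = {"sender": "alice", "receiver": "bob", "amount": 4}

# Every verifier applies the same rules to the same data, so each one
# reaches the same verdict and the update is accepted or rejected objectively.
if all(is_valid(state, proposed) for _ in range(3)):  # three independent verifiers
    state = apply_update(state, proposed)
```

Because the rules are deterministic and shared, any party that runs them reaches the same verdict, which is what allows invalid updates to be rejected in real time.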
The need for public consensus impacts participation:
We take this opportunity to discuss how to restrict who can be a proposer, the trade-offs on wide replication of the database, and who is the ultimate decider on the one true blockchain (and database).
Rate-limit proposers. The goal is to find proposers who have skin in the game and whose financial interests align with the long-term prosperity of the network. This is accomplished by allocating the right to become a proposer based on ownership of a scarce resource (which is financially expensive to acquire). For example, in proof of work, a proposer must own efficient hardware and a cost-effective source of electricity to compete in the mining market. In proof of stake, a proposer must own and lock the network’s native coins into the process. In both cases, the frequency with which a participant can propose a new update is proportional to their share of the scarce resource relative to all other participants.
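This proportionality can be sketched as a stake-weighted lottery. The following is a toy model with made-up names and stakes, not any particular protocol's leader election:

```python
import random

# Toy stake-weighted lottery: the chance of proposing the next update is
# proportional to each participant's share of the locked scarce resource.
stakes = {"alice": 60, "bob": 30, "carol": 10}

def pick_proposer(stakes: dict, rng: random.Random) -> str:
    names = list(stakes)
    return rng.choices(names, weights=[stakes[n] for n in names], k=1)[0]

rng = random.Random(42)  # fixed seed so the simulation is repeatable
picks = [pick_proposer(stakes, rng) for _ in range(10_000)]

# alice holds 60% of the stake, so she should be selected roughly 60% of the time
print(picks.count("alice") / len(picks))
```

The point of the sketch is the rate limit: no matter how often a participant shows up, their proposal frequency cannot exceed their share of the scarce resource.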
Affordability vs verifiability. The network’s throughput is dictated by the time it takes for an update to be accepted by all participants. There is a trade-off between the network’s throughput and the affordability of transacting during periods of congestion, as users compete for their transactions to be accepted before others’. In practice, networks like Bitcoin and Ethereum maximise who can participate as a verifier, whereas networks like Solana target low fees for as long as they can be sustained. Interestingly, verifiers for ICP must be approved and buy hardware from certain suppliers.
Economic majority. Most of the time, we can consider the set of proposers and verifiers as a collective to protect the database. However, the ultimate goal is to convince the economic majority, who have a vested economic interest from a usage perspective. The set of proposers and verifiers are just a proxy for the economic majority during normal operation, but if there is a controversial change to the network’s consensus rules, then it is ultimately the global majority of users who will judge the external financial value of the resultant database. For example, the market capitalisation of Bitcoin vs Bitcoin Cash, and Ethereum vs Ethereum Classic, demonstrates clear winners after significant community disagreements on the path forward.
To summarise, the mental model for a layer-1 system is a database that is ultimately responsible for deciding the ownership of assets, together with the need for an economic majority to accept all updates to it. This is why decentralisation is crucial to the success of a layer-1 system from a technical, social, and economic perspective. The goal is to replicate a copy of the database as widely as possible, maximise who can participate in protecting it, and ultimately rely on an economic majority to decide its real-world value.
In a layer-2 system, the trusted third party is replaced with a smart contract, and there are two components to consider: the bridge contract on the layer-1 system and the off-chain database it protects.
The bridge is responsible for bridging assets from one database (the layer-1 system) to another database (the layer-2 system).
It is the bridge contract’s sole responsibility to protect the bridged assets by checking the off-chain database’s integrity is intact. To uphold its integrity, the bridge checks the validity of every proposed update to the off-chain database (i.e., every state transition applied to the database on the layer-2 system) before accepting it. This is crucial to ensure the assets held by the bridge contract can cover the liabilities recorded in the off-chain database, as a shortfall leads to a mass-exit situation.
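As a toy sketch (in Python rather than a real contract language, with illustrative names), the bridge’s job reduces to verifying every state transition and maintaining the invariant that recorded liabilities never exceed the assets it actually holds:

```python
# A toy bridge (illustrative, not a real contract): it accepts a proposed
# update to the off-chain database only if the state transition is valid,
# and it enforces the invariant assets >= sum of recorded liabilities.

class Bridge:
    def __init__(self):
        self.assets = 0       # coins locked in the contract on layer-1
        self.balances = {}    # off-chain database: user -> balance (liabilities)

    def deposit(self, user: str, amount: int) -> None:
        self.assets += amount
        self.balances[user] = self.balances.get(user, 0) + amount

    def propose_transfer(self, sender: str, receiver: str, amount: int) -> bool:
        # The bridge independently verifies the state transition ...
        if self.balances.get(sender, 0) < amount:
            return False  # invalid update: rejected, not blindly trusted
        self.balances[sender] -= amount
        self.balances[receiver] = self.balances.get(receiver, 0) + amount
        # ... and the solvency invariant must hold afterwards.
        assert sum(self.balances.values()) <= self.assets
        return True

    def withdraw(self, user: str, amount: int) -> bool:
        if self.balances.get(user, 0) < amount:
            return False
        self.balances[user] -= amount
        self.assets -= amount
        return True
```

If the invariant ever breaks, the bridge cannot honour every recorded balance, which is precisely the mass-exit situation described above.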
Upholding the independence of the bridge contract can impact participation:
We consider the architecture for a layer-2 system, how the trust assumptions have evolved, and the purpose of making the database publicly accessible.
Architecture and centralised services. The architecture of a layer-2 system is akin to a centralised service (like Coinbase). Users deposit coins into a bridge contract on the layer-1, the deposit is reflected in the off-chain database, and the majority of transactions are processed against the off-chain database. This approach has helped cryptocurrencies scale for the past 12 years, as most users interact with centralised services and use the underlying layer-1 system as an interoperability solution to move funds from one service to another. Historically, it is up to an operator (like Coinbase) to protect the off-chain database and decide whether a withdrawal can be processed by the bridge contract.
Evolution of trust. Over the years, we have witnessed the bridge contract change its trust assumptions on how it is convinced the off-chain database’s integrity is intact. It has evolved from a single authority, to a multi-authority scheme, to an external blockchain’s consensus protocol. In all cases, the bridge contract must blindly trust the judgement of a set of parties before releasing assets back to the user. This has led to billions of dollars in theft, as it is difficult to take a set of human processes for securing billions of dollars and replicate it across hundreds of bridges. The goal for a layer-2 system is to remove the need to trust the intermediaries altogether and allow the bridge to independently verify proposed updates to the database.
Accessibility of the database. Only the bridge contract can decide what is the one true database and release assets back to the user. The desire to make the database publicly accessible (i.e., the data availability problem) is to guarantee liveness of the layer-2 system. It assumes one honest party will emerge who can become a proposer, take a list of pending transactions, and propose an update to the database. It is not necessary to create a significantly large mesh network of verifiers to protect the database* or to rely on an economic majority to decide which database should have external real-world value.
As such, the mental model for a layer-2 system is about the bridge contract and supporting its endeavour to protect its held assets. It has the sole authority to decide which update to the database is accepted, regardless of what the majority of participants believe. At the same time, a network of participants is still desirable to ensure liveness of the layer-2 system and guarantee that updates are continuously proposed to the smart contract. However, it is not about relying on a global mesh network to protect the database’s integrity (safety property).
*There is a caveat for optimistic rollups, as they assume one honest party will assist the bridge contract with verifying an update to the off-chain database, but ultimately what matters is the final decision by the bridge contract.
The architecture and goal for both systems are different:
Both systems have fundamentally different trust assumptions. A layer-1 system has to rely upon an honest majority to protect the database’s integrity and an economic majority to give real-world value to assets recorded by the database. Whereas in a layer-2 system, there is no need for majority agreement or for an external valuation of the assets. It can already assume a layer-1 system with public consensus exists, and the only focus is to protect the assets held by the smart contract. As such, it can rely on one honest party to emerge and ensure the system continues to make progress.
In my humble opinion, this is why comparing an L1 to an L2 is like comparing apples and oranges. Both systems have different trust assumptions, agent involvement, and ultimately system architectures. The only reason our community attempts to make comparisons is that layer-2 systems emerged due to the scalability bottlenecks of layer-1 systems. I have a soft goal of changing this narrative, as layer-2 systems should be viewed as an evolution of bridges. They should be compared with custodial services like Coinbase (who protect >10% of all cryptoassets), as both are responsible for protecting an off-chain database and a basket of assets.
To conclude, I hope this article has helped you build a better mental model of the system architectures and trust assumptions for both L1 and L2 systems. It is my hope that layer-2 protocols prevail in the long term and demonstrate their superiority over custodial solutions. Not because users care about the system's security, but because operators can provide exactly the same service without taking on the risk of protecting billions of dollars. As such, custody (and trust) becomes an unnecessary and hindering liability.
Thanks to Simon Brown for comments on the article.