In the dynamic and rapidly evolving world of decentralized finance (DeFi), Proof of Liquidity (PoL) stands out as an innovative concept poised to redefine protocol stability and sustainability. While many have explored PoL, none have delved into it from a developer's perspective. This article aims to bridge that gap by practically designing and building Proof of Liquidity from the ground up.
Now, let's dive in.
To build digital currencies, two problems need to be solved:
Can I trust the money is authentic and not counterfeit?
Can I be sure that no one else can claim that this money belongs to them and not me? (aka the “double-spend” problem).
One of the notable advantages of paper money is that it addresses the double-spend issue easily because the same paper note cannot be in two places at once. When transmitted digitally, counterfeiting and double-spend issues are handled by clearing all electronic transactions through central authorities that have a global view of the currency in circulation.
Earlier researchers tried to build digital currencies using cryptographic digital signatures to enable a user to sign a digital asset or transaction, proving ownership to address the double-spend issue.
However, those early digital currencies were centralized and, as a result, were easy targets for governments and hackers. Early digital currencies used a central clearinghouse to settle all transactions at regular intervals, much like a traditional banking system.
The key innovation of Bitcoin was to use a distributed computation system (a “Proof-of-Work” algorithm) to conduct a global “election” roughly every 10 minutes. Miners must perform hard computational work to solve a puzzle, while checking a solution remains cheap for any verifier. This asymmetry allows a decentralized peer-to-peer network to arrive at consensus.
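A minimal sketch of a PoW puzzle in Python (illustrative only; Bitcoin's real difficulty targets and block format are more involved):

```python
import hashlib

def mine(block_data: str, difficulty: int) -> int:
    """Search for a nonce whose hash starts with `difficulty` zero hex digits.
    This is the hard part: on average 16**difficulty attempts are needed."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

def verify(block_data: str, nonce: int, difficulty: int) -> bool:
    """Verification is a single hash check: cheap for anyone."""
    digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)

nonce = mine("block 42", difficulty=4)   # hard to find...
assert verify("block 42", nonce, 4)      # ...easy to check
```

The gap between the cost of `mine` and the cost of `verify` is exactly what lets thousands of untrusting peers agree on one history.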
In addition, Bitcoin is built on the “UTXO” model (each unspent output becomes an input to a later transaction) and only allows a basic, non-Turing-complete script. It models only simple transaction behaviour and thus can’t express complex business logic. Think of this basic model as ancient times, when people lived in forests and simply bartered food with each other (basic transaction needs). Also, the miners’ hard work in solving the puzzle keeps the chain’s TPS (transactions per second) low.
To build complex decentralized applications, the language we use to write those applications needs to be Turing complete. The founder of Ethereum came up with the EVM (Ethereum Virtual Machine) with a set of predefined opcodes. This change enables developers to build dApps with complex logic, leading to the development of dApps like DEXs and lending platforms today. The designer of Ethereum uses the account-based model rather than UTXO. This design enables the smart contracts that developers write to interact with other on-chain entities. (What a great innovation!)
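To make the contrast concrete, here is a toy sketch of the two state models (all names and structures are invented for illustration, not real client code):

```python
# --- UTXO model: a transaction destroys unspent outputs and creates new ones ---
utxos = {"tx0:0": ("alice", 10)}        # txid:index -> (owner, amount)

def spend_utxo(utxo_id, new_owner, new_txid):
    if utxo_id not in utxos:
        raise ValueError("unknown or already-spent output (double spend)")
    owner, amount = utxos.pop(utxo_id)  # the input is consumed forever
    utxos[f"{new_txid}:0"] = (new_owner, amount)

spend_utxo("tx0:0", "bob", "tx1")
# spend_utxo("tx0:0", "carol", "tx2")  # would raise: output already spent

# --- Account model: a transaction mutates balances in a global state ---
balances = {"alice": 10, "bob": 0}

def transfer(sender, recipient, amount):
    if balances[sender] < amount:
        raise ValueError("insufficient balance")
    balances[sender] -= amount          # state is updated in place,
    balances[recipient] += amount       # which lets contracts hold state too

transfer("alice", "bob", 4)
```

Because the account model keeps mutable state per address, a smart contract can itself be an account that other entities read and write, which is what makes complex dApp logic possible.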
Originally, Ethereum adopted the Proof of Work (PoW) consensus algorithm. As interactions between users and dApps multiplied, the amount of on-chain data surged, and Proof of Work no longer suited the need. The Ethereum Foundation decided to upgrade to Proof of Stake (PoS).
So, is Ethereum 100% Proof of Stake? Not exactly. At its core, Ethereum uses PoS, but it specifically employs a consensus algorithm called Gasper, a combination of Casper the Friendly Finality Gadget (Casper-FFG) and the LMD-GHOST fork-choice algorithm. By contrast, Nakamoto consensus follows the Longest Chain Rule, selecting the longest chain when a fork occurs.
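The Longest Chain Rule itself fits in a few lines (a toy model; real Bitcoin nodes compare total accumulated work, which usually, but not always, coincides with block count):

```python
# Each competing fork tip is represented simply by its chain height.
def choose_fork(tips):
    """Pick the tip of the chain with the most blocks behind it."""
    return max(tips, key=tips.get)

tips = {"fork_a": 100, "fork_b": 102}   # fork_b is two blocks longer
assert choose_fork(tips) == "fork_b"
```

LMD-GHOST replaces this height comparison with a weighting by validator attestations, but the job is the same: deterministically pick one tip when the chain forks.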
Therefore, Ethereum's consensus is not a single monolithic PoS algorithm: it keeps a Nakamoto-style block-by-block fork choice (LMD-GHOST) and layers PoS finality (Casper-FFG) on top. As the amount of data and the number of people interacting with Ethereum continue to spike, the mainnet struggles to provide a great user experience.
To solve the “traffic jam” on the Ethereum mainnet, the Ethereum Foundation's answer is to build layer 2. So we see dozens of layer 2 solutions emerging: Optimistic Rollups, zk-Rollups, and more.
Layer 2 chains bundle up transactions and post the bundles back to the mainnet. As long as state can be synced, we assume we're good. Hmm, but is that right?
Those layer 2 chains face two problems:
Some of the technologies they deploy are much more centralized, and centralization brings security concerns.
They do not generate real traffic. Large grants are handed out to attract more dApps, but users are not buying it.
So, we understand there is still a need to build a layer 1 chain because of the problems discussed above. The problem lies in how we design the layer 1 chain that could attract real traffic.
Firstly, let’s use a simple analogy to model the current Proof of Stake (PoS) chain and think of the chain as a decentralized nation that we live in.
PoS Validators: all the PoS validators act like the central bank issuing the new currencies (tokens) used by our economy.
Dapps: think of Dapp builders as the producers or business owners that produce the service or goods for users.
Users: think of users as the consumers.
With the current PoS model, the producers (dApps) and consumers (users) who provide real value to the chain are not rewarded enough; most chain rewards go to the PoS validators. (OMG, it's as if the bank took most of the money from all the hard-working people, and the hard-working people are enslaved by the bank!) Without any rocket-science proof, just simple intuition, we know what's not working. So, what would an effective working model look like?
Before we dive directly into the framework design, let's use some rocket science to come up with the design directions.
The Fisher Equation, MV = PT, is central to the Quantity Theory of Money, where M is the Money Supply, V is the Velocity of circulation, P is the Price Level, and T is Transactions. Traditionally, it monitors currency inflation and deflation. On the blockchain, P should be viewed as asset value, so the focus isn't solely on inflation and deflation. The formula still applies, with MV reflecting macro-level activities and PT representing micro-level activities. In other words, we can use MV to measure the overall economy of scale of the chain. We can either increase M or V to achieve a greater economy of scale.
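A quick worked example of the identity (all numbers invented):

```python
# Fisher Equation: M * V = P * T
# Suppose a chain has 1,000,000 tokens (M) and, over some period,
# 50,000 transactions (T) at an average asset value of 40 per transaction (P).
M = 1_000_000
P = 40
T = 50_000

V = (P * T) / M          # velocity implied by the identity
print(V)                 # -> 2.0: each token changed hands twice on average

# Two levers for growing the economy of scale beyond M * V = 2,000,000:
scale_via_supply   = (2 * M) * V      # inflate supply (the high-FDV route)
scale_via_velocity = M * (2 * V)      # double circulation instead (PoL's route)
assert scale_via_supply == scale_via_velocity == 4_000_000
```

Both levers grow M·V by the same amount on paper; the next paragraph argues why inflating M backfires while growing V does not.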
While most chains use the high-FDV model to increase token supply (rewarding the validators), this causes high inflation through unfair distribution, leading to a lack of useful utility for the chain and, ultimately, a shrinking economy of scale. The new idea is to find a good way to increase the velocity of token circulation instead, achieving a greater economy of scale.
So, now our goal can be broken down as follows:
We need PoS to ensure the chain’s security.
We need to fairly distribute the rewards to all parties (producers-dApps, consumers-users, and PoS validators).
However, we face a dilemma. Imagine you are a validator: why would you agree to let other parties take pieces of the pie from you when you could easily find more rewarding opportunities on other L1 chains? This design could lead to no validators buying in…
Let's imagine we have already constructed the PoL incentivization tech stack that aims to ensure fair distribution of rewards among producers (dApps), consumers (users), and PoS validators, thereby incentivizing velocity (V) in the Fisher Equation (MV = PT). We ask ourselves now, in what way should we build the rest of the technology stack?
Here's a summary of what it takes to build a blockchain:
Network Module: Handles communication between nodes in the Ethereum network, including transaction and block propagation.
Consensus Module: Ensures all nodes agree on the blockchain's state.
Data Module: Contains the blockchain, a chain of blocks each holding a list of transactions.
Execution Module (Optional): Executes smart contracts via the Ethereum Virtual Machine (EVM), updating the blockchain's state accordingly.
Application Module (Optional): Where decentralized applications (dApps) run and interact with smart contracts.
Since we're building a Layer 1, and we don't want to reinvent the wheel, it's easier to adopt a modular approach than a monolithic one. We can leverage the Cosmos SDK and select the components we need instead of forking Ethereum.
PoL is built on top of PoS (Proof of Stake). But why choose PoS? Let's break it down:
With a goal in mind to incentivize the velocity of circulation, the chain's transactions per second (TPS) can't be too low. PoW (Proof of Work) requires heavy computational work, which slows transaction processing and isn't cost-effective for validators. This eliminates PoW from our options.
PoS and its variants are more efficient. One stand-out mechanism is Solana's PoH (Proof of History), which uses a sequence of computations to create a historical record of events; validators verify transactions by checking PoH timestamps. However, PoH increases centralization, which isn't ideal for PoL.
There are two key considerations in our decision-making:
Modularity: We need a flexible and adaptable consensus mechanism.
Stake Flexibility: Unlike Ethereum, which requires a fixed 32 ETH stake to become a validator, we prefer a model with more flexible staking requirements.
Tendermint BFT, which uses a DPoS (Delegated Proof of Stake, which we’ll cover in more detail later on) model, fits these criteria. Validators are elected based on the amount of staked tokens and delegated tokens, offering both modularity and flexibility.
Neither Ethereum nor Cosmos offers the perfect solution for our needs. Ethereum's network module is intertwined with its other modules, while Cosmos Hub has a limited number of validators and slower transaction handling on the way into the mempool.
Our approach:
Start with Tendermint Core.
Gradually modify the code to incorporate Ethereum-like features.
Here's a recap of our choices:
We opted for a modular approach, relying on the Cosmos tech stack.
We selected Tendermint BFT as it maximizes flexibility and compatibility with our incentivization module (the essence of PoL) that we constructed earlier.
We decided to rework the network module for modularity and optimal performance.
This approach mirrors BeaconKit's methodology.
As parallel EVM matures as an industry solution, PoL can adopt it to enhance performance. This option is less centralized than PoH and can significantly improve TPS during periods of heavy traffic.
Taking a look at the wrong way of building PoL gives us a better understanding of the right elements needed to build real Proof of Liquidity.
Let’s start exploring this now…
Without much thought, this is the initial model that jumps to mind. Let's say we are only issuing one token, $BERA, which is our governance token.
For each transaction a validator successfully validates:
60% of the $BERA reward goes to validators to reward them for successfully securing the chain
25% of the $BERA reward goes to dApps to reward them and keep them working to produce valuable services for the chain
15% of the $BERA reward goes to users to incentivize them to keep engaging with the chain
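This fixed, top-down split is trivial to write down (a sketch using the percentages above; everything else is illustrative):

```python
def split_block_reward(reward: float) -> dict[str, float]:
    """Top-down, fixed-ratio distribution of the $BERA reward."""
    return {
        "validators": reward * 0.60,  # securing the chain
        "dapps":      reward * 0.25,  # producing services
        "users":      reward * 0.15,  # engaging with the chain
    }

shares = split_block_reward(100.0)
print(shares)  # {'validators': 60.0, 'dapps': 25.0, 'users': 15.0}
```

Note that nothing in this code depends on any economic activity: the ratios are decreed centrally, which is exactly the problem discussed next.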
What does this model look like? It looks exactly like Soviet-type economic planning. Even if we add some democratic procedures through a DAO, what did I warn you about in the first thread?
Imagine you are a validator: why would you agree to let other parties take pieces of the pie from you when you could easily find more rewarding opportunities on other L1 chains? Validators are not incentivized to go along with this model.
This time, we're redesigning the conditions we need to keep all parties incentivized.
Let's list them all out and start to construct our structure:
Users (consumers) should be rewarded with some reward token when they have economic engagement with the chain’s dApps (producers), and the reward to users (consumers) should come directly from the dApp (producers) itself. Everything should be based on economic activities.
Users (consumers) can freely engage with dApps (producers) based on the potential “APY”.
There should be some reward tokens coming from validators to reward dApps (producers).
dApps (producers) can freely choose what economic relationship to have with validators based on this reward token's "APY" from each validator.
Validators should receive returns from dApp/user engagement activities that they believe exceed the value of the reward tokens they allocated. This incentivizes them to keep allocating reward tokens down the path.
Validators should compete with each other to earn more reward tokens.
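The conditions above describe a circular, market-driven flow rather than a top-down split. A minimal sketch, with all APYs and names invented:

```python
# Reward tokens flow validators -> dApps -> users, driven by competing "APYs".
validators = {"val_a": {"apy": 0.08}, "val_b": {"apy": 0.12}}
dapps      = {"dex":   {"apy": 0.05}, "lender": {"apy": 0.09}}

def best_offer(offers):
    """Each party freely picks the counterparty advertising the best APY."""
    return max(offers, key=lambda name: offers[name]["apy"])

# dApps pick the validator emitting the most reward tokens per unit staked...
chosen_validator = best_offer(validators)
# ...and users pick the dApp passing the most rewards through to them.
chosen_dapp = best_offer(dapps)

print(chosen_validator, chosen_dapp)  # val_b lender
```

No party's share is decreed; each share is bid for, which is what forces the reward token to keep circulating.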
So now we get rid of the Soviet-type economic planning and embrace the free market.
We uncover the nature of this reward token - it needs to circulate within the ecosystem. Circulation is the essential property of this reward token.
However, herein lies the problem... its properties contrast with the typical PoS design.
In PoS design, validators want to stake tokens to earn and accumulate more tokens, growing their power and centralization, not to circulate them.
On account of this, we must design a new token...
Having covered the challenges of building the PoL system, let's now address the issues and build it out effectively.
With the contrasting properties of a one-token system and the nature of PoS, we are forced to design a different, two-token system with the following properties:
This new token must have some relationship with the original $BERA token.
It must have potentially higher economic value than $BERA, so validators are willing to hold the token and circulate it instead of converting it back to $BERA.
Let's give this token a name: $BGT. So now we have our two-token system for our blockchain. Let’s take a top-down approach to complete the rest of the design.
To incentivize validators to assign a proportion of their $BGT to the other parties, they need to compete with each other. Validator competition is therefore a key part of our design.
Here is the design work that makes them compete with each other:
We introduce a Vault between the Validators and the dApps. In this Vault, dApps (producers) can allocate some of their governance tokens; each dApp can freely choose a vault, and the amount of governance tokens deposited across the various dApps determines each validator's $BGT weight. In this way, we make the Validators compete.
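Here is a toy sketch of how vault deposits could translate into $BGT weights (token names and amounts are invented; this is not Berachain's actual implementation):

```python
# Each validator runs a vault; dApps deposit governance tokens into the
# vault of the validator they choose. A validator's share of the $BGT
# emission is proportional to the deposits it attracted.
vault_deposits = {
    "validator_a": {"DEX_TOKEN": 1_000, "LEND_TOKEN": 500},
    "validator_b": {"DEX_TOKEN": 3_000},
}

def bgt_weights(deposits):
    """Weight each validator by its total vault deposits. A real system
    would value-weight different tokens; here they naively count 1:1."""
    totals = {v: sum(tokens.values()) for v, tokens in deposits.items()}
    grand_total = sum(totals.values())
    return {v: t / grand_total for v, t in totals.items()}

print(bgt_weights(vault_deposits))
# validator_a: 1500/4500 ~= 0.333, validator_b: 3000/4500 ~= 0.667
```

A validator that shares no $BGT attracts no deposits, gets a near-zero weight, and loses emissions: competition is built into the weighting itself.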
We adopt Cosmos’ DPoS (Delegated Proof of Stake), so token holders can delegate their stake to Validators.
The last question is how to get $BGT redistributed to users, because the design we have now “produces” some $BGT in the Vault. We can use the vault as one of the gateways.
Users deposit some assets into the Vault, and in return they receive $BGT, directly or indirectly. They need to show proof that they made the deposit into the vault.
What about the $BGT that protocols receive: how can we make that $BGT flow down to the users along this path?
Naturally, there is an easy solution: as a chain, we can build the DEX ourselves (fair play), so users can deposit a protocol's governance token into the DEX's LP pool and earn some $BGT.
Now connecting everything that we have built so far, here is what we have come up with:
Let’s review the properties of $BGT
This $BGT must have some relationship with the original $BERA token.
$BGT must have potentially higher economic value than $BERA, so validators are willing to hold $BGT and circulate it instead of converting it back to $BERA.
Let’s finish the last part of the design work. Here is the relationship we want to establish between $BGT and $BERA: $BGT can be burned into $BERA at a 1:1 ratio, but anyone holding $BERA can’t use $BERA to buy $BGT. This one-way design further accelerates the circulation of $BGT and reduces stake centralization.
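The one-way relationship can be sketched as follows (illustrative only, not real token contracts):

```python
class TwoTokenSystem:
    """$BGT burns 1:1 into $BERA, but $BERA can never buy $BGT."""

    def __init__(self, bera: float, bgt: float):
        self.bera, self.bgt = bera, bgt

    def burn_bgt(self, amount: float) -> None:
        if amount > self.bgt:
            raise ValueError("insufficient $BGT")
        self.bgt -= amount      # one-way door:
        self.bera += amount     # BGT -> BERA at 1:1...

    # ...and, deliberately, no bera -> bgt method exists.

wallet = TwoTokenSystem(bera=0.0, bgt=10.0)
wallet.burn_bgt(4.0)
print(wallet.bera, wallet.bgt)  # 4.0 6.0
```

The asymmetry is the whole point: $BGT holders always have a 1:1 exit, but the only way in is by earning $BGT through the vault and delegation flows described above.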
So, what’s the definition of $BGT?
$BGT is a derivative of $BERA: holding $BGT has more economic value, and since it can always be burned 1:1 back into $BERA, holding it carries essentially zero downside risk. Such a thing can only be done on a blockchain!
But let’s think from a real-world perspective about how to make $BGT “legitimate”. Imagine you are only allowed to issue one token; otherwise, you'd confuse the investors who invest in you. Here is how you can do it:
You issue X amount of $BERA and reserve ½X of it in a 100% “safety deposit box”.
The “safety deposit box” then issues you a receipt, stBERA, to demonstrate that you made the deposit.
You then show this stBERA receipt to a magic box, which burns the ½X $BERA held in the “safety deposit box” and mints ½X $BGT. After these steps, you have issued ½X of $BERA and ½X of its derivative, $BGT.
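The three steps can be sketched end to end (the safety box, stBERA receipt, and magic box are the metaphors above, coded here only for illustration):

```python
# Step-by-step sketch of deriving $BGT while only ever issuing one token.
X = 1_000.0                      # total $BERA you are allowed to issue

# 1. Issue X $BERA and lock half of it in the "safety deposit box".
circulating_bera = X / 2
safety_box_bera  = X / 2

# 2. The box issues an stBERA receipt proving the deposit.
st_bera_receipt = safety_box_bera

# 3. The "magic box" burns the boxed $BERA and mints $BGT against the receipt.
minted_bgt      = st_bera_receipt
safety_box_bera = 0.0            # boxed $BERA is burned, not double-counted

# Net result: ½X of $BERA in circulation plus ½X of its derivative, $BGT.
print(circulating_bera, minted_bgt)  # 500.0 500.0
```

The burn in step 3 is what keeps total claims on value equal to X: $BGT is minted only against $BERA that is destroyed, never on top of it.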
We've so far covered the rationale, challenges, and foundations of the 2-token system. Now we extend this to the tri-token model to complete our quest and structure the ideal PoL mechanisms.
A quote from the Ooga Booga Founder:
“Berachain's technical architecture operates under a "tri-token model". Some argue that the dubbing of this model is inaccurate, contending that the $HONEY stablecoin doesn't directly injunct itself with the Proof of Liquidity consensus model.”
Kevin made some good analysis of the use cases of $HONEY in this thread:
Kudos for the good work, Kevin.
I am going to present use cases for $HONEY and why they are important from an economic point of view.
Before I dive into this, let’s take a quick look at the US Dollar’s history.
The Bretton Woods system was developed as an international monetary exchange arrangement. It took the currencies of 44 countries and pegged them against the value of the US dollar.
The US dollar itself was pegged against the price of gold. This system was in use between 1945 and 1973.
By 1973, the US was short on gold: reserves could not cover the value of dollars in circulation, and economists' attempts to revitalize Bretton Woods failed.
That year, the Bretton Woods agreement collapsed and ceased to exist.
From the Bretton Woods system, we can draw an important comparison to a blockchain:
The number of on-chain assets that can exist on a chain depends on how many native tokens (BERA + BGT) are reserved.
If on-chain assets issued are much greater than the native token, our previously designed 2-token system will collapse.
Recall that the Fisher Equation, MV = PT, is central to the Quantity Theory of Money, where:
M is Money Supply
V is the Velocity of circulation
P is the Price Level, and
T is Transactions
We introduced Proof of Liquidity to make smart use of Fisher's formula, increasing V to grow the overall on-chain economic scale.
Now, let’s reinvent M. Here is the formula that we come up with when we introduce another token (ideally stablecoin) - let’s call this token $HONEY.
(M1 + M2) · V = [P1 · M1/(M1 + M2) + P2 · M2/(M1 + M2)] · T
where
M1 is the supply of $BERA + $BGT
M2 is the supply of the $HONEY
P1 is the price of the $BERA
P2 is the price of $HONEY.
When $HONEY can be converted 1:1 to and from other stablecoins, its price is 1.
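Plugging toy numbers into the extended identity (all values invented) shows how the supply-weighted price level and implied velocity fall out:

```python
# Extended Fisher identity with a stablecoin in the money supply:
# (M1 + M2) * V = [P1 * M1/(M1 + M2) + P2 * M2/(M1 + M2)] * T
M1 = 800_000     # supply of $BERA + $BGT
M2 = 200_000     # supply of $HONEY
P1 = 5.0         # price of $BERA (illustrative)
P2 = 1.0         # $HONEY pegged 1:1 to other stablecoins
T  = 100_000     # transactions over the period

# Supply-weighted average price level across both tokens.
avg_price = P1 * M1 / (M1 + M2) + P2 * M2 / (M1 + M2)
# Velocity implied by the identity for the combined supply.
V = (avg_price * T) / (M1 + M2)

print(round(avg_price, 6), round(V, 6))  # 4.2 0.42
```

Growing M2 (the $HONEY supply) expands M without inflating the governance tokens, but only if $HONEY is backed rather than printed out of thin air, which is the next point.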
However, we can’t just print stablecoins; doing so would generate tokens out of thin air.
$HONEY needs to have more use cases than USDT/USDC on Berachain so that people will want to use it. In other words, $HONEY needs to be more “valuable” than $USDT, $USDC, and other stablecoins.
So, let’s build some use cases for $HONEY. We build lending (Bend) and perps (Berps) engines that use $HONEY, giving it real value tied to the chain.
And there we have it, the architecture of the tri-token model is constructed.
I hope you enjoyed this beducational journey; our quest to build the mechanics of the PoL system is now complete.
Beras in control - Pot in control!
Ooga Booga 🐻⛓️