Assessing Blockchain Bridges

Hello!

Note: Fair warning. This piece is somewhat long and technical. I try to maintain narratives in what I write, but given the nature of what I am writing about today - there is ample jargon. Do excuse it.

Also, follow the team at Socket.tech, who did most of the heavy lifting behind this piece. I have thoroughly enjoyed learning from them throughout the process.

Last time, I wrote about what bridges are and their role in the evolution of Web3. Unfortunately, a prominent bridge named Nomad got hacked for some $190 million in the days that followed. This is quite a common occurrence. The famous Ronin hack, which involved some $600 million, was also one involving a bridge. Bridges are the equivalent of banks that anyone, anywhere in the world, can break into with a laptop. And that’s why we need better tools to assess the reliability of bridges before we park more money there. Platforms like CoinMarketCap and DefiLlama were crucial in growing the adoption of tokens and DeFi primitives. There needs to be an equivalent for bridges as they evolve.

I have been in touch with the team from Socket for quite some time. For those not in the know - Socket is a bridge aggregator. When an individual wants to go from, say, USDC on Solana to USDT on Avalanche, applications like Socket make it possible to find the best place to make that transfer happen. The challenge is when somebody doing a transfer wants to restrict how the capital is transferred. Say they want to ensure a transfer happens within ten minutes and is routed specifically through bridges with certain security practices. A framework like the one designed by Socket makes it easier for founders integrating bridges to quantify the quality of a bridge and route capital based on specific requirements. 

Today’s piece will look at a methodology to understand how standalone users can assess if a bridge is reliable or not. I am sharing this early framework to assess bridges so other collaborators interested in critiquing the model can reach out and help us iterate on it. The model suggested below will be integrated into a platform querying live data. 

With that out of the way, let’s dig into what makes bridges reliable.

What Makes Great Bridges

A blockchain bridge is a financial service that runs at scale, powered by smart contracts. This attribute makes them somewhat similar to traditional fintech platforms like Paypal. Instead of humans enabling a transaction, logic and economic incentives drive these systems. This lets us draw parallels to the attributes paramount to making bridges great. It boils down to:

  • Security - How secure your parked assets are on a bridge
  • Connectivity - The number of networks a bridge is connected to
  • Extractable value - The possibility of flashbots or other intermediaries extracting a portion of the transaction
  • Performance - The economic model behind a bridge-related transaction 
  • Capability - The extent of assets supported by a bridge

Last we checked, there are close to 60 bridges supporting digital assets. We will likely see increasing amounts of specialisation. Some bridges will optimise for speed, while others will focus on the variety of assets they support. The framework developed by the team at Socket is reasonably broad, so some of your favourite bridges likely rank lower overall despite being among the best at a single feature. To make it easier to read, I have broken down each section’s parameters and given the max score an auditor can allocate in a tabular format. We have stuck to using quantitative frameworks as much as possible, but given the nascent nature of the industry, some aspects are qualitative. Let’s dig in.
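Before digging into each section, here is a minimal sketch of how such a rubric might be represented in code. The category names come from the list above, but the per-category scores and maxima below are illustrative assumptions for one hypothetical bridge, not Socket's actual weights.

```python
from dataclasses import dataclass

@dataclass
class Category:
    """One scoring category of the rubric, as assessed by an auditor."""
    name: str
    score: int      # score the auditor assigned
    max_score: int  # ceiling for this category

    def __post_init__(self):
        if not 0 <= self.score <= self.max_score:
            raise ValueError(f"{self.name}: score must be in [0, {self.max_score}]")

def total(categories):
    """Sum auditor scores across all categories for one bridge."""
    return sum(c.score for c in categories)

# Illustrative numbers only - the real per-category weights belong to the framework.
rubric = [
    Category("Security", 14, 20),
    Category("Performance", 9, 15),
    Category("Extractable value", 6, 10),
    Category("Connectivity", 8, 15),
    Category("Capability", 7, 10),
]

print(total(rubric))                      # this hypothetical bridge's score: 44
print(sum(c.max_score for c in rubric))   # hypothetical maximum: 70
```

The point of the structure is simply that each category is bounded, so a single strong feature cannot dominate the overall score.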

1. Security

We break down security into four key aspects. If I strip away the jargon - the degree of liveness assumption primarily checks how long a bridge has to dispute a transaction that could potentially be a hack. In the case of banks, there is no written law on how long they have to do AML/KYC on a transaction before it needs to be released. Smart contracts, on the contrary, need pre-defined parameters. 

Bridges with longer dispute times are ranked higher, as users know a capital transaction can be held back if validators on the network suspect something is off. A recent attack on Synapse was flagged by validators on the bridge, who eventually took the whole system offline. It helped the bridge save $8 million overnight.

Ronin’s $600 million+ hack was one of the largest in the industry. It involved breaking into a senior engineer’s computer with a fake job offer and replicating 5 of the 11 validators’ keys. The ideal bridge is one where validators cannot access user funds. The framework proposes penalising bridges where a single validator has access to tokens, while bridges whose validators hold no access to user funds score highest.

If a bridge does get hacked, the team can typically make users whole in one of two ways. One is through the bridge acquiring insurance through DeFi primitives such as Nexus Mutual, and the other is through issuing native tokens of the bridge to users in proportion to the amount of capital they had.

The challenge with the latter approach is that users may immediately sell the native tokens they received, creating a downward spiral where the bridge’s native asset trends to zero. The ideal bridge is one where pools of capital are kept aside - in a separate smart contract, funded through token incentives - to make users whole in the event of a hack. This would be somewhat similar to the insurance funds maintained by certain exchanges.

Lastly - under security, we observe the number of audits a bridge has had along with incentives for hackers to notify a bridge about the possibility of it being broken into. Audits on their own don’t mean much. That’s why we emphasise the need for multiple audits and bounties. 

Bounties offered on open platforms like Immunefi are effectively open calls by teams to audit what they have built. Allocating large sums of money to open bounties can attract some of the brightest minds to check smart contracts for potential bugs and report them. Wormhole’s payout of $10 million earlier this year is one of the largest bounty payouts ever made for reporting a potential bug. This is why, for the soundness-of-code section, we have taken a mix of the count of audits and capital allocated as a measure for scores.

2. Performance

With most bridges we observed, the cost of moving USDC between networks is either a fixed percentage (~1%) or free. The stable asset is routinely moved across chains for yield farming. The costs grow exponentially for asset transfers involving cross-chain exchanges where an automated market-maker is involved. What does that mean? Say you are transferring ETH on Ethereum to USDC on Optimism. The fees you pay increase exponentially with the size of the assets involved.

This is because the liquidity for the exchange is sourced from an AMM pool, where the cost of exchange increases exponentially. External factors such as the depth of the pool and how it is rebalanced feed into this. Hashflow, for instance, quotes prices directly from market-makers and is typically able to quote prices that are almost on par with exchanges for multi-million dollar asset exchanges.

We allocate a high score of 5 for pools that require no rebalancing and offer fixed costs, while penalising bridges with -1 each for not offering hop transactions and for high fees past a low barrier of $10k. An added factor to consider here is the time taken for bridging. We penalise bridges that take north of 1 hour for a transfer while giving 5 to the ones that bridge in under a minute. Finally, it is worth noting that some layer 1s like Ethereum may be at a disadvantage here due to longer confirmation times for blocks at times of high congestion.
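The scoring rules above can be sketched roughly as follows. The +5 and -1 values come from the text; the mid-tier speed score is my own assumption, since the text only pins down the under-a-minute and over-an-hour endpoints.

```python
def cost_score(fixed_cost_no_rebalancing: bool,
               offers_hop_transactions: bool,
               high_fees_above_10k: bool) -> int:
    """Cost component: 5 for fixed-cost, rebalancing-free pools,
    with the -1 penalties described in the text."""
    score = 5 if fixed_cost_no_rebalancing else 0
    if not offers_hop_transactions:
        score -= 1  # no hop transactions
    if high_fees_above_10k:
        score -= 1  # fees climb past a low $10k barrier
    return score

def speed_score(bridge_time_seconds: int) -> int:
    """Speed component: 5 for bridging under a minute,
    a penalty for taking north of an hour."""
    if bridge_time_seconds < 60:
        return 5
    if bridge_time_seconds <= 3600:
        return 2   # assumed mid-tier value - not specified in the text
    return -1      # penalty past the one-hour mark

print(cost_score(True, False, True))   # 3
print(speed_score(45))                 # 5
print(speed_score(2 * 3600))           # -1
```

A bridge's performance score would then be the sum of the two components; the exact tiering in between is where auditor judgment still comes in.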

3. Extractable Value

An added layer of cost for the end user comes through MEV extraction. Again, without going into the specifics - it is when an individual can front-run a transaction occurring on-chain to book a small profit. So far, some $180 million has been extracted as MEV revenue on Ethereum-based DEXes alone. One way we could have quantified this metric is through the amount of capital that has gone through MEV extraction on a bridge.

However, high MEV extraction from a bridge could simply mean it is a highly used platform. Therefore, a qualitative scale has been used instead, based on how hard it is to extract value from a bridge’s transaction. It is worth noting that bridges that interact with chains that have no MEV by default will rank higher here. Bridges building on chains with a high amount of MEV may choose to use protective measures, as CowSwap - a DEX aggregator on Ethereum - does today.

Given the extent of scrutiny Tornado Cash has come under, we believe bridges will be centre-points for sanctions in the future. Currently, sanctions have been applied at the address level. At some point, we will likely see entire networks, especially ones oriented towards privacy and shielding transactions, being blacklisted. It is hard to quantify censorship resistance on a spectrum - so scoring here would be relative, with a maximum of 2 points given to permissionless and censorship-resistant bridges.

The last aspect we cover here is of capital churn. In my last piece, I mentioned that it is likely that we will see an increasing number of blockchain bridges optimised for lower capital requirements. I define “capital churn” as the amount of capital flowing through a bridge over 30 days, divided by the total value locked in it. So, for example, certain bridges will have a billion dollars in TVL but enable only ~$100 million in transactions over a month. In this case ($100mil/$1bil), a churn of 0.1 indicates bad capital efficiency. 

Note: Given the number of chains involved, finding churn data for all bridges has been difficult. If you are analytical and want to build this using Covalent’s API - drop me a note. 

On the other hand, there are bridges - like Hyphen and Hashflow - that have been doing billions in bridging with a capital requirement of just ~$10 million. In this case, the churn is over 100, indicating that the system can put capital to complete use without leaving any of it idle. Again, though, the metric is raw: depending on how niche an asset is and the demand for it, bridges will often have idle assets by default.
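The churn metric itself is simple to compute. The sketch below reproduces the two scenarios from the text; the $2 billion volume figure in the second example is an illustrative stand-in for "billions", not a reported number.

```python
def capital_churn(volume_30d_usd: float, tvl_usd: float) -> float:
    """Capital churn = 30-day bridged volume / total value locked.
    Higher values mean the bridge puts its capital to more complete use."""
    if tvl_usd <= 0:
        raise ValueError("TVL must be positive")
    return volume_30d_usd / tvl_usd

# A bridge with $1bn TVL enabling ~$100m of transfers in a month:
print(capital_churn(100e6, 1e9))   # 0.1 -> poor capital efficiency

# A Hyphen/Hashflow-style bridge: ~$2bn moved on ~$10m of capital (assumed figures):
print(capital_churn(2e9, 10e6))    # 200.0 -> churn well over 100
```

In a live ranking, the 30-day volume would have to be queried per chain and summed, which is exactly the data-availability problem the note above describes.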

4. Connectivity

Connectivity looks at the permutations and combinations in which a bridge can interact with different networks. A domain is a layer or network in which an asset is moved. Some bridges have deep liquidity pools focused only on EVM-based chains (ETH, Avax), while others optimise for the breadth of chains. We rank native bridges (like the ones Polygon or Celo use) the lowest, as they are usually oriented towards inbound liquidity and limit user choices.

During the earliest stages of bridges, we used to see asset-specific transfers occurring at scale. Wrapped Bitcoin moving from Bitcoin to Ethereum was a good example. The next step involved support to and from L2 solutions like Optimism. The amount of capital moving between the likes of Solana, Avalanche and ETH-native L2s has strongly incentivised bridges to connect them.

We split the types and number of domains supported in the scoring system. Part of the reason for this is that supporting multiple domain types (eg: L2, L1, EVM etc.) does not imply they can communicate with one another. In many instances, bridges restrict the flow of assets depending on their pool rebalancing mechanisms. The amount of capital in a bridge's TVL determines how assets can flow. The restricting factor today is the effort needed to rebalance pools across EVM and layer types. The ideal bridge can instantaneously support the easy flow of assets across all the domain types it supports.

5. Capabilities

We end the scoring system with the types and number of assets supported. We emphasise ERC-20 support due to the high number of DeFi and consumer applications built on Ethereum today. However, the scoring threshold for the number of assets supported is kept at ten. In my opinion, that is an arbitrary, low number. For instance, automated market-makers like PancakeSwap already support tens of thousands of asset pairs. Bridges, in contrast, are still early in their evolutionary arc.

We see the need for bridges to support multi-chain NFTs through the likes of OpenSea. Today's largest NFT marketplace already supports NFTs on Polygon, Ethereum and Solana. What if users wanted to port assets between those chains? Or better still - we may shortly see cross-chain NFT lending occurring. This would involve querying an asset's price in its most liquid market (eg: Ethereum), trading it through Polygon and taking the loan on Solana. Products like Xp.network have long been building towards this vision. We do not penalise a lack of NFT support in the scoring system.

The asset flow mentioned above will require the ability of a bridge to interact with a smart contract on the recipient chain. We define this as a "contract call". Today, applications like DeFiSaver allow users to bridge to Optimism and take a loan on Aave in a single click. This makes it possible to create increasingly sophisticated primitives using the composability that historically allowed the DeFi ecosystem to grow into what it became. One instance of this playing out in the wild is Connext's integration with the Gelato network last year.

Putting It All Together

This framework, as it stands, is a theoretical approach to rating bridges. Its biggest flaw is that specific attributes are qualitative and require individuals with expertise to give a rating. Just like smart contract audits, the subjective opinions of individuals could be flawed. It also brings relative centralisation and incentive misalignment into the picture. That is why Socket.tech reached out to L2beat and me to make this framework a collaborative effort. There will likely be multiple iterations before we have something deemed "ideal".

I do not anticipate individuals using the framework to assess bridges. For the average person trying to move assets between Solana and Ethereum, doing a point-based assessment is futile. Instead, I anticipate it being used by stand-alone platforms like DeFi Llama or L2Beat. Providing quantitative information that ranks bridges could simultaneously help bridges figure out where they lack and direct users towards better service providers.

We tried the scoring methodology on ten bridges to get an estimate of how they rank. For this scoring, we have given all bridges a standard score of 3 for churn. This is disadvantageous to a few bridges that specialise in capital efficiency, but we had to do it due to a lack of readily available data across all the bridges.

Depending on who you ask, this chart is either a crime or a piece of work.
This table is for the ones that think the chart was a crime.

The hypothetical maximum score in our framework is 70. The highest in the batch of bridges we assessed scored 52. There is a long way to go. It is worth noting that the score itself does not quantify the quality of a bridge. Depending on the use case and need of the user, specific bridges may optimise for a different parameter. We do not want users to rank bridges based on the final scores, because the methodology is based on an “idealistic” framework. Each bridge optimises for a different factor - speed, TVL, efficiency, cost etc.

This is where aggregators come into play. Bungee.exchange, for instance, allows users to see various options for each transfer. Users can then pick which option suits them best. I have been discussing aggregators with a team building bridges for a while. The core idea is that aggregators may come out on top in the bridge wars, given their ability to mix and match the feature subsets of each bridge interfacing with them. We’ll dig deeper into that in the final piece of the bridge series, which should be live sometime next week.

Join us at Telegram if you enjoyed reading this piece. And make sure to leave a comment if you agree or disagree with how we have modelled this scoring system.

Joel


This piece was originally published on Decentralised.co. Subscribe below to be added to the mailing list there.

Also, collect a portion of this piece using the mint button below.

I may or may not set up a discord for those with parts of the piece. Who knows. For now, it is free to mint and has no use case.

