Two years ago I reported a bug in Balancer V2 that let attackers create infinite token balances by front-running ERC20 deployments. I was paid $250k for this critical bug, and the report was recently made public.
It all started with a simple curiosity: what happens when a `delegatecall` is made to an address that doesn't have any code? While I already knew the answer, I wanted to confirm it. If the call is placed using assembly, it returns success. However, Solidity adds a check that the address being `delegatecall`-ed has code; if not, the call reverts. This additional Solidity check can lead to incorrect assumptions and potential vulnerabilities.
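A minimal sketch of the difference (contract and function names here are hypothetical, invented for illustration):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

interface IHasCode {
    function value() external view returns (uint256);
}

contract DelegatecallDemo {
    function rawDelegatecall(address target) external returns (bool ok) {
        // Raw EVM semantics: DELEGATECALL to an address with no code
        // executes zero instructions and reports success.
        assembly {
            ok := delegatecall(gas(), target, 0, 0, 0, 0)
        }
    }

    function highLevelCall(address target) external view returns (uint256) {
        // Solidity inserts a code-existence check before trusting the
        // return data, so this reverts when `target` has no code.
        return IHasCode(target).value();
    }
}
```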
This reminded me of another case: when a `call` is made to an address without code, it also returns success. We use `call` frequently in helper functions that pull ERC20 tokens, mainly because some tokens don't return a boolean from `transferFrom`.
I started by checking the `safeTransferFrom` function in Solmate's ERC20 library. At the time, it didn't check whether the ERC20 address had code, meaning a protocol relying on a successful `safeTransferFrom` as confirmation of a fund transfer could be at risk.
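The vulnerable pattern looks roughly like this. This is a simplified sketch, not Solmate's actual code; the library name is invented:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Simplified sketch of a safeTransferFrom that trusts a low-level
// call. Note what is missing: no check that `token` has code.
library UncheckedTransferLib {
    function safeTransferFrom(address token, address from, address to, uint256 amount) internal {
        (bool ok, bytes memory data) = token.call(
            abi.encodeWithSignature("transferFrom(address,address,uint256)", from, to, amount)
        );
        // A call to a codeless address returns ok == true with empty
        // return data, so this require passes even for a token that
        // does not exist yet.
        require(ok && (data.length == 0 || abi.decode(data, (bool))), "TRANSFER_FROM_FAILED");
    }
}
```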
Next, I looked at other widely used libraries like Uniswap's `SafeTransferLib`. It also lacks this check, but Uniswap's core contracts don't rely on `SafeTransferLib` alone; they verify balances after transfers. This means there was no immediate exploitability in Uniswap.
OpenZeppelin's `SafeERC20`, however, does include a check that the ERC20 address has code before proceeding with transfers, making it safe from this specific issue.
I was searching for a permissionless protocol where anyone could create a market or pool for any asset. The attack scenario I envisioned:
1. An attacker creates a market or pool for an ERC20 that doesn't exist yet.
2. The deposit function calls `safeTransferFrom` on the non-existent ERC20, which succeeds.
3. The protocol records that the attacker has deposited an infinite amount.
4. Later, when the ERC20 is actually deployed and real users deposit funds, the attacker withdraws their "phantom" balance.
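The steps above boil down to a deposit function shaped roughly like this (a hypothetical vulnerable protocol, invented for illustration, not any real project's code):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Hypothetical vulnerable deposit path for the attack scenario.
contract PhantomDepositPool {
    // token => user => recorded deposit
    mapping(address => mapping(address => uint256)) public balances;

    function deposit(address token, uint256 amount) external {
        // If `token` has no code yet, this low-level call "succeeds"
        // without moving any funds...
        (bool ok, bytes memory data) = token.call(
            abi.encodeWithSignature(
                "transferFrom(address,address,uint256)", msg.sender, address(this), amount
            )
        );
        require(ok && (data.length == 0 || abi.decode(data, (bool))), "TRANSFER_FAILED");

        // ...yet the protocol still credits the caller with `amount`.
        balances[token][msg.sender] += amount;
    }
}
```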
After considering Uniswap and Euler, I turned my attention to Balancer. It seemed like a perfect victim because the Vault contract pulls all funds using `safeTransferFrom`. However, Balancer relied on OpenZeppelin's `SafeERC20`, which should have been safe.
But then I looked deeper. Balancer wasn't really using the official OpenZeppelin `SafeERC20`. They had modified it slightly to save gas, and in doing so they had removed the critical check that the ERC20 address has code. This meant the bug was present.
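For context, the removed guard amounts to something like the following (the essence of what OpenZeppelin's `SafeERC20` enforces; the modern `address.code` spelling is used here, while older compiler versions did the same via `extcodesize` in `Address.isContract`):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.1;

library SafeCallCheck {
    // Refuse to trust a "successful" low-level call if the target
    // has no deployed code.
    function assertHasCode(address token) internal view {
        require(token.code.length > 0, "call to non-contract");
    }
}
```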
One challenge was determining where an ERC20 would be deployed before its actual deployment. While some ERC20s are deployed deterministically (e.g., Uniswap V2 pairs, cross-chain token deployments), these make up less than 1% of tokens. However, a more viable method emerged: frontrunning token deployment transactions. If an ERC20’s deployment transaction is in the mempool, an attacker can predict its address and exploit the bug before deployment.
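For a plain `CREATE` deployment, the resulting address depends only on the deployer's address and nonce, so it can be computed as soon as the deployment transaction is seen in the mempool. A sketch, valid only for nonces 1 through 127 where the RLP encoding of the pair is a fixed 23 bytes (the prefix changes for larger nonces):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Predict the address of a contract deployed via CREATE, for
// deployer nonces in [1, 0x7f] only.
function predictCreateAddress(address deployer, uint8 nonce) pure returns (address) {
    require(nonce >= 1 && nonce <= 0x7f, "nonce out of sketch range");
    // RLP of [deployer, nonce]: 0xd6 (list of 22 payload bytes),
    // 0x94 (20-byte string), the deployer, then the nonce byte.
    bytes32 hash = keccak256(
        abi.encodePacked(bytes1(0xd6), bytes1(0x94), deployer, bytes1(nonce))
    );
    return address(uint160(uint256(hash)));
}
```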
With this realization, I went to sleep, knowing I might have something significant but not yet fully convinced.
The first attack path I explored:
1. The attacker creates a Balancer pool between a future ERC20 and WETH.
2. They add liquidity, which gets recorded, but only need to provide the WETH side, since the ERC20 doesn't exist yet.
3. Once the ERC20 is deployed and real users add liquidity, the attacker withdraws it all.
However, Balancer's pool setup process includes a scaling factor that depends on token decimals, which it fetches via the token's `decimals()` function. Since the token wasn't deployed yet, this call would fail, preventing pool creation.
After trial and error, I found a simpler way. In Balancer V2, the Vault holds funds for both pools and users. Users can deposit funds into the Vault and use them for swaps. Instead of an actual ERC20 transfer, the Vault can deduct a user’s internal balance and increase a pool’s internal balance.
The exploit:
1. The attacker inflates their internal balance using the bug.
2. Once real users start depositing liquidity into any pool holding that token, the attacker withdraws the funds.
I quickly created a proof-of-concept (PoC), and it worked. At the time, Balancer’s critical bug criteria required more than 1% of the total TVL to be at risk. Balancer had $800M TVL on Ethereum mainnet, meaning an exploit affecting $8M+ was critical.
In the worst case, if an attacker frontran every token deployed after Balancer’s Vault, they could have extracted $120M. While this was unlikely (as someone might detect an infinite balance in the Vault), there were high-volume Liquidity Bootstrapping Pools (LBPs) at the time, particularly via Fjord Foundry.
LBPs allow developers to introduce new tokens by pairing them with more liquid assets, attracting traders. A perfect target for this exploit.
For example:
1. The attacker frontruns the deployment of a new token, created right before the start of an LBP, and makes their Vault balance infinite.
2. An LBP usually lasts a set duration (e.g., Merit Circle's LBP lasted 14 days).
3. At the end of the LBP, users have swapped their ETH for the new token.
4. The attacker, with an infinite balance, sells the tokens for all the ETH. (For Merit Circle's LBP, that was worth $80M at the end.)
At the time, multiple high-profile LBPs were happening, and more were expected in the future.
Negotiating the reward was stressful. This wasn't a straightforward case where I could simply point to how much was at risk at that moment, which made it harder, and the negotiations lasted for weeks. Ultimately, Balancer classified it as a critical bug and paid me $250K, even though the minimum payout for critical bugs at the time was 250 ETH (about $400K).
This experience reinforced a crucial lesson: seemingly minor changes (like gas optimizations) can introduce catastrophic vulnerabilities. It also highlighted the importance of verifying security assumptions across all dependencies, even those that appear “safe” due to well-known libraries.
This Balancer bug was one of the most impactful discoveries in my journey so far, and it’s a reminder that curiosity—no matter how simple—can lead to uncovering million-dollar vulnerabilities. Here’s the disclosure post from Balancer: