Deploying a Nouns fork is now 4.20 times cheaper
July 29th, 2022

→ Nouns DAO is an NFT-based DAO on Ethereum, auctioning one noun a day, forever. For more info go here and here.

Deploying Nouns forks used to be expensive

One of the key features of Nouns is that their art is stored completely on-chain. Since storing data on Ethereum is expensive, the Nouns protocol uses a run-length encoding (RLE) compression algorithm to compactly store all the pixel information required to render Nouns. You can read how it works here (a bit outdated, but still helpful).

Even after encoding the art, it cost 67.3M gas to deploy the descriptor contract and add the art data to it. That is ~3.7 ETH at the gas prices in August 2021 (55 gwei).
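As a quick sanity check on these numbers, deployment cost in ETH is just gas used times gas price (a back-of-the-envelope sketch; gas figures are the ones quoted above):

```python
def deploy_cost_eth(gas_used: int, gas_price_gwei: float) -> float:
    """Convert a gas amount and a gas price in gwei to an ETH cost (1 gwei = 1e-9 ETH)."""
    return gas_used * gas_price_gwei * 1e-9

print(round(deploy_cost_eth(67_300_000, 55), 2))  # old descriptor: ~3.7 ETH
print(round(deploy_cost_eth(15_900_000, 55), 2))  # new descriptor: ~0.87 ETH
```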

This cost can be a source of friction for projects that want to deploy forks of Nouns. Nouns DAO wants to see Nouns proliferate and be used as much as possible, so reducing this cost is a great way to reduce friction.

Nouns DAO has recently upgraded to a new version of its descriptor contract, the contract in charge of the art. This contract cost 15.9M gas to deploy, including storing all the art on-chain.

In this post we will walk you through the recent changes in the way the Nouns art is stored on-chain, and how we achieved a reduction from ~67M gas to ~16M gas needed to deploy it.

Methods used to reduce storage cost

There were 3 methods we used to improve upon the original Nouns protocol:

  1. Multiline RLE: improve the RLE encoding by supporting multiline runs
  2. SSTORE2: store data using CREATE instead of SSTORE to reduce gas cost
  3. DEFLATE: use general purpose compression to reduce the size of the data

1. Multiline RLE

RLE works by encoding each consecutive “run” of pixels with the same color into a tuple of (color, length). For example, the array of pixels: R R R R R R B B B (R is red, B is blue) would be encoded as: (R, 6) (B, 3).
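A minimal RLE encoder (a sketch for illustration, not the protocol's actual encoder, which packs runs into bytes) makes the example concrete:

```python
def rle_encode(pixels):
    """Run-length encode a sequence into (value, run_length) tuples."""
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            # Same color as the current run: extend it
            runs[-1] = (p, runs[-1][1] + 1)
        else:
            # Color changed: start a new run
            runs.append((p, 1))
    return runs

print(rle_encode("RRRRRRBBB"))  # [('R', 6), ('B', 3)]
```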

The initial RLE algorithm used by Nouns encoded each row of pixels separately. For example, for a 4x2 image with these pixels (one row per line):

R R B B
B B B B

the encoding would be: (R, 2) (B, 2) (B, 4).

This was improved by encoding this 2-D array in its flattened form:

R R B B B B B B would be encoded as (R, 2) (B, 6).
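The difference between the two encoders can be sketched side by side (using the 4x2 example image reconstructed from the runs shown above):

```python
def rle_encode(pixels):
    """Run-length encode a sequence into (value, run_length) tuples."""
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            runs[-1] = (p, runs[-1][1] + 1)
        else:
            runs.append((p, 1))
    return runs

rows = ["RRBB", "BBBB"]  # the 4x2 example image, one string per row

# Original encoder: one RLE stream per row, so runs can't cross row boundaries
per_row = [run for row in rows for run in rle_encode(row)]
# Multiline encoder: flatten first, so a single run may span several rows
multiline = rle_encode("".join(rows))

print(per_row)    # [('R', 2), ('B', 2), ('B', 4)]
print(multiline)  # [('R', 2), ('B', 6)]
```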

This improvement reduced the amount of image data that needed to be stored from ~60KB to ~52KB, and the gas required to deploy from ~67M to ~60M (a 10% reduction).

The updated encoder can be found here and the decoder here.

2. SSTORE2
SSTORE2 is a solidity library to “Read and write to persistent storage at a fraction of the cost”.

Instead of using the SSTORE op-code to store data in contract storage slots, it uses the CREATE op-code, which is usually used to deploy contract code, to store arbitrary data. You can see a comparison of the gas cost of using SSTORE vs SSTORE2 in the library’s README.
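A rough cost model shows why this helps. The constants below are approximate post-Berlin figures (a sketch, not measured numbers; the SSTORE2 README has actual benchmarks): setting a fresh storage slot costs ~20,000 gas plus a ~2,100 gas cold access, while code deposited via CREATE costs 200 gas per byte plus a fixed creation overhead.

```python
def sstore_gas(n_bytes: int) -> int:
    """Approximate cost of writing n_bytes to fresh storage slots via SSTORE."""
    slots = (n_bytes + 31) // 32  # one 32-byte word per slot
    return slots * (20_000 + 2_100)

def sstore2_gas(n_bytes: int) -> int:
    """Approximate cost of storing n_bytes as contract code via CREATE:
    ~32,000 gas fixed overhead plus 200 gas per byte of code deposit."""
    return 32_000 + 200 * n_bytes

for size in (32, 1_000, 52_000):
    print(size, sstore_gas(size), sstore2_gas(size))
```

Note that for a single 32-byte word SSTORE2 is actually more expensive (the CREATE overhead dominates), which is why batching large chunks matters.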

SSTORE2 becomes cheaper when writing bigger chunks of data, so instead of writing each image individually, we decided to batch all the images of a trait together.

When batching the images together, we need to know where one image ends and the next one starts. For convenience we used ABI encoding to encode the data as a bytes[], which can be easily decoded in solidity using abi.decode. It’s very likely that there are more efficient ways to encode the array of images, but we opted for the convenience of ABI encoding.
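The layout Solidity's ABI uses for a `bytes[]` is what makes the batching self-describing; a minimal Python sketch of that layout (per the Solidity ABI spec, not the contract's actual code) shows where the per-image offsets live:

```python
def abi_encode_bytes_array(items: list[bytes]) -> bytes:
    """Encode a list of byte strings the way Solidity ABI-encodes a `bytes[]`:
    a length word, then one 32-byte offset per element (relative to the start
    of the offset table), then each element as a length word plus its data
    right-padded to a 32-byte boundary. A top-level abi.encode call would
    prefix one more offset word (0x20) pointing at this blob."""
    head, tail = [], b""
    offset = 32 * len(items)  # element data starts right after the offset table
    for item in items:
        head.append(offset.to_bytes(32, "big"))
        padded = item.ljust((len(item) + 31) // 32 * 32, b"\x00")
        tail += len(item).to_bytes(32, "big") + padded
        offset += 32 + len(padded)
    return len(items).to_bytes(32, "big") + b"".join(head) + tail

out = abi_encode_bytes_array([b"\x01\x02", b"\xff" * 33])
print(len(out))  # 256 bytes: 1 length word, 2 offsets, 2 length-prefixed padded elements
```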

This improvement reduced the gas cost to deploy the art from ~60M to ~28M (a 53% reduction).

You can see the usage of SSTORE2 in the NounsArt contract.

3. DEFLATE
As part of looking for ways to compress the data more, we were searching for general purpose compression algorithms implemented in solidity.

Thankfully we found the inflate-sol library by @adlerjohn, which implements a decompression algorithm for DEFLATE, the common compression algorithm used in zlib/gzip.

Our plan was to compress the batched data as described in (2) and write it using SSTORE2. When accessing the data, it would be read using SSTORE2, decompressed using inflate-sol, and finally decoded using abi.decode.
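The compression side of this pipeline can be sketched off-chain with Python's zlib, which can emit the headerless raw DEFLATE stream that a puff-style inflater like inflate-sol expects (that it consumes raw DEFLATE is an assumption here; the batched payload below is made-up placeholder data):

```python
import zlib

# Hypothetical batched trait data (in production this would be the ABI-encoded bytes[])
batched = b"\x00" * 400 + bytes(range(256)) * 4

# wbits=-15 produces a raw DEFLATE stream with no zlib header or checksum
co = zlib.compressobj(level=9, wbits=-15)
compressed = co.compress(batched) + co.flush()

# On-chain, the contract would SSTORE2-read `compressed`, inflate it, then
# abi.decode the result; off-chain we can verify the round trip with zlib:
restored = zlib.decompress(compressed, wbits=-15)
assert restored == batched
print(len(batched), "->", len(compressed), "bytes")
```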

The concern we had here is that inflate-sol is quite gas intensive when decompressing data. In theory that wouldn’t matter, since it would be used in read-only functions of the contracts and not as part of a transaction. In practice it is an issue, because Ethereum nodes often impose a gas limit on eth_call. Specifically, the nodes used by NFT marketplaces (e.g. OpenSea, LooksRare) have gas limits when calling tokenURI on the NFT contract.

These gas limits are not documented anywhere that we could find. In order to find them, we deployed an NFT contract where each consecutive NFT consumes 1M additional gas in its tokenURI method. In our experiments, we saw that NFT marketplaces and popular node providers (Infura, Alchemy) can handle at least ~400M gas.

With a few low-hanging gas optimizations, we were able to get the tokenURI method to about 110M gas which felt safe to use.

We deployed the inflate-sol library as a standalone contract, so anyone can use it without needing to redeploy it.

This improvement reduced the gas cost to deploy the art from ~28M to ~14M (a 50% reduction).

Hope you enjoyed reading. If you have any feedback or questions, drop by the Nouns discord at #54-tech-grants.
