Forget about "optimizing"
December 10th, 2022

Is Solidity optimization a waste of time?

(or: surprising facts about the compiler's optimizer)

Imagine you have a packed struct like this in storage:

Storage Packed Struct
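(The exact struct isn't important - think of something along these lines, where the field names and types are just for illustration and everything packs into a single 32-byte slot:)

```solidity
// Illustrative only - any struct whose fields pack into one storage slot will do.
struct Packed {
    uint128 a; // 16 bytes
    uint64 b;  //  8 bytes
    uint64 c;  //  8 bytes -> 32 bytes total, one slot
}
```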

Can you guess which code will be cheaper to run:

  • the one that’s reading every value from storage several times? (A)

  • the one reading from a storage reference? (B)

  • or maybe the one loading the whole struct into memory first? (C)

Which code will be cheaper?
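If you want to play along at home, a roughly equivalent sketch of the three variants looks like this (contract and function names, and the exact expression being computed, are illustrative - so your absolute numbers may differ from the ones quoted below):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;

contract PackedReads {
    struct Packed {
        uint128 a;
        uint64 b;
        uint64 c; // all three fields share a single storage slot
    }

    Packed internal packed;

    constructor() {
        packed = Packed({a: 1, b: 2, c: 3});
    }

    // (A) Read every value directly from storage, some of them more than once.
    function readDirectly() external view returns (uint256) {
        return uint256(packed.a) * packed.c + uint256(packed.b) * packed.c;
    }

    // (B) Read the same values through a storage reference.
    function readFromStorageRef() external view returns (uint256) {
        Packed storage p = packed;
        return uint256(p.a) * p.c + uint256(p.b) * p.c;
    }

    // (C) Copy the whole struct into memory first, then read from memory.
    function readFromMemoryCopy() external view returns (uint256) {
        Packed memory p = packed;
        return uint256(p.a) * p.c + uint256(p.b) * p.c;
    }
}
```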

Wrong. Whatever you guessed - you’re not even close!

As Solidity developers, we're constantly on the lookout for ways to optimize our code. We read articles, follow advice from experts, and employ various techniques to ensure that our smart contracts are as efficient as possible. But, as it turns out, many of these optimization techniques may be completely ineffective.

The Solidity compiler's optimizer is incredibly powerful, and often does a better job of optimizing code than we could do ourselves. In fact, our attempts at optimization may actually make things worse, as the optimizer will often rewrite our code in ways that we can't predict (because it’s also dumb lol :)).

So the real answer to the question is - another question: What are the project’s optimizer settings?

Because the gas usage of these three functions will be completely different depending on whether:

  • the optimizer is off (rare - can skip)

  • the optimizer is on with default settings

  • we use the new trendy via-IR pipeline (example settings for all three modes are sketched below)
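Assuming a Foundry setup (Hardhat and raw solc have equivalent knobs: optimizer.enabled, optimizer.runs and viaIR), the three modes map to profiles roughly like this:

```toml
# foundry.toml - three profiles, selected with FOUNDRY_PROFILE=<name>
[profile.default]       # optimizer on, default-ish settings
optimizer = true
optimizer_runs = 200

[profile.no-optimizer]  # optimizer off
optimizer = false

[profile.via-ir]        # the new trendy pipeline
optimizer = true
optimizer_runs = 200
via_ir = true
```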

The easiest and most predictable case is with the optimizer off: the resulting bytecode is large and bloated and contains many SLOAD operations (like, loads of them) - surprisingly, even for the memory variant.

Optimizer off

Here the results at least match the common wisdom: yes, reading the struct values one by one directly from storage is more expensive (+111 gas) than loading the whole struct into memory and reading from there. And storage refs are even more expensive (+13 gas) than direct reads.

SLOAD usage without optimizer:

  • Read Directly: 4 SLOADs

  • Read From Storage Ref: 4 SLOADs

  • Read From Memory Copy: 3 SLOADs

You see?

Already everything you’ve been told is a lie: even without the optimizer, loading a packed struct into memory takes 3 SLOADs! And the direct read (in this case) takes 4 - because we read values 4 times. So when do we get the promised case of 3 SLOADs for storage vs 1 SLOAD for memory?

Let’s enable the optimizer!

With default settings:

Optimizer enabled... WTF?

Wait… But I thought… I was told…

What the hell just happened here?

Why is Direct read from storage cheaper than caching the struct into memory?

Cannot trust anyone anymore...

So what happens here is: with the optimizer turned on, you will only have two SLOAD operations for the Direct and StorageRef methods, and one SLOAD for the memory method.

But, surprisingly, the memory method is still the most expensive. Why? Because while one additional SLOAD on a warm slot costs 100 gas, the operations needed to copy the struct into memory and read it back cost more than 100 gas - whereas the Direct and StorageRef versions just work on the stack, which is cheaper than memory. Hence the extra cost.

Overall, enabling the optimizer here saved us 1000 gas, but the behaviour was so weird and unexpected that I don’t know what I’m doing with my code anymore…

So where is the promised 3 SLOADs for storage vs 1 SLOAD for memory, with memory being cheaper? Maybe via-IR will give us that?

VIA-IR

Let’s try the new unexplored tech and enable via_ir:

I don't wanna live on this planet anymore...

How did memory become almost 200 gas more expensive than reading from storage??

Let’s look at the IR code:

Direct on the left VS Memory on the right

DirectRead and ReadFromStorageRef compiled to the exact same Yul code, so I only included one to save screenshot space.

So if you compile via IR, all three methods use only one SLOAD! And yet, the memory method is still the most expensive (in fact, a whole lot more expensive - by 200 gas) due to 3 extra MSTORE and 4 extra MLOAD opcodes used to initialize the struct in memory, instead of doing everything on the stack like the smart DirectRead method does.

So where is the holy grail of 3 SLOADs vs 1 SLOAD?

There is none.

No compilation options give the “expected” optimizooor behavior.

That’s just a lie.

It will not happen in real life.

And now you’ll have to live with this knowledge.

Conclusion

So, what can we conclude from this? It's time to reconsider our approach to optimizing Solidity code. We can't rely on our own techniques and tricks, as the optimizer will often undo our efforts. Instead, sometimes we just need to trust the compiler and focus on testing our code’s gas usage to see the real-world impact of our optimizations.

TLDR?

Test for gas usage after every “optimization” you introduce.

And don’t introduce optimizations too early - first get your code to work and make it beautiful - and then strive for lower gas.
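A minimal way to do that - assuming Foundry, and assuming the PackedReads sketch from above lives in src/PackedReads.sol (the path is illustrative) - is to put each variant behind its own test and compare `forge test --gas-report` (or `forge snapshot`) runs across the optimizer profiles:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;

import {Test} from "forge-std/Test.sol";
import {PackedReads} from "../src/PackedReads.sol"; // the sketch from earlier

contract PackedReadsGasTest is Test {
    PackedReads internal target;

    function setUp() public {
        target = new PackedReads();
    }

    // One test per variant so the gas report shows them side by side.
    // Re-run under each optimizer profile and compare the numbers.
    function testReadDirectly() public view {
        target.readDirectly();
    }

    function testReadFromStorageRef() public view {
        target.readFromStorageRef();
    }

    function testReadFromMemoryCopy() public view {
        target.readFromMemoryCopy();
    }
}
```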

P.S.

But, you may be wondering, is it even worth trying to optimize Solidity code at all?

The answer is yes - but only if we approach it in the right way. Instead of trying to outsmart the compiler, we need to work with it. This means using the via-IR pipeline whenever possible, and being willing to let go of our preconceived notions about what makes for efficient code. We’re clearly moving towards the C world, where the compiler can already outperform any human effort at optimizing routines (unless you write really dumb code).

So, the next time you're tempted to use one of those clever optimization techniques you read about online, stop and think.

Will it actually make a difference, or will the optimizer just undo your efforts?

The only way to know for sure is to test your code and see the results for yourself. And, who knows, you may be surprised at what you find.

If you like the stuff I write - subscribe, collect and spread the word!
