The “move fast and break things” playbook every engineer will hate

Common traps developers fall into that stop them from moving fast and breaking things. I’m going to invite a lot of shit on myself for this, but ok…

I spent my graduate school research studying software engineering practices, and I even have a published paper at an A-ranked conference. I was a strong advocate of everything I mention below, until startups happened to me.


Writing tests? Just stop doing it. 99% of us write exactly the wrong test cases. You’re probably writing poor test cases. And if you think you’re writing great test cases, there’s a 99% chance you’re lying to yourself.

There are two categories of mistakes one makes when writing test cases.

Unit test cases

If you’re even half a decent engineer, you’ll have configured your CI/CD to fail when test cases fail, so you can’t ship your code unless all the tests pass. But you’re probably writing experimental code, because we’re trying to move fast and break things, remember? No one wants to write more throwaway code, i.e. unit test cases. So people typically end up with one of the following options:

  1. Disable tests “temporarily”
  2. Comment out relevant test cases
  3. Hardcode test case outputs
  4. Write a poor test case that will pass

The most dangerous is #4. It’s the most common lie: telling yourself you’ve written a good test case when you’ve actually winged it.
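To make #4 concrete, here’s a minimal sketch (hypothetical function and test names, not from any real codebase) of a test that will pass no matter what, next to one that actually pins down behavior:

```python
# Hypothetical example: a discount calculator and two ways to "test" it.

def apply_discount(price: float, percent: float) -> float:
    """Apply a percentage discount to a price."""
    return round(price * (1 - percent / 100), 2)

# Option 4 in action: a test that passes even if the math is wrong.
def test_apply_discount_weak():
    result = apply_discount(100.0, 10)
    assert result is not None  # tells you nothing about correctness

# What a meaningful test looks like: it pins down actual values and edges.
def test_apply_discount_real():
    assert apply_discount(100.0, 10) == 90.0    # ordinary case
    assert apply_discount(100.0, 0) == 100.0    # no discount
    assert apply_discount(100.0, 100) == 0.0    # full discount

test_apply_discount_weak()
test_apply_discount_real()
print("all tests passed")
```

The weak test earns a green checkmark in CI while verifying nothing, which is exactly the lie described above.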

Writing a test case is a dedicated engineering effort. You can’t do it justice in the short window right after you’ve written some amazing code whose output you’re itching to see. It’s impossible to write a good test case when you’re rushing to get the tests passing so you can finally see your code run.


Imposing TDD is the worst thing a CTO can do to their company. TDD requires you to know the functionality, almost exactly, before the first line of code is written. Granted, you write better test cases this way, but the assumption that you’d have all the code flows figured out upfront is, in my experience, unlikely to hold.

The product, design, and business teams are changing super fast too, because they’re also trying to move fast and break things. So requirements change, and you have to keep updating the test cases. If the test cases keep changing, it defeats the entire purpose of TDD, which is: you know what the output should be, and you write all the code needed to get to that output in one go. TDD optimizes for making design decisions when the outputs are known and well defined. That is seldom the case in an environment that’s trying to move fast and break things.
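A hedged sketch of that churn (hypothetical spec and function names): a test written TDD-style before the code, which has to be thrown away the moment product changes the requirement.

```python
# Sprint 1 spec, pinned TDD-style before the code exists:
# shipping is a flat $5.
def test_shipping_flat():
    assert shipping_cost(order_total=20.0) == 5.0

def shipping_cost(order_total: float) -> float:
    return 5.0

# Sprint 2: product decides shipping is free over $50. The old test is
# now wrong, not merely incomplete, so it gets rewritten along with the
# code. That is the double work described above.
def test_shipping_free_over_50():
    assert shipping_cost_v2(order_total=60.0) == 0.0
    assert shipping_cost_v2(order_total=20.0) == 5.0

def shipping_cost_v2(order_total: float) -> float:
    return 0.0 if order_total > 50 else 5.0

test_shipping_flat()
test_shipping_free_over_50()
print("specs pinned")
```

Every requirement change invalidates the spec the tests encoded, so the “write the test once, then converge on it” payoff never arrives.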


At one point I refactored 100K lines of code; probably 200K over my whole career. Never, not once, have I come out of a refactoring exercise and said “wow, my code is just perfect”. I have definitely caught myself saying “I’ll do that in the next refactoring”.

Usually when an engineer says “this needs refactoring”, it’s probably true: there would be a better way to do it if we’d known, when we started writing the code, what we know now. But if you take on the refactoring without stopping product development, new assumptions will creep in. Many times, the assumptions you bake into the refactor make it even harder to incorporate the next round of product changes.

It’s almost always a net negative exercise to refactor the entire code base.


The rewrite is *refactoring’s* evil elder sibling.

An engineer who comes in and looks at the existing codebase usually says, “I can rewrite the entire thing in a week and it’ll be better”.

There are two problems with that. First, it never gets done in a week. Never. Prove me wrong. Never.

The second problem is the same as with refactoring. By the time you finish the rewrite, which usually takes several months rather than a week, the product requirements have changed. And the new assumptions the developer baked into the rewrite suddenly stop being relevant anyway. Net negative again.

The other common justification for a rewrite is the stack: “We’re using the wrong stack.” The argument is usually of the order of “Python doesn’t support async functions, JavaScript does, let’s move to JavaScript” (never mind that Python has had native async/await since 3.5). What’s missing from this conversation is that there are several other things JavaScript might not be able to provide. Either way you’re going to have to deal with some language’s shortcomings; by not changing the stack, you’re at least not wasting time rewriting code.
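For the record, the premise of that example argument doesn’t even hold: Python has shipped native coroutines since 3.5. A minimal sketch using only the standard library’s asyncio:

```python
import asyncio

async def fetch(name: str, delay: float) -> str:
    """Pretend to do I/O-bound work, yielding control while 'waiting'."""
    await asyncio.sleep(delay)
    return f"{name} done"

async def main() -> list:
    # Run both "requests" concurrently; total wall time is ~0.2s, not 0.3s.
    return await asyncio.gather(fetch("a", 0.1), fetch("b", 0.2))

results = asyncio.run(main())
print(results)  # ['a done', 'b done']
```

Which is the point: most mainstream stacks can do most things, and the real gaps rarely match the slogan used to justify the rewrite.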

Usually the language you use doesn’t matter. The tooling for most mainstream languages is pretty mature, and as long as you’re picking one of them, I don’t think there’s a problem at all.

PHP? Perfectly cool with that.


I’m a big fan of monoliths. I think they get shit done.

Modularization (and, still worse, microservices) is a product manager’s tool, not an engineer’s tool. It’s a way to split up work so that people stop fighting over “don’t mess with the beautiful code I’ve written”.

The truth is modularization just introduces more bugs. You’re trying to write code that can be used in multiple places. There are two problems with that.

  1. Many engineers go overboard with modularization and modularize code that’s actually used only once, taking on all the engineering overhead of putting the code where it “belongs” for no benefit.
  2. It introduces bugs. Now you have the same code used in three places, but one of those places needs a change. So you go change the shared code, and that will likely break something in another part of the codebase. Unit tests are supposed to catch this, but you wrote shitty test cases, remember?
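Here’s a hedged illustration of problem #2 (hypothetical helper and caller names): one shared helper, two call sites, and a change requested by only one of them.

```python
def slugify(title: str) -> str:
    """Shared helper: turn a title into a URL slug."""
    return title.lower().replace(" ", "-")

# Caller 1: blog post URLs. Bookmarked links depend on this exact output.
def post_url(title: str) -> str:
    return f"/posts/{slugify(title)}"

# Caller 2: export file names. This caller later needs dots stripped too,
# so someone edits slugify for it... and every existing post URL changes,
# silently breaking bookmarks. A copy-pasted one-liner in each caller
# would have kept that change local.
def export_filename(title: str) -> str:
    return f"{slugify(title)}.html"

print(post_url("Move Fast"))         # /posts/move-fast
print(export_filename("Move Fast"))  # move-fast.html
```

The coupling is invisible at the call sites; only good tests would catch the breakage, and we’ve already established how those usually go.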

I think it’s just better to copy-paste the damn code. IDEs are now advanced enough to keep monoliths readable.

The only exception is probably UI components that are used dozens of times across the code.

Don’t repeat yourself → Don’t repeat yourself dozens of times
