Back in 2014 I wrote a blog post listing three mistakes often made by folks who are new to test-driven development (TDD). The three mistakes I identified are:
Starting with error cases or null cases.
Writing tests for invented requirements.
Writing a dozen or more lines of code to get to GREEN.
It was a very long post, so I’ve taken the three parts and expanded each into its own article, also incorporating the comments I received in 2014. This is part 2, and will deal with writing tests for invented requirements…
Imagine you and I have to test-drive the design of an object that can count the number of occurrences of each word in a string. We’re doing TDD, but we have no code to test; we have nothing to hang our first test on, so we need to invent something, fast!
Luckily, our pre-test thinking tells us that our solution will decompose into certain pieces that do certain things, and so we begin by testing one of those and building upwards from there. For example, in the case of the word counter we may reason along the following lines:
YOU: “We know we’ll need to split the string into words”
ME: “So we’ll need a method that can do the splitting”
YOU: “Great. How shall we test it?”
ME: “The simplest thing would be to count the words after the split?”
And so we write this as our first test:
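A minimal sketch of that first test in Python (the `WordCounter` class name is invented for illustration; `countWords` is the method the article discusses below):

```python
class WordCounter:
    def countWords(self, text):
        # Split on whitespace and count the pieces -- the helper we
        # invented in the dialogue above, not something anyone asked for.
        return len(text.split())

def test_countWords_counts_the_words_after_the_split():
    # "The simplest thing would be to count the words after the split"
    assert WordCounter().countWords("the quick brown fox") == 4
```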
What’s wrong with this? Well, no-one asked us to write a method that counts the words, so we're wasting the Customer's time. We might be able to ship real business value to our users sooner if we did something else instead.
Equally bad, we’ve invented new scope: a new requirement on our object’s API, locked in place with a regression test. If this test breaks at some point in the future, how will someone looking at this code cope? A test is failing, but how do they know that it’s only a “scaffolding” test, and should have been deleted long ago?
Working this way means that we often end up with code in which it’s difficult to tell what is necessary — i.e. requested by our Customer — and what was invented purely for the sake of following the TDD mantra.
Sometimes it can be pragmatic to write “scaffolding” tests such as this, perhaps as a discovery exercise to get an idea of how it feels to work with a particular design. And if we do, it’s important to remember to go back and delete them as soon as possible.
Because even if this test never fails, its very existence makes it appear that countWords is a requirement. We’ll need to spend some time checking with someone before we can change this API or delete it.
So start at the outside, by writing tests for things that your code’s consumer (e.g. the end user) actually asked for.
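For the word counter, an outside-in first test would exercise only the behaviour the Customer asked for — the number of occurrences of each word — and say nothing about how the string gets split. A sketch, assuming a hypothetical `wordOccurrences` function (the name is invented for illustration):

```python
def wordOccurrences(text):
    # Count the occurrences of each whitespace-separated word.
    # How the splitting happens stays an internal detail, free to change.
    counts = {}
    for word in text.split():
        counts[word] = counts.get(word, 0) + 1
    return counts

def test_counts_occurrences_of_each_word():
    # This asserts on the requested behaviour, not on any helper method.
    assert wordOccurrences("the cat sat on the mat") == {
        "the": 2, "cat": 1, "sat": 1, "on": 1, "mat": 1,
    }
```

If the splitting strategy later changes — punctuation handling, say — this test keeps passing as long as the counted result is still right, which is exactly the property a scaffolding test lacks.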
Next time: taking a slice that’s too big.
Things to try
Look around your codebase for tests like this. Delete as many as you can.
Next time you have to TDD something that doesn’t exist yet, start with a thin slice of genuine business value, and do as little design as you can. No, less than that!