
Chapter 2: Introducing Test-Driven Development

Test-driven development (or TDD) is a programming methodology that’s been around for a little over a decade now. The fundamental principle is to define what software needs to do by writing tests that confirm it does it. By writing tests for each unit of functionality before implementing it, we can guarantee that we’re always adding correct functionality to a system. Over time, these collected tests evolve from helping build the code to providing a sort of immune system, ensuring new changes never break what’s already present. Tests serving this purpose are generally referred to as “regression tests”, and can ultimately make the difference between proven code that you trust and legacy code that you dread. You can usually tell whether you’ve written enough tests by how much confidence you have in your codebase.
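To make that cycle concrete, here is a minimal sketch in Python using the standard unittest module (the slugify function and its behaviour are invented purely for illustration, not taken from this book’s project). The test is written first and fails because the function doesn’t exist yet; the simplest implementation that satisfies it is then added.

import unittest

# Step 1: the test is written first. Running it at this point fails ("red"),
# because slugify() doesn't exist yet.
class TestSlugify(unittest.TestCase):
    def test_replaces_spaces_with_hyphens(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

# Step 2: just enough code is written to make the test pass ("green").
def slugify(text):
    return text.strip().lower().replace(" ", "-")

if __name__ == "__main__":
    unittest.main()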

Realistically, though, very few people develop every line of code by writing tests first. While doing so can provide an incredible degree of robustness, it can also feel artificial and fiddly. Some developers look at this purist approach and decide that unit tests just aren’t for them. However, there’s a vast benefit to be gained by getting even a few tests into your code, without committing up-front to the full-on TDD approach.

So, if you’re not testing every line, when should you write tests? I’m going to go with the slightly over-general answer of “whenever you can”. Commonly, when you’re writing code to perform a specific task, the requirement is “It should give output A with input B”. This almost seems too simple to need testing, but it’s the ideal time to add a quick test. When the next requirement comes along that “input C must produce output D”, add a test for that too. By the time someone gets around to coding input Y and output Z, you can be pretty sure that no-one in the development team will remember that A and B even existed, and by the time someone realises that something’s wrong, it will be impossible to work out what the original outputs should have been.
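As a sketch of what those quick tests might look like (again in Python with unittest; the shipping_cost function, its rates and its values are hypothetical, standing in for “input B gives output A” and “input C gives output D”):

import unittest

def shipping_cost(weight_kg):
    # Hypothetical function under test: flat rate up to 1 kg, then a per-kilo charge.
    if weight_kg <= 1:
        return 2.50
    return 2.50 + (weight_kg - 1) * 1.00

class TestShippingCost(unittest.TestCase):
    def test_light_parcel_gets_flat_rate(self):
        # The original requirement: this input must give this output.
        self.assertEqual(shipping_cost(0.5), 2.50)

    def test_heavy_parcel_adds_per_kilo_charge(self):
        # The later requirement gets its own test, so the earlier behaviour
        # is still checked long after everyone has forgotten about it.
        self.assertEqual(shipping_cost(3), 4.50)

if __name__ == "__main__":
    unittest.main()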

Another good time to add tests is when you’re fixing faults. This is particularly the case when you’re contributing to open source code. The creator’s use cases and yours will never be quite identical, and you’ll find faults that they won’t. If you can provide both a fix and tests to verify it, you can guarantee that the behaviour you rely on will be preserved, and the project owner will have much greater confidence in accepting your fix. Conversely, as the creator of a project, if you’ve got tests covering your own use cases, you can accept contributed fixes with confidence that existing behaviour still holds, while contributors add tests to verify their fixes. The growing collection of tests provides both a guarantee to current users and a contract to future ones.
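A regression test submitted alongside a fix might look like the following sketch (again Python/unittest; parse_duration and the edge case it mishandled are hypothetical):

import unittest

def parse_duration(value):
    # Hypothetical library function: converts "MM:SS" strings to seconds.
    # The contributed fix added the leading-minus handling, an edge case the
    # original author never needed.
    sign = -1 if value.startswith("-") else 1
    minutes, seconds = value.lstrip("-").split(":")
    return sign * (int(minutes) * 60 + int(seconds))

class TestParseDurationRegression(unittest.TestCase):
    def test_negative_durations_are_supported(self):
        # Submitted with the fix: it documents the behaviour the contributor
        # relies on, and stops a later change from silently breaking it again.
        self.assertEqual(parse_duration("-1:30"), -90)

if __name__ == "__main__":
    unittest.main()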

Once you’ve added unit testing to your toolkit, it can prove useful in all sorts of ways. Problems that initially look too big to understand, let alone solve, can be broken down into small parts, each solved and tested in turn. Code that’s just too slow, or memory-hungry, or dependent on outdated functionality, can be refactored with confidence, avoiding the day when it either cripples your project or company, or needs a complete and expensive rewrite.

One caveat, though, is that tests are only useful if they’re run regularly. As we work through this project, we’ll look at ways to ensure that this happens without even having to think about it. To start with, however, we’ll run our tests by hand, to cut down our setup time and gain familiarity with what’s actually happening.
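If the tests are written with Python’s built-in unittest module, as in the earlier sketches, running them by hand needs nothing more than the standard runner from the command line (the module name here is just a placeholder):

python -m unittest test_shipping
python -m unittest discover

Both commands print a summary of passes and failures, so a broken test is obvious as soon as it appears.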