tdd-deciphered.com

created by @parsingphase


Chapter 18: Verifying code coverage

Having written our code “test-first”, we can expect that it’s well tested and thereby protected against changes, but can we quantify this? Fortunately we can, as PHPUnit comes with some extremely useful tools to do just that, able to quantify both the coverage and the “change protection” that we’ve achieved.

To use them, all we have to do is add another parameter to our call to phpunit:

wechsler@tahr:~/host/repos/enigma-simulator$ ./vendor/bin/phpunit --color --coverage-html=docs/coverage tests

You may notice that this runs a little more slowly than it did before, as PHPUnit now analyses exactly what code is being exercised by the tests. It may also complain that you haven’t got xdebug installed; if so, install it via your package manager (e.g. sudo apt-get install php5-xdebug on Ubuntu) or consult http://xdebug.org/docs/install. Once the command has run successfully, we can open the file docs/coverage/index.html to see how our coverage looks.

Run against the code as seen at the end of the last chapter, the report looks much like this:

Code Coverage Summary

At first glance, the amount of red (well, pink) might seem a little alarming. We’ve written a project by TDD and it’s reporting 0% coverage on all classes; that just seems wrong.

The reason is that PHPUnit’s definition of coverage is quite strict, for good reason. A single untested line in a method means that method gets classed as “not (fully) covered”, and a single method without full coverage has the same effect on its class. So, for now, let’s focus on the fact that most of our files are 90%-plus covered, and see how we can raise that, and thereby the methods and classes, to 100% coverage.
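To see why this strictness matters, here’s a minimal sketch (a hypothetical Counter class, not from the enigma-simulator codebase): if a test suite only ever exercises the happy path, the guard clause’s throw line stays red, which marks the whole method, and with it the class, as not fully covered.

```php
<?php
// Hypothetical class illustrating partial coverage: a happy-path-only
// test never triggers the throw line, so that one line stays untested,
// the method is reported as not fully covered, and so is the class.
class Counter
{
    private $count = 0;

    public function increment($by)
    {
        if (!is_int($by) || $by < 1) {
            throw new \InvalidArgumentException('Step must be a positive integer');
        }
        $this->count += $by; // a happy-path-only test covers just these lines
        return $this->count;
    }
}

$counter = new Counter();
echo $counter->increment(2), "\n"; // exercises the valid path only
```

Only once a test also passes an invalid step, triggering the exception, does every line of increment() turn green.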

The first thing we need to know is: which lines aren’t tested yet? Clicking on the filename “Rotor.php” (our most-tested file), we see a breakdown:

Rotor.php Coverage Report

This looks more promising, with only one method reporting an issue. If we click / scroll down the file to that method, we see:

Untested line

Somehow we’re not testing that we throw an exception if we try to encipher a non-character. But we thought that \Phase\Enigma\RotorTest::testEncryptBadCharacter() did that:

/**
 * Try and encrypt something invalid
 * @dataProvider badCharacterIDataProvider
 * @expectedException \InvalidArgumentException
 */
public function testEncryptBadCharacter($characters)
{
    $rotor = new Rotor();
    $coreMapping = [
        'A'=>'E','B'=>'K','C'=>'M','D'=>'F','E'=>'L','F'=>'G','G'=>'D',
        'H'=>'Q','I'=>'V','J'=>'Z','K'=>'N','L'=>'T','M'=>'O','N'=>'W',
        'O'=>'Y','P'=>'H','Q'=>'X','R'=>'U','S'=>'S','T'=>'P','U'=>'A',
        'V'=>'I','W'=>'B','X'=>'R','Y'=>'C','Z'=>'J'
    ];
    $offset = 0;
    $rotor->setRingOffset($offset);
    $rotor->setCoreMapping($coreMapping);

    // and this should cause the exception:
    $rotor->getOutputCharacterForInputCharacter($characters);
}

It turns out there’s a small fault in the test. With $offset = 0, the call to $rotor->setRingOffset($offset) is passing an illegal offset, and it’s that call which throws the InvalidArgumentException we were looking for, so we’re getting the right result for the wrong reasons.
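The masking effect can be shown in isolation. This standalone sketch uses a stub class (RotorStub, a stand-in for the real Rotor, with assumed validation rules): because the setup call throws the expected exception type first, control never reaches the line we actually meant to test.

```php
<?php
// Stub (not the real Rotor) showing a test passing "for the wrong
// reasons": the expected InvalidArgumentException comes from the setup
// call, so the enciphering line is never exercised or covered.
class RotorStub
{
    public function setRingOffset($offset)
    {
        if (!is_int($offset) || $offset < 1 || $offset > 26) { // assume offsets run 1..26
            throw new \InvalidArgumentException('Invalid ring offset');
        }
    }

    public function encipher($char)
    {
        if (!preg_match('/^[A-Z]$/', $char)) {
            throw new \InvalidArgumentException('Invalid character');
        }
        return $char; // identity mapping is enough for this sketch
    }
}

$rotor = new RotorStub();
try {
    $rotor->setRingOffset(0); // throws here...
    $rotor->encipher('!');    // ...so this line is never reached
} catch (\InvalidArgumentException $e) {
    echo "caught\n"; // a test expecting this exception type still passes
}
```

A test framework that only checks the exception type cannot distinguish the two sources, which is exactly why the coverage report, not the green bar, exposed the problem.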

If we revise the culpable line to $offset = 1 and re-run our tests, we find not only that all our tests still pass, but also that our Rotor.php coverage report now shows 100%. If we re-load our docs/coverage/index.html file, we now see:

Code Coverage Summary with Rotor at 100%

We’re now told that in Rotor.php, all of our code lines are tested, as are all of the methods and all of the classes (i.e., one) in that file. Not a bad start for a one-character change, although I do have to admit that, however neat a story it makes, this error was not intentional. It does, however, highlight the importance of checking exactly what’s been tested.

I won’t document every change we need to make to get to 100% test coverage; that would probably get dull quite fast. Instead, while we’ve got a suitably mostly-tested codebase, let’s see what else the coverage report is telling us.

The Dashboard link is well worth a look:

Code Coverage Dashboard

Note that there are more charts below this, but these are enough to explain what we’re being told.

The left-hand chart is essentially the coverage report on the index page in chart form, but the right-hand side is telling us something new, giving us a chart of “Cyclomatic Complexity” against code coverage, and a CRAP rating for a couple of classes it rates as “Project Risks”. The deliberately unfortunate backronym of “CRAP” (Change Risk Anti-Pattern) tells us that, mathematically speaking, these classes are inadequately tested for the degree of complexity that they hold.

Roughly speaking, Cyclomatic Complexity represents the number of different possible paths through code; see http://en.wikipedia.org/wiki/Cyclomatic_complexity for details. The CRAP rating combines that with the code coverage to analyse the degree of risk; see http://googletesting.blogspot.co.uk/2011/02/this-code-is-crap.html for the actual formula. We can use this to focus on reducing the risk inherent in our most vulnerable classes, either by reducing cyclomatic complexity, or by improving our tests.
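As a sketch of how those two numbers combine, here is the CRAP formula from the blog post linked above, CRAP(m) = comp(m)² × (1 − cov(m)/100)³ + comp(m), expressed as a small PHP function:

```php
<?php
// Sketch of the CRAP metric: complexity squared, scaled by the cube of
// the untested fraction, plus the complexity itself as a base cost.
// CRAP(m) = comp(m)^2 * (1 - cov(m)/100)^3 + comp(m)
function crap($complexity, $coveragePercent)
{
    return pow($complexity, 2) * pow(1 - $coveragePercent / 100, 3) + $complexity;
}

echo crap(5, 0), "\n";   // untested method of complexity 5: 30
echo crap(5, 100), "\n"; // same method fully covered: 5
```

Note how full coverage collapses the penalty term to zero, leaving only the raw complexity: you can lower a CRAP score either by testing more or by simplifying the code, which matches the two options described above.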

It turns out that the complexity of our code is actually pretty reasonable for what we’re trying to achieve, so the best approach is to add tests. As noted above, I’m not going to document every step of that process, but through the magic of television, we can fast-forward to the point where that process is complete. Check out tag Chapter18-1-FullyTested, and you can build a more pleasing coverage report:

Code Coverage Summary showing 100% coverage

(Note: I’ve done some light refactoring here, suitably protected by regression tests, hence the appearance of RotaryAlphaNumericTrait).