Sunday, 14 November 2010

Testing and Tooling, I’ve got it covered

We’re currently going through the process of trying to get unit tests around a large legacy code base at work, most of which has been written in a rather test-proof manner. This means that we need to introduce seams as we work through different sections, and this sort of code change can obviously create new and subtle bugs if any small errors are made in the refactoring. As we’re adding the tests as we go, human error can mean that a method gets modified without the safety harness of tests to watch it. Having seen one such bug get onto our live web servers and cause a few issues when used in anger, it occurred to me that we ought to get some code coverage running so that we can tell exactly what code is tested, and see what we’ve missed.
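To give a flavour of what I mean by a seam (our code base is C#, so treat this as a Java-flavoured sketch with entirely made-up names): the trick is to turn a dependency that the class news up internally into one that is passed in, so a test can substitute a fake.

```java
// Hypothetical names throughout: the point is the shape of the seam,
// not the domain. The collaborator arrives through an interface rather
// than being constructed inside the class.
interface RateGateway {
    double currentRate(String code);
}

class PriceCalculator {
    private final RateGateway gateway;

    // The seam: injected, not newed up, so a test can pass in a
    // canned implementation instead of hitting the real service.
    PriceCalculator(RateGateway gateway) {
        this.gateway = gateway;
    }

    double price(String code, double units) {
        return gateway.currentRate(code) * units;
    }
}

// In a test, a fake gateway stands in for the real dependency:
class FixedRateGateway implements RateGateway {
    public double currentRate(String code) {
        return 2.5;
    }
}
```

With the seam in place, `new PriceCalculator(new FixedRateGateway())` can be exercised entirely in memory, which is exactly the harness we want around each section before we touch it.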

I’ve dabbled with NCover in the past, but the open source version is getting a bit long in the tooth now, and I had no budget to jump straight in with the commercial version, so I found PartCover and, after a bit of tweaking, found that I could get useful metrics out of it, even if the interface currently feels rather clunky. At a high level, code coverage figures are a nice thing to know, but they tell you very little about what is being tested, especially at this stage of a project where we are at fractions of a percent. However, the ability to dig into different assemblies and classes, and even to dive into the code in individual methods to see which lines have been executed, suddenly gives us great insight into what is going on. We can now see if there are any sections of the class that we are refactoring that aren’t covered by tests, and make sure that we rectify the situation before hitting the real world, like that implementation of IEquatable<T> that had tests for both equal and unequal objects, but not null. Oops.
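The shape of that bug is worth sketching. The real code was a C# IEquatable<T> implementation, but the Java equivalent below (hypothetical class, same hole) shows it: the equal and unequal branches were tested, and only line coverage revealed that nothing ever exercised the null path.

```java
// Hypothetical sketch: an equality method whose equal/unequal branches
// were covered by tests, but whose null handling never ran until a
// caller passed null in production.
class Money {
    private final long pence;

    Money(long pence) {
        this.pence = pence;
    }

    @Override
    public boolean equals(Object other) {
        // The guard that matters: without it, comparing against null
        // blows up instead of quietly returning false.
        if (!(other instanceof Money)) {
            return false;
        }
        return ((Money) other).pence == this.pence;
    }

    @Override
    public int hashCode() {
        return Long.hashCode(pence);
    }
}
```

A coverage report highlights the guard as unexecuted, which is precisely the nudge to add the missing null test before the real world finds it for you.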

Such tooling is less important in the modules that we’ve been able to code using TDD, but even there it is handy to see if we’ve been a bit overzealous with writing production code. We’re not ready to push code coverage into our CI system yet, as the high-level statistics would give management a bigger stick to beat us with over the low number of tests that we currently have. However, as we get more and more tests in there, we shall end up with it in the automated build for the extra reassurance that it gives us.


On a slight tangent, I had a lovely experience whilst working on one of the new classes that I have been able to develop with TDD recently. I’d written a bunch of tests and the code to make them pass, and then went through the cycle again. All of a sudden my test runner lit up like a Christmas tree. I’d made some simple schoolboy error, like an off-by-one, or using < instead of >, or some such faux pas. Before I’d had a chance to think about anything else, the system was already telling me I’d screwed up. It’s a nice warm and fuzzy feeling to know you have that kind of security in place :)
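Not the actual code, of course, but the flavour of slip is something like this (again a Java-flavoured, made-up example): sum the first n values, where writing the loop bound as <= instead of < walks one element too far, and a test that pins the expected total goes red the instant you run it.

```java
// Hypothetical illustration of the classic off-by-one: the loop bound
// is the easy thing to get wrong, and an existing test catches it
// immediately.
static int sumFirst(int[] values, int n) {
    int total = 0;
    for (int i = 0; i < n; i++) {  // the schoolboy error: i <= n
        total += values[i];
    }
    return total;
}
```

With the bound written as `i <= n`, `sumFirst(new int[]{1, 2, 3, 4}, 3)` would pick up the fourth element (or throw at the end of the array), and the runner lights up before you’ve had a chance to think about anything else.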
