This was a question that came up in our regular technical team meeting down the pub the other day. I have been pretty evangelical about unit tests ever since the penny dropped about a year ago, but it has to be said that I am struggling to stick to the high moral standards set by the 100% code-coverage purists.
You all know my grubby little excuses. Unless you live in some coding Utopia, there is time pressure, there are last-minute changes that need to be deployed now, and there are flashes of code-god inspiration that demand attention right now because they are just too exciting to wait. I admit it: my flesh is weak.
But should I be so hard on myself? Is 100% really worth aiming for? At what point does the law of diminishing returns kick in? Nobody I have spoken to seems to have any evidence (as opposed to anecdotal opinion) one way or the other. It boils down to unsupported assertions like "you should at least aim for > 95% code coverage".
I wonder if there is a more efficient way to balance the time spent on unit tests against the value those tests return. Fighting entropy requires a lot of work, so perhaps we can reduce the work by breaking the problem into smaller pieces and creating a hierarchy of code coverage.
- Public Interfaces
These should have unit tests created for each possible interaction with the interface members. There should be an iron rule that public interfaces have 100% code coverage.
- Black Box Code
The plumbing code that supports the public interface functionality is covered by whatever tests TDD (test-driven development) produces along the way, but new tests should only be written for bug fixes and for obvious new TDD-driven development.
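To make the "iron rule" concrete, here is a minimal sketch of what a test-per-interaction suite for a public interface might look like. The `RateLimiter` class and its tests are entirely hypothetical, invented for illustration: every public member (`allow`, `reset`) gets a test for each distinct interaction, while the private plumbing (`_count`) is only exercised indirectly.

```python
import unittest

class RateLimiter:
    """Hypothetical public interface: allows up to `limit` calls."""

    def __init__(self, limit):
        self.limit = limit
        self._count = 0  # black-box plumbing state, not tested directly

    def allow(self):
        """Public: returns True until the limit is reached."""
        if self._count < self.limit:
            self._count += 1
            return True
        return False

    def reset(self):
        """Public: clears the counter so calls are allowed again."""
        self._count = 0

class RateLimiterPublicInterfaceTests(unittest.TestCase):
    """One test per possible interaction with the public members."""

    def test_allow_under_limit(self):
        self.assertTrue(RateLimiter(limit=1).allow())

    def test_allow_over_limit(self):
        limiter = RateLimiter(limit=1)
        limiter.allow()
        self.assertFalse(limiter.allow())

    def test_reset_restores_allowance(self):
        limiter = RateLimiter(limit=1)
        limiter.allow()
        limiter.reset()
        self.assertTrue(limiter.allow())
```

Run with `python -m unittest` in the usual way; the point is that the test list is driven by the interface's contract, not by the lines of plumbing behind it.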
In a perfect world I accept that 100% is the ideal, but in a world where I need to make money for my company, 100% code coverage 100% of the time is surely too much.
100% coverage for selected, critical code and significantly less elsewhere might just be the way to go.
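That split can even be enforced by tooling. As a sketch, assuming Python's coverage.py and a hypothetical `myapp/api` package holding the public interfaces, a `.coveragerc` like this fails the build only when the critical code dips below 100%:

```ini
# Hypothetical .coveragerc: measure only the public-interface package
# and demand full coverage there; the rest of the codebase is not
# held to the same standard.
[run]
source = myapp/api

[report]
fail_under = 100
```

A second, laxer configuration (or none at all) can then be used for the black-box plumbing.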