Count the defects daily – the ones that are part of the project workload. The number goes up and down during the cycle – why, and what can you learn? Compare a couple of projects and you have an image that you can use to visualize what usually happens in your projects. I usually have the image of a camel…
Previously the Testing Planet looked at "A Little Track History That Goes a Long Way" – how to do simple tracking of test progress during test execution. I keep coming back to this technique for daily execution status reporting, for both test cases and defects.
One source of information is the number of active defects – how many do we currently have in play to resolve? The total number of defects found is probably not significant, as we can always find one more, and one more, and the count will always be rising. I see no correlation between the number of test cases executed and defects found on a day-to-day basis.
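To make the idea of "active defects" concrete, here is a minimal sketch of how a daily count could be derived from a defect log. The data and field layout are assumptions for illustration – in practice the opened/resolved dates would come from your defect tracker.

```python
from datetime import date, timedelta

# Hypothetical defect records: (opened, resolved); resolved=None means still open.
defects = [
    (date(2024, 1, 1), date(2024, 1, 4)),
    (date(2024, 1, 2), None),
    (date(2024, 1, 3), date(2024, 1, 5)),
]

def active_defects_on(day, defects):
    """Count defects opened on or before `day` and not yet resolved by `day`."""
    return sum(
        1
        for opened, resolved in defects
        if opened <= day and (resolved is None or resolved > day)
    )

# Daily trend over a six-day window – this is the curve worth watching.
start = date(2024, 1, 1)
trend = [active_defects_on(start + timedelta(days=i), defects) for i in range(6)]
print(trend)  # [1, 2, 3, 2, 1, 1]
```

Even this tiny data set produces a hump – the count rises while testing outpaces fixing, then falls as resolutions catch up.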
In the article in Testing Planet 5 we looked at the trend of active defects. It was based on defect data from eight programs of about seven projects each in the Telco domain. The graph in context looked like this (these are not the exact numbers of the projects, but sufficiently close).
Recently I started tracking the active defects in a Life Science project – a seemingly completely different context. I was curious to see what the trends would be. There had been a hump in the active defects, looking very much like the curve for "project C" as I drew it long ago. What happened five days before launch was very interesting – the number of active defects went up, just like the trend I was usually seeing!
I still wonder why there is such an oscillation – is it a matter of too little throughput in fixing defects, or of testing having uncovered, after a few days, the first level of what the system is capable of? Again, the actual number is irrelevant. The trend curve – and how you use it to guide your testing – is what matters.
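Since the actual numbers are irrelevant and only the shape of the curve matters, trend curves from very different contexts can be compared by scaling each series to its own peak. This is a sketch under assumed data – the two series below are hypothetical, standing in for a Telco project and a much smaller Life Science project.

```python
def normalize_trend(counts):
    """Scale a daily active-defect series so its peak is 1.0,
    making trend shapes comparable across projects of different sizes."""
    peak = max(counts)
    return [round(c / peak, 2) for c in counts]

telco = [4, 10, 16, 20, 12, 14, 8]   # hypothetical daily active-defect counts
life_science = [1, 3, 5, 4, 5, 2]    # hypothetical, much smaller absolute numbers

print(normalize_trend(telco))         # [0.2, 0.5, 0.8, 1.0, 0.6, 0.7, 0.4]
print(normalize_trend(life_science))  # [0.2, 0.6, 1.0, 0.8, 1.0, 0.4]
```

On the normalized scale, both series show the same rise, hump, and late wobble, even though one project logs four times the defects of the other.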
Comparing Camels and Dromedaries
Like Matt Heusser and Troy Magennis in "Yesterday's Weather", I could see that a little statistics goes a long way. Unless something completely catastrophic is happening, you usually have about the same weather as yesterday. In context, of course – there will be seasons and situations, like a "Christmas sale" or "exploratory testing", where the camel will look completely different.
There are some dangers in comparing two different contexts. Not all camels are alike – dromedaries have one hump, while (Bactrian) camels have two [reference 3]. In both the Telco and Life Science contexts there is a management assumption that test case progress is important to the earned value of the testing activity.
Usually such KPIs and metrics are second-order measurements, used to determine how things are going to go (according to plan). It is a common misunderstanding that we can know the future of the test execution. At worst it is the same weather as yesterday; at best it is better. Either way, explicit knowledge of past progress can help us explain the trends in the current context.
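The "same weather as yesterday" idea can be written down as the simplest possible forecast: predict today's active-defect count to be yesterday's observed count, and check how far off that is. This is a minimal sketch on made-up numbers, not a forecasting method from the article.

```python
def persistence_forecast(series):
    """Forecast each day's count as the previous day's observed count
    ('yesterday's weather' – the naive persistence model)."""
    return series[:-1]  # prediction for day i+1 is the value observed on day i

observed = [5, 6, 6, 8, 7, 7]  # hypothetical daily active-defect counts
predicted = persistence_forecast(observed)
errors = [abs(p - a) for p, a in zip(predicted, observed[1:])]
print(predicted)  # [5, 6, 6, 8, 7]
print(errors)     # [1, 0, 2, 1, 0]
```

On a stable series the errors stay small, which is exactly why yesterday's numbers are a usable baseline – until a "Christmas sale" season breaks the pattern.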
About the Author
Jesper Lindholt Ottosen promotes Rapid Software Testing and Context-Driven Testing in Denmark. He is a test manager at NNIT, and was previously a senior test manager at CSC and other companies. You can follow him on Twitter at @jlottosen and on his blog, Complexity is a Matter of Perspective: jlottosen.wordpress.com.