The Metrics Said We Reported The Same Defects Repeatedly
I was always mining project data for new insights into our performance, and during a particularly critical period I found one. The test teams were struggling: they kept finding the same defects over and over again (simply called “duplicates”). We concluded that these were duplicates only because the development team had already found the problems first but had not yet been able to fix them. If development had been able to fix them quickly (or to avoid them in the first place), they would never have shown up as duplicates. It turned out that about 75% of all defects found were ultimately found by development first and only then by the test teams.
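The analysis above can be sketched in a few lines of code. This is a hypothetical illustration, not the author’s actual tooling: the `Defect` record, its field names, and the toy data are all assumptions. The idea is that each test-reported duplicate points at a root-cause defect, and if that root cause was opened by development, then development “found it first.”

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Defect:
    id: int
    found_by: str                       # "dev" or "test" (illustrative labels)
    duplicate_of: Optional[int] = None  # id of the root-cause defect, if any


def dev_found_first_rate(defects):
    """Fraction of test-reported defects whose root cause was found by dev first."""
    by_id = {d.id: d for d in defects}
    test_reports = [d for d in defects if d.found_by == "test"]
    if not test_reports:
        return 0.0
    dev_first = [
        d for d in test_reports
        if d.duplicate_of is not None and by_id[d.duplicate_of].found_by == "dev"
    ]
    return len(dev_first) / len(test_reports)


# Toy data: 4 test reports, 3 of which duplicate dev-found defects -> 0.75
defects = [
    Defect(1, "dev"), Defect(2, "dev"), Defect(3, "dev"),
    Defect(10, "test", duplicate_of=1),
    Defect(11, "test", duplicate_of=2),
    Defect(12, "test", duplicate_of=3),
    Defect(13, "test"),
]
print(dev_found_first_rate(defects))  # 0.75
```

Run against a real defect database, a rate like this is what turned “duplicates” from a nuisance metric into a signal about development’s own defect-detection efficiency.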
So finding “duplicates” was not just the test teams rediscovering issues development had already found; it was also a good indicator of development’s efficiency. If development had done nothing more than fix the problems it found (or not insert them in the first place), that would have driven down the “duplicate rate,” and the overall defect count, faster than any other activity.
The Real Issue Was Too Many Unfixed Defects
Development management, however, wanted to focus on stopping the test teams from reporting these existing defects. That was fundamentally hard: it required a tester to have the insight of a software engineer and trace each issue back to its common root cause, the original “duplicate” defect, which testers could rarely do. Unsurprisingly, this effort to prevent duplicate reports never succeeded. The initiative was a typical red herring I have seen in project environments: the messenger (or the metric) gets shot because it is not perfect and because it reports the brutal facts, which are hard to hear when a project is not going well.
The “fact” that development found 75% of its own defects, and hence that this was probably where additional effort belonged, was hidden for years behind the metric called “duplicates.” While it took us a while to “suddenly” understand what this metric was really telling us, the insight ultimately changed the priorities of where teams needed to spend the most time to improve the quality of the product.
The Solution Was To Prevent Existing Defects Or Fix Them Faster
In this case, the root cause of the problem was low quality (too many defects, typically thousands) combined with insufficient performance (not fixing the defects that were found fast enough). Instead of addressing that, the organization focused on punishing the testing and quality organization for reporting “the same issue” over and over again. The test organization was doing a great job but was being punished for it by misguided management actions.
See for comparison what happens when quality improves but the test organization doesn’t.
Do you have any project metrics that are probably not telling you what they are supposed to be telling you?