We had a class of defects known as “duplicates.” We had teams worldwide testing the product, and they often reported the same issue multiple times or, just as often, reported defects that traced back to a known problem. At one point in the project’s lifecycle, duplicate reports approached 70% of all defects reported by testing. This meant that if we had 100 defects reported in a week, then once we sorted them all out, only about 30 would be truly new defects that were not known before.
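The arithmetic above can be sketched as a tiny triage function (the function name and numbers are illustrative only; the 70% figure is the one from the text, not a general rule):

```python
def triage(total_reported, duplicate_rate):
    """Split a week's reported defects into duplicates and truly new
    defects, given an observed duplicate rate."""
    duplicates = round(total_reported * duplicate_rate)
    return duplicates, total_reported - duplicates

# At a 70% duplicate rate, 100 weekly reports yield only ~30 new defects.
dups, new = triage(100, 0.70)
print(dups, new)  # 70 30
```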
First off, I have to mention that I was the first to compute and use metrics on defect duplicates in this organization. So, while I was personally proud of this insight, it came to be used as a political hot potato. You see, if the test teams were reporting 70% duplicates, then clearly — some argued — the test teams were doing a poor job of finding new defects, and they were wasting development’s time by “bombarding” them with all these “useless” reports. Development’s senior management regularly chastised testing for this poor performance and claimed it was hampering development’s productivity, as they had to sort through all this “noise.”
Here, we had a Pyrrhic victory of sorts in this otherwise successful project. The most senior managers of development got our most senior managers of project and product management excited about this duplicate reporting problem. We did things like send out people worldwide to work with the testers and help them sort out what was a new defect versus what was a duplicated defect.
After weeks of tracking this effort, there was a week when the duplicate reports dropped noticeably. Everyone declared victory. The development managers were able to say “we told you so” and the product managers were able to say “see, we responded and we fixed it.” Everyone then joined hands and sang Kumbaya to even more senior management who had been watching this “problem.” Then we forgot about duplicate defects.
Compare with How To Tell When Something Is Really A Problem
So, what really happened?
When we looked back at the moving averages for duplicates (which smooth out normal variation and make trends more visible), there was no real change in the overall trend of duplicates reported. Once the project was completed and shipped, we computed the overall rates and trends over time of duplicate defects; they were essentially the same as in previous projects.
All that noise and expense and finger-pointing made no difference whatsoever. None. It did show up as a “major accomplishment” on PowerPoint slides in a later presentation. I didn’t say anything. In my own performance appraisal I did show that there was no change, but I did not make a big issue of it, as the project was a huge success in getting the product delivered on time with good quality.
Our biggest problem and its solution were really no problem at all, but they certainly kept more senior management busy for months.
Have you had any “huge problems” that occupied a lot of your time but were never really significant problems at all?