Improving quality can have a surprisingly stressful impact on your project's testing organization. A good set of quality metrics can help keep your improvements on track as your organization adjusts to the new reality.
I was the new Director of Software Development, and we were sitting in a staff meeting discussing defects in software delivered to the customer. Everyone was hammering on the Test Director for letting defects out to our customer. I listened for a few minutes and then jumped in. I explained that if we had so many defects that significant ones could get by testing, then the problem was with development, not with testing. As I said this, the Test Director looked at me with a mixture of thanks and disbelief. Here I was, saying the problem was mine. Everyone looked at me in shock. Then, without missing more than one beat, they all immediately started to beat up on development. They just needed someone to blame. They didn't really care who it was.
Over the next few calendar quarters, we got to the point of consistent, on-time releases of software with very good quality. Defect repair quality was great. While we did have defects, when we fixed a defect, it stayed fixed. In most of the other organizations I've worked in, fixing a defect often generated more defects. In this development organization, a defect fix was nearly bullet-proof.
Early on, I had agreed with the Test Director on a set of quality metrics to track our progress (this I learned to do from the Worst Test Organization). I had convinced him that I didn't want to track and report the quality rate myself. I wanted him to do it so that I had an independent department reporting on how well development was doing. We then agreed on a core set of metrics, though the Test Director added a dozen more of his own (which is a story for another time). It went downhill from there.
The Test Director apparently came to the conclusion that my acceptance of responsibility for the software quality was, in part, a grab for resources. Here we had another instance where the part of the organization that had most of the visible problems got most of the attention and resources (see Boring Projects). While that was bad enough in his eyes, the fact that our software quality improved made matters worse.
The test organization's response to the increase in quality was to increase the number of defects reported. The policy that went out to the testers was to minimize time spent on test analysis before reporting a defect. In fact, the apparent statement was, "Just report it if it might be a defect. Let development figure out whether it really is one or not." This resulted in an increase in questionable defect reports.
My development managers, on their own initiative, decided to just immediately reject any report that didn’t have sufficient evidence of a defect. So for a little while we had the test team entering lots of defects (it takes time to fill them out) and development rapidly rejecting them (they were quick and easy to reject). The Test Director cornered me in the hall one day and proposed a new rule. The new rule was that we needed to have a two person agreement, one from test and one from development, to reject a defect. I agreed, but with the stipulation that it also took a two person agreement to enter a new defect, one from test and one from development. The Test Director stalked off and we never did implement those new rules.
This particular episode culminated in what I now recognize to be a classic result for a stressed test organization. The Test Director put out a report headlined "The Worst Quality Release Ever" and sent it to everyone on the senior staff. As I read it, I was surprised I didn't have to unscrew myself from the ceiling. It was so bizarre that it was hard to even get mad. However, I had seen in a previous job how such a report could totally confuse everyone. Plus, the sad fact was that claiming the contrary, that quality had increased significantly, would just not be credible in this company.
Luckily, we had the quality metrics that we had agreed upon. The test organization, probably without fully understanding them, had done a good job at reporting them on a regular basis. This gave me an objective position supplied by the test organization on which to gently question the test report.
It was like playing twenty questions. On the same e-mail thread as the original test report, I asked the Test Director, innocently enough, a series of basic questions about the numbers he had used. You see, his evidence of poor quality was that, not finding many issues, he had asked his team to test anything and everything they could. In fact, one of his test managers told me that they no longer spent much time testing our changes because they knew they would not find many issues. So instead they tested parts of the software that had not been changed. It turned out that they refound many existing issues, but it took time to determine that these new reports were duplicates of existing issues. The customers had prioritized the issues they wanted fixed and had not included these in the current release.
For more on the challenges of reporting defects, see An Easy Fix To Some Of Our Biggest Problems.
The test team also found many obscure issues, never reported by the customer, that had been in the software for years. These were good to find, but they were again existing issues that needed to go onto the "to be fixed" list to be prioritized by the customer. Once these two groups of defects were set aside, the actual defect rate on the changes we had made was very low; in fact, the lowest we had ever had by any measure. This was not the worst release ever; it was our best-quality release ever. That was, in fact, the statement issued by our COO at the next quarterly company-wide meeting.
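The triage described above amounts to a simple partition of the incoming reports. As a minimal sketch (not our actual tooling; the field names and sample data here are hypothetical), separating duplicates and latent pre-existing issues from defects genuinely introduced by the release's changes might look like this:

```python
# Hypothetical defect reports: which are duplicates of known issues,
# and which fall in code actually changed in this release.
defect_reports = [
    {"id": 101, "duplicate_of_existing": True,  "in_changed_code": False},
    {"id": 102, "duplicate_of_existing": False, "in_changed_code": False},
    {"id": 103, "duplicate_of_existing": False, "in_changed_code": True},
    {"id": 104, "duplicate_of_existing": True,  "in_changed_code": False},
]

def triage(reports):
    """Partition reports into refound duplicates, latent pre-existing
    issues, and true defects in the current release's changes."""
    duplicates = [r for r in reports if r["duplicate_of_existing"]]
    latent = [r for r in reports
              if not r["duplicate_of_existing"] and not r["in_changed_code"]]
    new_defects = [r for r in reports
                   if not r["duplicate_of_existing"] and r["in_changed_code"]]
    return duplicates, latent, new_defects

dups, latent, new = triage(defect_reports)
# Only `new` counts against the quality of this release's changes;
# `dups` are refound known issues, and `latent` go onto the customer's
# "to be fixed" list for prioritization.
```

The design point is that the headline defect count mixes all three buckets, while the release's quality is measured only by the last one.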
We had made great progress on software quality, and we had some great independent measures that made it clear how much software quality had improved. However, the improvement in quality caused great stress in the Test Department, whose size, power, and relevance to the company were based upon having a steady flow of software defects.
How does your test or quality organization respond when your project’s quality improves?