It was a high-level meeting. Lots of VPs. It included our Corporate Chief Quality Officer, a big man with a booming voice who intimidated the other VPs.
The CQO wanted to know how we were doing at driving the defects down in our current project. This project was our premier product. It was going to replace our current product that had changed the industry. It was important.
I was one of the – many – managers on the project. I was also the expert on defects. I even had an award to prove it!
The CQO asked how long it was taking to review each defect. In this review we looked again at all new incoming defects and double-checked their priority. We had hundreds of open defects, with hundreds more being reported each week. My team's job was to run a process that ensured all the defects were prioritized appropriately. The Quality department also saw it as their job to do the same, with a parallel priority double-check meeting.
I said something that would cause an uproar. It was also something that all the VPs would immediately deny. I said we should average about one minute to review each defect.
Boy was I in trouble. The VPs, in front of the CQO, wanted to ensure that the picture painted sounded complete and thorough. No way it could take only about one minute, they said. They'd sat in on many reviews, and it was taking much longer than that.
We were both correct. The current process was in fact taking longer than one minute per defect. We had hundreds of defects being reported each week. But by the time we reviewed all the defects, the vast majority had already been fixed by the development teams. The quality review team, thinking they had to understand a lot about hundreds of defects, would often defer making a decision and queue up questions to the testers, the developers, the account team, the requirements folks, etc. This whole process regularly exceeded the time it took to just fix the defect.
So why did I say one minute? Because when I personally ran an equivalent process, we averaged one minute per defect (see Meeting Madness for details). We took a quick look at each defect and decided, based upon a set of agreed rules, whether the priority was probably correct or needed to be bumped up or bumped down (see also business rules for more on this approach). If we weren't sure, we just left it and moved on. This allowed us to catch all the obviously misprioritized defects (for Agile folks, think speed planning poker).
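The quick-triage loop above can be sketched in code. This is a minimal illustration, not the team's actual process: the `Defect` fields and the two rules (crashes bump a defect up, cosmetic issues bump it down) are hypothetical stand-ins for whatever agreed rules a team writes down. The key property is the fall-through: if no rule clearly fires, the priority is left alone and the review moves on.

```python
from dataclasses import dataclass

@dataclass
class Defect:
    id: int
    priority: int           # 1 = highest, 4 = lowest
    crashes: bool = False   # hypothetical signal from the defect report
    cosmetic: bool = False  # hypothetical signal from the defect report

def triage(defect: Defect) -> int:
    """Apply quick agreed rules; when in doubt, leave the priority as-is."""
    if defect.crashes and defect.priority > 1:
        return defect.priority - 1   # obvious under-prioritization: bump up
    if defect.cosmetic and defect.priority < 4:
        return defect.priority + 1   # obvious over-prioritization: bump down
    return defect.priority           # not sure -> don't debate, move on

incoming = [
    Defect(101, priority=2, crashes=True),
    Defect(102, priority=3),
    Defect(103, priority=2, cosmetic=True),
]
for d in incoming:
    d.priority = triage(d)
```

Because each defect either matches an obvious rule or is skipped, the per-defect cost stays roughly constant, which is what makes a one-minute average possible.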
Why did this work? First off, I was the defect expert. I know, because the Director of the Project Management Office gave me an award that said I knew more than anyone else about defects in the company! We had hundreds of defects weekly, so how could I know more than the 2,000 people in the development organization? I didn't, of course. What I knew, for example, was that the difference in time to fix priority two and priority three defects was indistinguishable. Similarly, taking five to ten minutes to discuss whether a defect should be priority three or four was meaningless, and deferring a decision with questions back to multiple people would often take days for an answer. The data showed that one minute per defect made a difference and got the job done. Beyond that, extra discussion added no value, because the delay accumulated with each defect discussed and made the process ineffectively slow (i.e., the priority was changed after the defect was already closed).
You should understand, the reason we were reviewing defect priorities was the belief that some might be misprioritized, and that this could result in a severe problem not being fixed, which would prevent us from shipping our product on time. There was never a documented case of a known defect that was worked on at a low priority and so blocked a shipment. There were, however, many cases of previously unknown defects that blocked shipments, and even "fixed" defects that broke things that then blocked shipments. The quality assurance process did not address these issues. The QA review simply duplicated the existing defect prioritization process.
Ultimately, we shipped this product on time and with good quality, the first time in the collective memory of the organization that a brand-new product in this class had shipped on time. Our biggest European customer spent considerable time pointing out that we had finally done this. In addition, our team received an award for setting a new standard for quality. Our customers considered this one of our best-quality products, yet the QA reports gave no hint of this before the product shipped.
Our quality assurance efforts should avoid duplicating existing processes as a method of trying to improve quality. They should strive to objectively highlight the quality of both the process and the product. Our QA reports should ideally provide indicators that are predictive of what the quality will be when the product reaches our customers.
Are your quality efforts adding value or are they just duplicating processes that already exist?