Things must not have been going well on our project. The VP of Quality and the VP of Test both showed up at one of our project meetings and wanted to know if all of our defects had been prioritized correctly.
I told them that the review process was ongoing, as usual.
They liked neither our process nor my answer.
The problem, as they saw it, was that I was taking too much time and not immediately reprioritizing the defects.
They were sure of this because they were not seeing the spike in high-priority defects that normally appeared when the quality review team reviewed and raised defect priorities. Those periodic spikes were taken as indicators that Quality was doing its job: identifying quality problems. The Test VP liked them too, because higher-priority defects showed that his team was finding important issues that needed fixing.
They were mad.
The Quality Team Reviewed Our Defect Prioritization
They told me my job was to immediately implement the new priorities.
I responded that on the previous product (which shipped on time with good quality) we had always reviewed the recommended changes first. This they clearly took as rank insubordination (see initiative or insubordination for more on this notion).
Humorously, seeing that I was going to lose this one, I messaged my team during the conversation and told them to just apply all the new priorities without a review. Simultaneously, the VP of Quality was messaging my Product VP. Just as my staff confirmed that all the priorities were updated, I got a message from my VP telling me to “just do it.”
I informed the VPs that all defects were now prioritized per their recommendations.
We had a real-time defect tracking system, so they immediately pulled up the trends to see the impact.
There was no change in the priority defect trend.
The expected spike did not stand out in the normal ups and downs of the overall trend.
So what happened?
The Quality Reviews Had No Measurable Impact
What happened was that the normal process disposed of the defects, at their original priority, faster than the review team could reprioritize them. This highlighted a simple truth: the recommended priority changes made no difference to how fast defects were disposed of, or to which ones got worked on. The only thing the review did was introduce a periodic spike in the trend, driven by the batches of new priorities the review team would set.
One review team member admitted in confidence that the team had agreed to always increase some defect priorities at each daily review. This was to show management that they were doing the job they were given.
I had at one time, again humorously, suggested that we should just write a script that randomly upgraded priorities, which would save hundreds of staff hours. I don’t think anyone was amused.
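For what it’s worth, a minimal sketch of what such a script might have looked like is below. It is purely illustrative: the defect structure, the 1 (highest) to 4 (lowest) priority scale, and the “bump a handful per day” rule are assumptions, not how our tracking system or the review team actually worked.

```python
import random

def random_daily_upgrade(defects, count=5):
    """Hypothetical stand-in for the daily review: pick a few open
    defects at random and bump each one step more urgent."""
    candidates = [d for d in defects if d["priority"] > 1]
    for defect in random.sample(candidates, min(count, len(candidates))):
        defect["priority"] -= 1  # lower number = more urgent
    return defects

# Example: a tiny fake backlog; after the "review", some items are more urgent.
backlog = [
    {"id": 101, "priority": 3},
    {"id": 102, "priority": 2},
    {"id": 103, "priority": 4},
]
random_daily_upgrade(backlog, count=2)
print(backlog)
```

The point of the joke, of course, was that the output would have been indistinguishable from the real review as far as the trend charts were concerned.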
The Quality Team Just Believed That Quality Was Not Good
Why didn’t the VPs of Quality and Test see the data as we did? This is always hard to analyze, but in a nutshell, those organizations believed that the development and product teams were “tweaking” the reported defects to make quality look better than what Quality and Test had found. They saw the periodic defect spikes after a review as evidence of this.
The fact that the data never showed any increase in how fast, or how many, defects were fixed, even when their priorities were updated quickly enough to cause a noticeable spike, was never addressed. As long as the review caused periodic spikes in the trend, Quality felt it showed they were contributing.
Quality Had No Effective Measure Of Quality
By the way, this product shipped on time and even got recognition from our field test team (another test organization – see the best test team) for the best-quality product that team had ever seen.
Quality is not always self-evident, especially on large projects or products. We need clear and objective methods of measuring quality. Too often, what passes for a quality check is driven more by emotion and habit than by an objective measure. Knowing our project or product quality requires hard data, not just beliefs.
Are the quality measures on your project objective and do they truly measure quality?