Your test organization will often provide the first indicator that you’ve improved how you do business. How this shows up, however, might be rather different from what you would expect. Test organizations do not always handle it well when quality improves.
I read an article called When Do We Stop a Test? I was excited to see it until I realized that it was referring to individual tests, not to testing in general. I was hoping it talked about when quality was so good that testing became unnecessary.
Testing, I believe, is best thought of as an indicator of quality. Too often, however, it is used as a way to implement quality. Test the product until it is ready to ship to the customer or, in the Internet age, until it is ready to go live on the web. In my experience, testing should tell us how well we did in development. It should not be the method by which we finish development.
So what is it like in an organization that does not need to test? I don’t know. We never quite got that far. But we did increase quality so much that it stressed out many test organizations.
This is the first of three articles about test organizations that had to deal with the reality of significantly improving product quality. This first article covers the best and worst test organizations I’ve encountered. The second article deals with a test organization that could have been great. Article three explores the busiest test organization I’ve encountered.
The Best Test Organization I’ve Ever Encountered
In The Leap To The Exceptional I illustrate how an organization often will not need to make radical changes to move up to the next level of performance. In this example, the software was delivered on time (the first on-time delivery in the organization’s memory) and the quality was good. Six months into daily use of the software, the customers had not reported any issues. We had evidence of very good quality.
If the quality was so good, what was the test team doing? In this case, when the test team found very few issues, they decided to spend more time testing the performance of the system. This was a real-time, satellite-based system with a high volume of incoming data. This national defense system had to take that data, make sense of it, and report any items of interest in real time. The test team was able to spend considerable time focusing on how fast the system performed. In the past they had spent all their time identifying software defects and testing the fixes for those defects.
This was my first introduction to a test organization, and at the time I didn’t realize they were doing something unique. What was unique was their positive reaction to the change from working with buggy software to working with good-quality software. Not all test organizations would be so flexible.
The Worst Test Organization I’ve Ever Encountered
In Thriving on Defective Software we have an example of an organization that improved its software quality and processes, yet the organization was a more interesting place to work when we had lots of software defects. This was because the organization as a whole was centered on finding and fixing defects. Without significant defects, the majority of the organization seemed to think they didn’t have anything important to do.
The test department in this organization was particularly intriguing. Their general approach to testing was to simply sit down with the software, start casually using it, and report any problems that they found. Yes, they did have some general test plans, but in the past the software had been so bad that it didn’t require much more than basic use to find and report a significant number of issues. Life was good for testing.
The world changed for testing, however, over a period of two years. Fewer and fewer defects were found with each quarterly release.
So what did the test team do? Well, I found myself characterizing testing as similar to the 5 o’clock news: we get an hour of news whether there is any news or not. In this case, we often got a thick printed test report even when the test team could not find any errors.
So what was in this document? In one typical case, it had pages and pages of screenshots purporting to show why the user interface was poor. “The customer will not like the software,” the report claimed. The user interface had to be changed, it said, because it was difficult to use, and that constituted a defect. Now, we were making few changes to the user experience. The user interface had been the way it was for years. Arguably, it was in need of an overhaul. However, the customer told us what they wanted changed and what they wanted fixed with each release. They had made no request for a significant change to the user interface.
What we needed to know was whether the test team found any problems with the twenty changes we made at the customer’s request. Since testing could find no issues (partly because the quality was good and partly because the testing was not methodical), we would expect the test report to be small. In fact, it grew with each quarterly release. As the quality of the release went up, the size of the test report documenting the issues also went up!
This should not have been hard to deal with. Since no remaining defects were being found, the releases should have been approved quickly. No chance. You see, senior management relied on the test team to indicate the quality of the software. I found myself, as the development manager, sitting on one side of the general manager (GM), with the test manager sitting on the other side. I would say, “Testing found no defects; we are ready to ship.” The test manager would say, “Look at this test report; we have a lot of things we need to fix!”
The GM would sit quietly between us and just let us argue. His approach was to allow us to argue until we came to an amicable agreement. Once we were in agreement, he would approve the software for release. As long as the test manager waved around a thick test report, the software would not go out the door.
The humorous catch was that we could not make any changes to the software that were not approved by the customer. This was based on a five-nation agreement. So the changes outlined in the test report could not have been made even if we had wanted to. This kind of thing would go on for a week, and then the test organization would finally say it was OK to release the software. On one occasion the GM caught me in the hall and said, “Glad you fixed those issues so quickly.” It would have been funny if I had not been so frustrated.
This particular event taught me the need to define and regularly publish quality metrics (e.g., defect trend reports). From then on, I would ask test organizations to help define and publish quality metrics, if they did not already. This reduced the ability of anyone to hold up a software release by waving around a document that management would never read.
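The article doesn’t specify what those metrics looked like, but a defect trend report can be as simple as defects found per release with the release-over-release change. Below is a minimal, hypothetical sketch; the release names and counts are illustrative, not data from the story:

```python
# Hypothetical defect trend report: defects found per quarterly release,
# plus the change from the previous release. Illustrative data only.

def defect_trend(counts_by_release):
    """Return (release, count, change_from_previous) tuples.

    counts_by_release: list of (release_name, defect_count) pairs,
    in chronological order. The first release has no previous value,
    so its change is None.
    """
    trend = []
    previous = None
    for release, count in counts_by_release:
        change = None if previous is None else count - previous
        trend.append((release, count, change))
        previous = count
    return trend

if __name__ == "__main__":
    releases = [("Q1", 42), ("Q2", 30), ("Q3", 12), ("Q4", 5)]
    for release, count, change in defect_trend(releases):
        delta = "" if change is None else f" ({change:+d})"
        print(f"{release}: {count} defects found{delta}")
```

A steadily falling trend, published every quarter, makes the quality story visible to management without anyone needing to read (or wave around) a thick report.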
Our test organization might feel the stress when we increase quality. By the same token, if we believe we’ve increased quality but our test organization seems to be doing business as usual, take a good look at what they are doing. Either they’ve adapted and are still providing value, they are reporting questionable issues to try to stay relevant, or they’ve found no reason to change, which would suggest our improvements in quality might not be as significant as we believe.
Next, Almost a Great Test Organization.