This post is adapted from an e-mail I sent to a CEO who was looking for project management tools to help deliver his products when promised. It summarizes how, in the past, his organization had been able to know precisely where its product readiness stood. The example is for software, but trending defects works well for managing projects of all types.
Do you really want to know how your product is doing?
Look at your defect arrival rate curve. Not the curve or chart of the “not yet fixed” software defects. The count of defects that have not yet been fixed is just the current difference between the defects reported and the defects repaired. That count is fairly irrelevant if the rate at which new defects are coming in is high and shows no sign of slowing down.
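To make the distinction concrete: the backlog is just the running difference between arrivals and repairs, so it can look small and stable even while the arrival rate is still climbing. A minimal sketch with entirely hypothetical monthly numbers:

```python
# Hypothetical monthly counts. The "not yet fixed" backlog is the
# cumulative reported minus cumulative repaired, and here it stays
# small and steady even though arrivals are still rising fast.
reported = [40, 80, 150, 220, 260]   # new defects reported each month
repaired = [30, 70, 140, 210, 250]   # defects repaired each month

open_count = []
backlog = 0
for new, fixed in zip(reported, repaired):
    backlog += new - fixed
    open_count.append(backlog)

print(open_count)  # [10, 20, 30, 40, 50]: a calm-looking backlog
# ...while the arrival rate has more than sextupled (40 -> 260),
# which is the number that actually tells you where you stand.
```

The backlog alone would suggest things are under control; the arrival trend says otherwise.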
Software defect reports rise, peak and then fall in a generally bell-shaped curve:
Look at and count the defects on a monthly basis or as a 4-week rolling average (i.e., how many defects were reported in January, February, etc.). Not on an hourly, daily or weekly basis. Look at six months of data at least. The trend will be obvious. If not, the data being shown is suspect (e.g., “filtered” too much).
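A minimal sketch of the monthly count, assuming you can export defect-report dates from your bug tracker (the dates below are hypothetical):

```python
from collections import Counter
from datetime import date

# Hypothetical defect-report dates; in practice, export these from
# your bug tracker with no filtering.
report_dates = [
    date(2023, 1, 5), date(2023, 1, 20), date(2023, 2, 3),
    date(2023, 2, 14), date(2023, 2, 28), date(2023, 3, 9),
]

# Count defects per calendar month -- the granularity the post recommends.
monthly = Counter(d.strftime("%Y-%m") for d in report_dates)
print(sorted(monthly.items()))
# [('2023-01', 2), ('2023-02', 3), ('2023-03', 1)]
```

The same idea works for a 4-week rolling average: bucket by week instead of month, then average each trailing window of four weekly counts.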
Software defect reports will RISE, peak and fall:
Your managers may try to “filter” the defects to show only the “critical” or “customer” or “will stop shipment” defects. In managing past products, I called this managing by the tip of the iceberg; in fact, this tip was well correlated with the hundreds and thousands of other defects that were also being worked. So filtering can be useful. Its primary use, however, was seemingly to give the more senior managers something to focus on and feel they were helping while the hundreds of engineers and line managers fixed all the other issues. A significant jump in defect reports is a good indicator that testing has in fact started.
Software defect reports will rise, PEAK and fall:
Getting past the peak is a major milestone. Your leaders will show you the trend of the “important” software defects. Ask also to see the trend for all the software defects. All of them. This will help train your gut and keep them honest. Historically, the defect arrival peak occurred just as everyone was testing (e.g., once system and operational testing started). So a key step is to get everyone testing. Yes, you will get many defects reported multiple times by multiple people. This is not a real problem, though many will tell you it is.
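Once you have unfiltered monthly totals, spotting the peak and checking whether you are actually past it can be as simple as the sketch below (the counts are hypothetical):

```python
# Hypothetical monthly totals for ALL defects, unfiltered.
monthly_totals = [30, 90, 210, 340, 310, 250]

# The peak is the month with the highest arrival count.
peak_index = max(range(len(monthly_totals)), key=monthly_totals.__getitem__)

# "Past the peak" only counts if every month since the peak has
# been flat or falling -- one down month is not a trend.
past_peak = peak_index < len(monthly_totals) - 1 and all(
    monthly_totals[i] >= monthly_totals[i + 1]
    for i in range(peak_index, len(monthly_totals) - 1)
)
print(peak_index, past_peak)  # 3 True
```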
Software defect reports will rise, peak and FALL:
If the monthly trend is not falling, you are not yet halfway through. The defect arrival rate does not disappear overnight. It takes months after the peak to get to a low and reasonable arrival rate. In my experience, the significant portion of the defect arrival curve, from beginning to end, consistently lasted six to eight months (depending upon the product line) before acceptance by the customer. There is always a low level of defects coming in on any product. For these products, a “low level” was generally under 100 defects arriving in a month (all defects, not the “important” or “critical” or otherwise filtered subset).
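A sketch of a "winding down" check using the post's rough threshold of under 100 unfiltered defects per month; the function name, window size, and sample data are my own illustrative choices:

```python
LOW_LEVEL = 100  # per the post: roughly under 100 unfiltered defects/month


def is_winding_down(monthly_totals, months=3):
    """True when the trailing `months` counts are strictly falling
    and the latest month is below the low-level threshold."""
    tail = monthly_totals[-months:]
    falling = all(a > b for a, b in zip(tail, tail[1:]))
    return falling and tail[-1] < LOW_LEVEL


print(is_winding_down([340, 310, 250, 160, 90]))   # True
print(is_winding_down([340, 310, 250, 160, 140]))  # False: still above 100
```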
The defect report trend was the single best indicator of where a product was in its lifecycle. I could glance at any product’s trend and immediately know how far along they were. It was the best predictor in the last half of the product development lifecycle for when the product would be accepted by the customer.
Every company and product line is different. However, tracking and understanding how your defects are detected and repaired over time is a tremendous project management tool. Simply count the number of defects that get submitted each period. Don’t try to “clean up” or “filter” the raw count in any significant way. Track this from the first defect reported through the introduction of the product and even until the product’s retirement. Once you have a few of these curves developed for recent past products, it will help you to better understand where your current product really is in its development cycle.
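One naive way to use a past product's curve, sketched under the strong assumption that the current product will generate a similar total volume of defects (all numbers below are hypothetical):

```python
# A completed reference product's full monthly arrival curve, and a
# current product partway through its own curve. Both are made up.
past_curve = [20, 60, 150, 300, 260, 180, 90, 40]   # months 1-8, full lifecycle
current = [25, 70, 160, 310, 250]                    # product in flight

# Cumulative total for the reference product.
past_cum = []
total = 0
for c in past_curve:
    total += c
    past_cum.append(total)

# Naive progress estimate: fraction of the reference product's total
# defect volume seen so far. Only meaningful if the products are of
# comparable size and tested with comparable rigor.
frac_done = sum(current) / past_cum[-1]
print(f"~{frac_done:.0%} of the reference product's total defects seen")
```

In practice you would overlay several past curves and compare shapes, not just totals, but even this crude ratio helps calibrate the gut feel the post describes.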