There is great comfort in trying to know every detail so that we as managers can make sure everything goes right. This desire to work out all the details, using project management tools, often results in detail overload that obscures the most important aspects of the project. There is, however, a simpler approach that often works surprisingly well.
Many things in life follow the pattern that if A happens then B follows. This is how much of project management, and management in general, is structured. A tool like Microsoft Project uses a technique of linking tasks: it shows which tasks precede a given task and which tasks follow it. This works well in many cases, especially for something fairly simple or something that, for safety or security reasons, can be done in only one way. In research and development, or in software and product development, things often don't happen in any predetermined, detailed order.
For example, in our experience, the completion times for individual chunks of software functionality were pretty much unknown. We could not predict with any certainty which features would be finished first or by when. We could, however, predict with a high degree of certainty when 95% of the features would be done (see Avoid This Trade-Off Trap). The same was true of fixing defects. We could not predict very well when a particular defect would be fixed, but we could predict with a high degree of certainty when just about all (e.g., around 95%) of the defects would be fixed. (For more, see Knowing Your Project Management Average.)
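To make this concrete, here is a minimal sketch of the idea, with entirely made-up numbers: even when no single feature's finish date is predictable, the distribution of past completion times tells you when "just about all" of them will be done.

```python
import statistics

def percentile(values, p):
    """Return the p-th percentile (0-100) using linear interpolation."""
    ordered = sorted(values)
    k = (len(ordered) - 1) * p / 100
    lo, hi = int(k), min(int(k) + 1, len(ordered) - 1)
    return ordered[lo] + (ordered[hi] - ordered[lo]) * (k - lo)

# Hypothetical per-feature completion times (in weeks) from past work.
# Any one feature is hard to predict, but the distribution is stable.
past_durations = [3, 5, 4, 8, 2, 6, 7, 5, 9, 4, 6, 5]

mean = statistics.mean(past_durations)   # typical feature
p95 = percentile(past_durations, 95)     # when ~95% of features are done
```

The individual values jump around, but the mean and the 95th percentile stay relatively steady from project to project, which is what makes them useful for commitments.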
This notion strikes many people as counterintuitive. Surely, given a new product feature or an existing product defect, we should be able to understand it so well that we can predict with great certainty when it will be finished. But if we have 250 new features or 2,000 reported defects (both real examples), it is a challenge to know enough about all of them to say when each would individually be completed. The classic solution to this kind of complexity is to distribute the task of estimating each and every defect or feature and then roll up the results. The notion is that attaching additional information to each item will improve the accuracy of the estimate.
Instead, we’ve found that rolling up this kind of detail often just increased the uncertainty. It added so much additional information of variable accuracy that it became harder to see (and verify) how long something was going to take. Plus, it took almost forever to do, and the earlier results tended to become obsolete with time. For some items, we could have completed the task in the time it took to make the formal estimate. We’ve found in many circumstances that when we knew only that there were “50 new features,” and knew how long it had taken on average to develop features in the past, we could compute a simple estimate without knowing anything about the individual features. This turned out to be a better, more accurate estimate than any detailed approach we had tried.
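The "50 new features" estimate above really is this simple. A sketch, again with hypothetical numbers (your own historical throughput is the only real input):

```python
# Hypothetical history: features the team completed in each of the
# last six months. No per-feature detail is needed, just the totals.
completed_per_month = [4, 6, 5, 3, 7, 5]

features_remaining = 50

# Average throughput from demonstrated performance, not from plans.
avg_throughput = sum(completed_per_month) / len(completed_per_month)

# The whole estimate: remaining work divided by demonstrated rate.
months_estimate = features_remaining / avg_throughput
```

Note that this takes minutes to compute and update, while the detailed bottom-up estimate the text describes could take longer than doing the work itself.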
One example to help illustrate this comes from the casino industry. The casino has no notion of what a person will do when they walk in the door. It doesn’t know how much money the person has, what games they will play, or how much they will win or lose. However, the casino knows that for every person who walks in the door, it will make X dollars. How does it know this? It simply measures the number of people coming through the door and how much money the casino makes (or loses) each day, and it does this over time to learn the trend and variation.
The nice thing about trying this technique is that you can continue doing whatever you are doing now and just measure the results. Estimate how things should go based upon the current trends. Check the results to see how well this compares to your current detailed planning approach. If there is no difference, or the overall measurement approach is not as good, then keep doing what you are doing. But if you are using the latest and greatest approach in your industry and you are still completing your projects late, then consider taking a step back and measuring the overall process.
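The comparison step can be made concrete. A sketch, with invented numbers, of checking both approaches against actuals: the trend-based estimate here is just the historical average applied to every item, and the winner is whichever tracks reality more closely.

```python
# Hypothetical record for five past deliverables (all in weeks):
# the detailed bottom-up plan, a trend-based estimate (the historical
# average, applied uniformly), and what actually happened.
detailed_plan = [8, 10, 6, 12, 9]
trend_based = [11, 11, 11, 11, 11]
actual = [12, 10, 11, 13, 10]

def mean_abs_error(estimates, actuals):
    """Average absolute gap between estimates and actual durations."""
    return sum(abs(e - a) for e, a in zip(estimates, actuals)) / len(actuals)

plan_error = mean_abs_error(detailed_plan, actual)
trend_error = mean_abs_error(trend_based, actual)
# Keep whichever approach shows the smaller error on your own data.
```

In this made-up data the detailed plans are optimistic and the flat average wins, which mirrors the article's experience; your data may say otherwise, which is exactly why you measure first.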
I’ve given examples in previous articles, but I’ll summarize some of them here:
1. As the new Director of Development, I was immediately challenged by an account manager to deliver software that had been promised. Not yet knowing anything about how we were doing, as I was too new, I asked my development managers for any information on how long the last ten or so feature developments took. I was looking for the actual time it took, not the plan. The raw data showed it took us an average of nine months to deliver a new software feature. As we worked with the account manager, I discovered that the feature in question had been promised in “a few months.” In fact, I found out that just about any feature we estimated came out to be a “few months.” So my response to how long it would take to deliver the feature in question was “third quarter of next year!” Boy, did that cause the account manager to explode. Even though this guy had lived through all those features that averaged nine months, he was still mentally attuned to the notion that it should take only “a few months.” The simple data we collected was a better estimate than what we were actually using. Here, the big problem was a cultural mindset of “it takes a few months.” Yet the final solution was not a detailed estimate, which had not succeeded in the past, but simple averages that showed how long things had actually taken. We aligned our product deliveries to quarterly releases and quickly got into a drumbeat of on-time delivery with increasing functionality that exceeded our customers’ expectations (which were admittedly pretty low!).
2. As a new product integration manager, I discovered that our products were always three to four months late. Always. I looked at just about every recently completed product I could find records on to see how products actually unfolded. One of the things I observed at the time was that the final stage of software debugging took an average of six months. This was clearly visible in the defect data entered for all these products: a simple rise, peak, and fall pattern of defect arrivals, and a consistent six-month curve. Yet we would be within three months of the product deadline when the arrival-rate curve had just started to go up, indicating six more months of debugging. I found myself often telling managers, who would insist that we would ship by next week, that defect arrival rates just don’t disappear. They need to rise, peak, and then go down, and even the “going down” portion was on the order of three months. So if we had not peaked yet, I knew we were still at least three months from having a product ready to ship. We could predict this with great certainty, based upon previous projects, even though we knew little about the existing defects or what defects we would see in the future. (For more on this, see Defect Reports Are Your Best Friend.)
3. The last example talked about what defects looked like at the end of the product cycle. In contrast, my research of past products showed that the overall delivery process took 18 months for a new product launch. Again, I looked at every previous product I could get my hands on to see how long things were really taking. It was no surprise to see that we generally got started about 15 months before we needed the product delivered to the customer. It was fairly unbelievable to folks that when they got a product approved, they had already built a three- to four-month slip into the schedule. The simple measurement showed that products were not being kicked off on time. Once we kicked off products on time (i.e., 18 months before we needed them), we consistently hit our milestones and deadlines, all with good quality.
Many projects can be managed effectively by simply working out all the details. If this works for you, keep doing it. However, if you find that projects are consistently not delivering when you need them, or their quality is simply not good, consider trying something other than just gathering more details. Instead, step back and look at the overall trends of your projects, and ensure your current project planning lines up with your demonstrated project delivery performance. Once you’ve gotten your promises to match your capabilities – by consistently delivering on time with good quality – then you are in a position to further improve the process by looking at the individual details.