Objective project management tools are something we all look for. However, too often the push to become fully objective costs us insight while we are still figuring out how to be analytical. Here are a few “gut” insights that have helped us avoid this problem.
I am a great advocate of being objective when managing people, projects or organizations. However, my central theme is always that “you are your best project management tool.” This is a reminder that we humans, as tool users, need to always keep control of the tool.
In “7 Steps To Better Decisions” (InformationWeek, May 3, 2010), Gary Smith says of analytic methodologies (emphasis added):
“Decisions made based solely on intuition, gut feelings, and years of experience, while valuable, tend to be less effective than scientific methods.”
I love analytics; I was doing it for many years before the word “analytics” came into popularity. Yet I’ve seen many methods (not just analytics) – perfectly good ideas – that just didn’t seem to work well in practice. Gary goes on to give seven steps for successfully implementing analytics:
1. Define the problem
2. Identify relevant factors
3. Focus on data collection and preparation
4. Model the solution
5. Report the results
6. Implement the decision
7. Follow up
This is where, while I still agree with Gary, I start to get that uncomfortable feeling. This list reminds me of very prescriptive approaches, such as Six Sigma, where the notion is that good decisions will simply pop out at the end of the process. Our experience is that it just doesn’t happen that way.
I’ve described this challenge as the “diet dilemma,” where we get so caught up in following the methodology that we lose track of what we are trying to achieve. In the diet dilemma, there are dozens if not hundreds of diet plans and approaches available, but the bottom line is that we need to take in fewer calories than we burn. If we do that, the net effect is that we will lose weight. No, it is not easy for many people, but we know it works if we do it right. We understand, at a “gut” level, the underlying approach.
In using analytics (or even simple averages, which I advocate as a place to start), what we should be doing is “training our gut.” Gary doesn’t put much stock in the “gut.” In Project Management or Death by Detail, I relate one example of how folks who had lived through project after project still could not observe, nor admit to, the patterns that led to their late project deliveries. That same organization had implemented a detailed analysis process for new customer requirements, intended to produce rigorous, accurate estimates of project cost and schedule. One of the biggest companies in management consulting had helped them implement the approach. They achieved neither better estimates nor on-time delivery.
What finally broke the pattern of late projects was falling back on simple averages and actually using them. Here we noticed that it was typically taking nine months to get new features to customers – in contrast to the shorter durations the development department’s analytic approach produced. The key here was that we used simple averages to help train everyone to know how long projects had actually taken in the past. People did not always like the estimates, but they understood them. We used our analytics to fully understand our experience (i.e., to “train our gut”), NOT to substitute for our judgment.
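The simple-averages approach above can be sketched in a few lines. The durations below are hypothetical, invented for illustration; the point is only that the average of actual past deliveries becomes the baseline estimate for the next project:

```python
# Sketch: estimate the next project's duration from the simple average of
# past actuals. The durations are hypothetical, in months from start to
# customer delivery.
past_durations = [8, 10, 9, 11, 7, 9]

average = sum(past_durations) / len(past_durations)
print(f"Average past duration: {average:.1f} months")  # the baseline estimate
```

The virtue of this estimate is exactly that anyone can see where it came from: it is simply what happened last time, and the time before that.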
What this suggests is that if we don’t understand the results – because they are too analytic or too complex – then the probability of the results being useful drops sharply. It still takes people who understand the solution to implement it. So to Gary’s list above, I would add “4.5 Understand the solution – make sure we truly believe it is a solution that could work.”
We had a Six Sigma project intended to tell us when our complex electronics product was ready to ship to the customer. It analyzed existing defects in products about to ship, correlated them with returns of past products, and attempted to predict how many product returns we would see if we released the product now. It had correlations and all sorts of impressive charts. It was an interesting idea, but it was never used, because no one could understand it or the numbers it produced. Yet it sure sounded like it should have told us what we wanted to know.
What worked instead was again something much simpler. We looked at previous products and noted the typical number of incoming defects when customers accepted them. It turned out that when total reported defects dropped below about 100 a month (all error types from worldwide testing, many of them duplicate reports of the same issue), the product generally got accepted by the customer. This understandable average was a clearer indicator than the more intricate, mathematically intensive Six Sigma analysis. It also made plain what we needed to achieve before we could expect the customer to accept the product (and before we would want to allow a customer to accept it).
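A readiness rule like this is trivial to encode, which is part of its appeal. The sketch below assumes a list of monthly incoming-defect counts and the roughly-100-per-month threshold described above; the function name and the sample counts are my own invention:

```python
# Sketch: a ship-readiness check based on monthly incoming defect counts.
# The ~100/month threshold comes from observing past customer acceptances;
# the sample data below is hypothetical.
READY_THRESHOLD = 100  # defect reports per month, all types, worldwide

def ready_to_ship(monthly_defect_counts, threshold=READY_THRESHOLD):
    """True when the most recent month's incoming defects fall below threshold."""
    return bool(monthly_defect_counts) and monthly_defect_counts[-1] < threshold

print(ready_to_ship([340, 210, 150, 95]))  # True: latest month is under 100
print(ready_to_ship([340, 210, 150]))      # False: still 150 reports/month
```

Everyone on the team can check the rule against their own experience, which is precisely what made it trusted.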
Another example was the use of Monte Carlo simulations to estimate schedules. A great idea, but in a Fortune 50 company I worked for, it was clear that no one in management understood the methodology or how it worked. The pressure for the schedule marketing wanted, rather than a realistic one, worked its way into the simulation inputs. Every product whose schedule was estimated with this new and improved Monte Carlo method was late to market, averaging the same three- to four-month slips as previous products. Our final solution, which worked, was simply to assign the average product schedule actually observed in the recent past to the current products (i.e., we used an average schedule). We then delivered on time, with good quality, and just about all follow-on products also delivered on time (in equal part because quality improved once the initial products were no longer developed on compressed schedules).
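The failure mode described here – optimistic inputs producing an optimistic Monte Carlo output – can be seen in a minimal sketch. All task ranges and past durations below are hypothetical; the point is that a perfectly sound simulation faithfully reproduces whatever bias is fed into it, while the simple average of actuals has no inputs to pressure:

```python
import random

# Sketch: a minimal Monte Carlo schedule estimate vs. the simple average of
# past actuals. All numbers are hypothetical.
random.seed(0)

# Past actual product schedules (months) -> the "simple average" estimate.
past_actuals = [18, 20, 19, 21]
simple_estimate = sum(past_actuals) / len(past_actuals)

# Monte Carlo: sum per-task triangular draws. If the (low, mode, high)
# inputs are the pressured, optimistic ones, the output is optimistic too.
optimistic_tasks = [(2, 3, 4), (3, 4, 6), (4, 5, 7)]  # months per task

def p80_schedule(tasks, trials=10_000):
    """80th-percentile total duration across simulated project runs."""
    totals = sorted(
        sum(random.triangular(lo, hi, mode) for lo, mode, hi in tasks)
        for _ in range(trials)
    )
    return totals[int(0.8 * trials)]

print(f"Simple average of actuals:            {simple_estimate:.1f} months")
print(f"Monte Carlo P80 (optimistic inputs):  {p80_schedule(optimistic_tasks):.1f} months")
```

With these made-up inputs the simulation lands well below the historical average – not because the math is wrong, but because the inputs were.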
The key to this success was that the analytics were not overly complex and were easy to understand (see also project management tools on steroids). We knew the approach had a good chance of working for the simple reason that we understood where it came from. It also helped that during the project, since we understood how the estimates were made and what they were based upon, it was fairly easy to make decisions when things went off track.
To make any analytic or objective approach work well, we found it best to use the results to train our guts – to help us understand the difference between what was normal and what was not. Analytics were not used to replace our judgment. It took decades before a computer could master chess well enough to beat the best human players; it should not be hard to believe that it will be a very long time before an analytic approach to project or business management replaces human intuition, decisions, and judgment. If we as humans don’t understand the results or the underlying methodology, it will be hard for us to manage effectively using them.
Analytics and objective, data-based management are a great way to improve the management of our projects and organizations. However, use the analysis to train our “gut,” not to replace our judgment. If we don’t understand the results and where they come from, we won’t be as successful at using the analysis in our decision making. Often, during the initial transition to analytic decision making, the results will challenge our conventional judgment. Our experience, now trained by the use of analytics, should tell us whether those results make sense. If not, make sure we truly understand what the analytics are saying, and why, before making a decision based upon them.