Why Project Management Metrics Often Fail and How Yours Can Succeed
“Zuckerberg will need a new hack for that. That’s one of the reasons he’s back to coding every day. Colleagues say he wants to immerse himself in the daily lives of his underlings. If he’s going to keep inventing new ways to keep users coming back ….” Bloomberg Businessweek, May 27th, 2012, “How Zuck Hacked The Valley”
How many times have we measured something, only to have the result not mean much of anything to anyone? From where I’ve been, this happens a lot. Often the problem with these kinds of numbers is that we can’t tell what they mean just by looking at them for the first time. At least not for most of them.
We have to take time to get familiar with what the numbers are showing. If, for example, we want to know how long it takes to drive to work each day, we track the start and stop times of our drive. This tells us something about how long it takes. But because we’ve also actually experienced the activity, the numbers carry a lot more meaning for us. (It is like paying attention: if we don’t pay attention, we don’t continue to learn.)
This “gut feel” (or intuition) is where metrics start to make a difference. If we don’t develop this gut feel, the intellectual interpretations of the charts or trends don’t mean as much. Sure, if it costs us $5 to make a gadget that we can only sell for $4, we can readily see the issue and take action.
The problem with too many metrics/data capture/mining efforts is that they produce ideal views, something we think we want to see, rather than realistic views of something we already know something about. The key to effective metrics is to already know something about what we are measuring and to ask a few questions.
For me, one major breakthrough came when I kept hearing managers say we’d finish the project next week, yet we never seemed able to do it. I asked the question: “How long is it really taking to fix a problem holding up a product launch?” It turned out that the average was significantly longer than the spoken word indicated. We would generally say “we’ll fix those problems that just got reported by the end of the week,” but the real data showed that fixing just about all of the current issues was taking, for example, six weeks.
The amazing part was how far off our experts were. One reason they could get so far off, I concluded, is that before they fixed all the issues in one batch, a whole new batch of issues came in a day or two later; they began to focus on those and say the same thing: “we’ll fix them by the end of the week.” The environment and situation never allowed them to see how long defects were taking to fix from beginning to end. They just worked on what was in front of them.
This resulted in some very talented and capable folks being unable to tell how long something took to fix, yet they would swear on all their experience that it would only take three days. It is not surprising, then, that when we estimated how long a project or task would take, we always significantly underestimated the time and resources needed.
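The “real average” that contradicted everyone’s gut feel is nothing exotic: it is just elapsed time from when a defect was opened to when it was closed, averaged over recently fixed issues. A minimal sketch in Python, with made-up dates standing in for a real defect log:

```python
from datetime import date

# Hypothetical defect log: (opened, closed) dates for recently fixed issues.
# These dates are invented for illustration only.
fixed_defects = [
    (date(2012, 4, 2), date(2012, 5, 14)),
    (date(2012, 4, 9), date(2012, 5, 25)),
    (date(2012, 4, 16), date(2012, 5, 29)),
]

# Elapsed calendar days from open to close for each defect.
durations = [(closed - opened).days for opened, closed in fixed_defects]
avg_days = sum(durations) / len(durations)

print(f"Average time to fix: {avg_days:.1f} days (~{avg_days / 7:.1f} weeks)")
```

The point of computing it at all is that nobody’s memory spans the whole open-to-close interval when new batches of issues keep arriving; the log does.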
Like Facebook’s Mark Zuckerberg, we should immerse ourselves in what is going on, including the data that reflects what is going on. This will help ensure that we truly understand any metrics we use, and that in turn will help propel our projects to finish on time with good quality.
What metrics have been the most useful to your project and why?
More comments from around the web:
James Heires, PMP • Bruce,
Intuition is one thing – factual data and analysis of it is something quite different, as you point out in your piece. In your example, if the PM simply looked at the rate of change of defects on a periodic basis (# of new vs. # of resolved), it wouldn’t take any special understanding to conclude how long it would take to resolve all of the defects.
Bruce Benson • James, Good point. One other factor that was handy was tracking the incoming defect rate over time. It was a simple bell shape (rise, peak, fall over 6-8 months). Regardless of how fast we fixed defects, until that incoming rate fell to a low number, our speed of defect repair and the size of the backlog “today” didn’t tell us very much. The advantage was that once we knew these things, it uncovered the additional characteristic of a “defect arrival curve” that no one had ever noticed before. Until we did this, our focus was always on the “current backlog,” and our frustration was that every time we reduced the backlog, we would get more defects (often from fixing existing issues too quickly and hence causing more!).
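The dynamic described here, where the backlog keeps shrinking and refilling while the real driver is the arrival curve, can be sketched in a few lines. The weekly counts below are invented for illustration; a roughly bell-shaped incoming rate means the backlog only truly drains once arrivals fall off, no matter how steady the fix rate is:

```python
# Hypothetical weekly counts: defects reported (incoming) vs. defects fixed
# (resolved). Incoming follows a rough bell shape, as in the comment above.
incoming = [5, 12, 20, 25, 22, 15, 8, 3]
resolved = [4, 8, 14, 18, 20, 18, 14, 10]

# Running backlog: open defects carried forward each week.
backlog = []
open_defects = 0
for new, fixed in zip(incoming, resolved):
    open_defects += new - fixed
    backlog.append(open_defects)

for week, (new, fixed, open_d) in enumerate(zip(incoming, resolved, backlog), 1):
    print(f"week {week}: +{new} incoming, -{fixed} resolved, backlog={open_d}")
```

Notice that the backlog peaks well after the team starts out-fixing arrivals week to week, which is exactly why staring at the “current backlog” alone was misleading.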
A Linkedin discussion asked “How do you know you’re measuring the right things?” My two cents were:
First, understand the business (or process) and hence what we are measuring. Mark Zuckerberg is reported to be going back to “coding every day” so he is more in touch with his troops and hence can make better decisions and be more innovative. For more on this see: pmtoolsthatwork.com/why-project-management-metrics-often-fail-and-how-yours-can-succeed/
Second, simple often works much better than complex. Great quote from SD Times: “Simple algorithms on big data outperform complex models.” (SD Times, June 2012, Big Data.)
Finally, measure when it actually makes a difference. Often, measurements are an experiment, so do the experiment and see if they help. If not, use another metric, or refine the one we have and try it again. If a metric is not in some way predictive, or otherwise useful for making daily decisions, then it is probably not doing what we need it to do (lots of metrics I’ve seen are like this: essentially just eye-candy and noise).