Why Project Managers Should Not Set Up Metrics To Measure Improvement Efforts
“Establish up front how you’re going to know if the [agile] adoption was successful. It could be productivity of the team, less software bugs or more customer satisfaction ….” Software Development Times, April 2012.
What is wrong with this picture? If the only time we want to know how well we are doing is when we make a change, then how do we know — objectively — we even need to make a change?
If we don’t have some kind of measures in place (productivity, quality, satisfaction, etc.), then that is the first challenge to overcome. Once we know in some objective form how well we are doing, we can make changes and see, again objectively, what our impact is.
Some simple measures I’ve used on my own projects include the average time to fix a reported defect (along with the trend of that average), how long it takes to reach milestones compared with similar past projects, the average time to define a requirement, and the average time to deliver a new project.
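To make the first of those measures concrete, here is a minimal sketch in Python. The defect dates, the record layout, and the two-item trend window are illustrative assumptions, not data or tooling from my actual projects.

```python
# A minimal sketch of the "average time to fix" measure, assuming we can
# export defect records with a reported date and a fixed date.
from datetime import date
from statistics import mean

# Hypothetical defect records: (reported, fixed)
defects = [
    (date(2012, 3, 1), date(2012, 3, 6)),
    (date(2012, 3, 2), date(2012, 3, 12)),
    (date(2012, 3, 10), date(2012, 3, 13)),
    (date(2012, 3, 15), date(2012, 3, 24)),
]

fix_days = [(fixed - reported).days for reported, fixed in defects]
print(f"average time to fix: {mean(fix_days):.1f} days")

# Trend: compare the average of the most recent defects against the
# overall average to see whether fix times are getting better or worse.
window = 2
recent = mean(fix_days[-window:])
print(f"recent average: {recent:.1f} days "
      f"({'improving' if recent < mean(fix_days) else 'worsening'})")
```

Nothing fancier is required to get started; the value comes from collecting the measure continuously, not from the sophistication of the calculation.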
Since I had these kinds of measures in place, when we changed things I could tell whether the change made any difference (see meeting madness for an example). It was amazing how few of our improvement efforts had any impact on any of these measures (see kick the habit for more examples).
One day the development VP announced that his product build team had significantly sped up the build process (a frequent source of complaints). Now that they had sped up their part of the process, the development team (which also worked for him) needed to speed up theirs. Since I tracked this kind of performance (how fast a defect or feature moved through our development process), I was intrigued by the claim and wondered why I had not noticed the improvement.
I dug into the data and quickly discovered a rather humorous pattern. The build team did indeed have a speedier process. The first humorous part was that the development team had also sped up over the same period and in fact now accounted for an even smaller percentage of the total process duration than in the past. So build had improved, but development had improved even more. None of this was visible to the VP (he later came to rely on my data) or to the build team manager.
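For readers curious how such a comparison might be computed, here is a hedged sketch. It assumes we log how many days each defect or feature spends in each phase; the phase names and durations are made up for illustration and are not the original data.

```python
# Sum the days each defect/feature spent per phase, then report each
# phase's share of total cycle time. Comparing these shares across time
# periods shows which team's portion of the process is shrinking.
phases_per_item = [
    {"development": 12, "build": 3, "test": 5},
    {"development": 9,  "build": 2, "test": 6},
    {"development": 10, "build": 2, "test": 4},
]

totals = {}
for item in phases_per_item:
    for phase, days in item.items():
        totals[phase] = totals.get(phase, 0) + days

grand_total = sum(totals.values())
for phase, days in totals.items():
    print(f"{phase}: {days} days ({100 * days / grand_total:.0f}% of cycle time)")
```

Run over two time periods, this kind of breakdown is what revealed that build's share had shrunk, but development's share had shrunk even more.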
I sent the build team manager my findings. In the past, when I had sent people data that cast doubt on their claims, many were quite unhappy. This manager not only appreciated the feedback (and that I had not mentioned it to anyone else) but also asked me to update him regularly on these trends.
The second, equally humorous, part of this insight was that neither team had done anything special. Looking at past projects, it was simply normal for both teams to speed up during this part of a project. It was not that they could not improve; it was that we were claiming “improvement” when it was just the normal pattern of project performance (I’ve often seen folks claim improvements, or problems, based on the random fluctuation of process performance).
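One simple way to guard against mistaking normal fluctuation for real change is a rough control-chart style test: check whether the new value falls outside the historical variation. This is offered as an illustration, not as the exact method I used, and the numbers are made up.

```python
# Compare a "new" measurement against the spread of historical values.
# Only flag it as a real change if it falls outside two standard
# deviations of the historical mean.
from statistics import mean, stdev

historical_build_days = [3.1, 2.8, 3.4, 2.9, 3.2, 3.0, 2.7, 3.3]
new_build_days = 2.6

mu, sigma = mean(historical_build_days), stdev(historical_build_days)
if abs(new_build_days - mu) > 2 * sigma:
    print("outside normal variation; worth a closer look")
else:
    print("within normal variation; probably not a real improvement")
```

With these illustrative numbers, the “faster” build falls inside the historical spread, which is exactly the situation described above: a normal pattern being celebrated as an improvement.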
Having simple measurements in place can make a huge difference in how well we perform and, equally, in how well we improve. If the only time we want to measure our performance objectively is when we are pondering a change or improvement, then we are missing a major and critical part of our everyday project management toolset. If we are doing our job, we should not need to set up or decide on measurements to determine whether our improvement efforts are making a difference. They will already be in place.
What measures do you have in place on your project that let you know how well you are performing and when you’ve improved your performance?
Related discussions from around the web:
James D. • If you had only one IT metric you could report on to your CEO and board, what would it be?
Bruce Benson • Before and after cost trend of IT initiatives. We know (or should know) what things cost before the initiative, and we can show (or should be able to show) what they cost afterward. If the initiative generates revenue, then the cost line naturally reflects the revenue, so we still have only one metric.
I’ve seen so many IT initiatives (and other improvement efforts) that generated a lot of activity (consuming staff time if not also money) that had no measurable impact whatsoever, except in increased PowerPoint slides and new dashboard widgets.
While most of my successful measurements have not been cost-related, measuring and knowing how much gets done and how fast (delivering new features, reducing defects, etc.) readily shows the impact of any initiative.
In a recent article I concluded from my experience that if, with a new initiative, we first have to “figure out how to measure results,” then we are already in big trouble, since we should already have in place the key measures we need to run the business. Any initiative should naturally cause a change in those existing measures. (Ref: pmtoolsthatwork.com/why-project-managers-should-not-set-up-metrics-to-measure-improvement-efforts/ ).