Project management is largely about understanding how the project is progressing. Here, using an example from an individual’s health results, I highlight how we can get it wrong, even with “good” numbers, if we don’t know the trend.
I always received great results from my physical checkups. When I was in the Air Force (exercising regularly and much younger), one examining doctor told me that I had the best results (blood work, pressure, pulse, etc.) that he had ever seen. Throughout the years I got the same type of results. My most recent physical indicators were all “perfect” according to my doctor. Boy, I must be doing everything right, yes?
Let me explain.
I recently purchased a new weight scale for the bathroom. Not only did it give me my weight, but it also knew my height, so it computed and displayed my body mass index along with an evaluation. Normally it would just say “good.” One day it said something like “nearing overweight.” What? I thought. No way!
I had been the skinny guy all my life. No way could I ever be overweight. Yet according to the scale’s built-in standards, I was approaching being overweight for my height.
I then pulled out all my yearly physical results and charted them together. What did I see? Every measure I had (weight, cholesterol, blood pressure, pulse, etc.) was better than normal, but the trend in every measure had been heading in the wrong direction … for years.
See more on Why Getting Objective Data Is Hard
The problem is that just using a single data point as an assessment often doesn’t tell us what we really need to know — until it is too late. Getting a dozen different metrics about different parts of our project has the same problem. Each single measure, isolated from its history, is often a very misleading indicator. Even having an expert measure and report that measurement often doesn’t improve the information value of that data (e.g., my medical exams). We don’t know there is a problem until we approach or pass the danger threshold.
Dig deeper into Why We Don’t Need All Those Experts
What we almost always want to see is not only the current number or indicator but what the trend over time of the indicator looks like. What makes this sometimes difficult is it is not always obvious how to capture the trend. A few examples:
I’ve managed the fixing of defects in products for most of my career. Typically we would just show how many defects we knew about. Only occasionally would an organization show how the backlog of defects changed over time (usually rising, peaking, and then falling). Never, except on my projects, had anyone trended the arrival rate of defects (which also rises, peaks, and falls over the length of the project). The arrival rate of defects was a better indicator of when we would finish a project than the current backlog count (which is just cumulative arrivals minus cumulative disposals at any point in time).
For more details see Defect Reports Are Our Best Friend
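The backlog arithmetic above is simple enough to sketch. Here is a minimal illustration, with invented weekly numbers (not data from any real project), showing how the backlog at any point is just cumulative arrivals minus cumulative disposals, and how the arrival rate itself rises, peaks, and falls:

```python
# Hypothetical weekly defect data for a project (numbers invented for
# illustration): arrivals rise, peak, and fall over the project's life.
arrivals = [2, 5, 9, 14, 12, 8, 5, 3, 1]   # new defects found each week
disposals = [0, 2, 4, 8, 11, 10, 8, 6, 4]  # defects fixed/closed each week

# Backlog at any point is cumulative arrivals minus cumulative disposals.
backlog = []
total_in = total_out = 0
for found, fixed in zip(arrivals, disposals):
    total_in += found
    total_out += fixed
    backlog.append(total_in - total_out)

# The week the arrival rate peaks is a stronger completion signal
# than the backlog count on any single week.
peak_arrival_week = arrivals.index(max(arrivals)) + 1

print("weekly backlog:", backlog)
print("arrival rate peaked in week", peak_arrival_week)
```

Note that the backlog keeps climbing for weeks after the arrival rate has already peaked, which is exactly why the single backlog number, read in isolation, misleads.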
I’ve managed feature lists (requirements, stories, use cases, backlogs, etc.) all my career. I would track the rate at which features were committed (e.g., features committed per week over the entire commitment period). This rate curve would tell me, better than any PowerPoint estimate I’d see, when we would know enough about what we were trying to do to make a final commitment on when we could do it. When committing our features was not going fast enough, management would resort to off-site all-hands planning meetings and the like. I would show, somewhat humorously, how the commitment trend did not change even when we did these “surge” exercises to speed things up (but only if we allowed people to remain honest!). The trend told us how our current people + processes + tools were performing and when they would most likely be complete.
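A crude version of that rate-curve projection can be sketched in a few lines. The feature counts and the choice of a three-week averaging window here are my own invented assumptions, purely for illustration; the point is that the observed rate, not anyone’s estimate, drives the forecast:

```python
# Sketch: project when feature commitment will finish, assuming the
# recent commitment rate holds. All numbers are invented.
total_features = 120                       # features needing commitment
committed_per_week = [4, 6, 9, 10, 9, 8]   # observed weekly commitments

committed_so_far = sum(committed_per_week)
recent_rate = sum(committed_per_week[-3:]) / 3.0  # avg of last 3 weeks
remaining = total_features - committed_so_far
weeks_left = remaining / recent_rate

print(f"{committed_so_far} committed, ~{weeks_left:.1f} weeks to finish "
      f"at {recent_rate:.1f} features/week")
```

If a “surge” exercise really changed anything, it would show up as a lasting change in `recent_rate`; in my experience the curve barely moved.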
I’ve managed schedules all my career. The notion of looking at past schedules and seeing how long things took and what the trend was for getting to milestones (i.e., how long it took to achieve each milestone) was rarely employed. The biggest argument against using past schedules was that every project was different so how can we compare, trend or average them out? Besides, it was often the case that the last project was not a roaring success, so no one wanted to use its experience for planning the current project.
Did anyone know this because they had tried to use past performance? Nope. It was always a thought experiment (a theory) that was just “too obvious” to even confirm. Like my weight scale telling me I’m approaching overweight, knowing our recent past history and comparing our current performance to it (i.e., we are comparing our current trend to our historical trend) tells us how we are doing in a manner that is meaningful (at least more meaningful than “we are 90% complete!”).
See more on Are We Managing Or Are We Theorizing?
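Comparing a current trend to a historical one doesn’t require anything fancy. Here is a minimal sketch, with invented milestone durations from three hypothetical past projects, of the comparison described above:

```python
# Hypothetical milestone durations (in weeks) from three past projects,
# compared against the current project. All numbers are invented.
past_projects = {
    "requirements committed": [6, 8, 7],
    "feature complete": [14, 18, 16],
}
current_durations = {"requirements committed": 9}  # later milestones pending

comparison = {}
for milestone, history in past_projects.items():
    historical_avg = sum(history) / len(history)
    actual = current_durations.get(milestone)
    if actual is not None:
        # Positive means we are running slower than our own history.
        comparison[milestone] = actual - historical_avg

print(comparison)  # e.g. {'requirements committed': 2.0}
```

Even this toy comparison says something a status percentage never could: we are two weeks behind our own historical pace at the first milestone, and the later milestones should be planned accordingly.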
In all these cases, using a single metric (e.g., defect backlog count, feature commitment percentage, schedule completion percentage) didn’t give us the information we needed to actively manage the project (unless things were so bad it was obvious). We got a number, and as long as nothing “looked wrong about it,” we just happily reported those numbers and moved on to discuss today’s current issues. Don’t get me wrong, we always had possibly dozens of charts required by some standard, management, or quality initiative that purported to tell us all sorts of relevant info, but they never seemed to be used to make decisions or even discussed in any depth.
As with my medical examination results, I never knew how I was really doing on my projects until I looked at the trends. Until I saw our trend and compared it to past known projects (or medical or industrial or military standards), I was managing without any real knowledge. Knowing the trends, and knowing how those trends looked on past projects, was a profound insight that helped us take stalled projects and under-performing organizations to on-time, good-quality results.
What metrics do you use that might be more useful if you could see the longer term and past trends?