Project Management Metrics Madness? Don’t Do It!

It happened pretty regularly. To make some functionality work on the project we needed something else that we had overlooked. No problem. We made the change, tagged it as a missed requirement in our project management tool, and approved it. A couple of weeks later the additional functionality was completed and submitted to be put into the product. It was submitted as a “new requirement.” It showed up as scope creep. I called up the project manager and explained that they had tagged it incorrectly. It was a missed requirement, technically a requirements defect. Please change it to a missed requirement.

“Oh no!” she practically yelled, “It is a new requirement! No one told us it was needed. If we had been told we would have done it!” I tried to explain that yes, for her internal team it looked like something new, but from a product perspective the development team had missed a dependent requirement, so it was a missed requirement. She would have none of it. It was new work for her team, so she was categorizing it as a new requirement! Jeepers, I thought, no wonder the metrics seemed so confused.

When we finally delivered this huge project, on schedule, I was interested to see a group working on analyzing the requirements trend on the project. Since I had managed these requirements, I offered to send them all my data on the requirements (new, changed, deleted, etc.). They were extremely happy with this and went about analyzing the data I supplied.

I later got a copy of the requirements trend analysis. Their conclusion? Rampant requirements creep! Oh, boy. How could that be? We had some growth, but we had held the growth down to a handful of additional features in a portfolio that had almost 200 requirements. I was rather proud of the fact that we didn’t need to add a lot of new features and we launched on time.

As we discussed these conclusions and the various charts they had produced, the analysis team faced a dilemma. The traditional claim was that massive requirements creep was the primary reason we had always been late in the past. Yet, if their analysis was correct, we had massive requirements creep and still delivered on time with a product of good quality. So if I were to sign off on the report, we would be agreeing that rampant requirements creep did not necessarily result in late projects. This they did not want to say. I offered the other conclusion, the one my raw data showed: we actually had very little requirements creep. So while I still didn’t necessarily believe that requirements creep had been the primary source of problems in the past, if we used my actual data and charts of what had happened, we would see very few requirement changes, which would seem logical given our on-time delivery. Eventually, the discussion just died off. I was never asked to “sign off” on the analysis and I never heard anything about it again.

How can we get into a situation where we can’t clearly see whether we have requirements problems or not? The story at the beginning of the article provides a hint. The data discipline of the organization was not very high. How a change got tagged was just as often a political decision, or a way to get cooperation from a team, as it was an attempt to answer a specific question such as “how much have our requirements grown over the course of the project?” For my part, as I tracked the changes, I independently recorded them as simply as possible (e.g., new, changed, administrative updates). The requirements trend analysis team, “knowing” that requirements creep is always a problem, decided that many of the administrative actions (e.g., splitting one requirement into two parts) must have constituted new requirements. They re-categorized as much as possible as “new requirements” to support the organizational belief that we always had large increases in requirements.

With any metric, the first thing we need is a question we truly want to answer. If an organization generates lots of data and charts that don’t seem to answer any clear question, or that are at odds with what we know, take a good look at how the data is recorded. We may need to back up, first figure out the real question we want to answer, and then get some data discipline behind it to avoid metrics madness.
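The tagging discipline described above can be sketched as a simple tally: fix the change categories up front, then answer one specific question ("how much did scope actually grow?") directly from the tags. The category names, log entries, and baseline count below are illustrative assumptions, not the actual project data.

```python
from collections import Counter

# Hypothetical change log: each entry is (requirement_id, category).
# Categories are fixed up front so tagging is not left to negotiation:
#   "new"     - genuinely new scope (the only kind that counts as creep)
#   "missed"  - a dependent requirement that was overlooked (a requirements defect)
#   "changed" - modification to an existing requirement
#   "admin"   - administrative action (e.g., splitting one requirement into two)
CHANGE_LOG = [
    ("REQ-101", "missed"),
    ("REQ-102", "new"),
    ("REQ-103", "admin"),
    ("REQ-104", "changed"),
    ("REQ-105", "admin"),
]

def requirements_creep(baseline_count, change_log):
    """Tally changes per category and compute creep as the percentage of
    genuinely new requirements against the baseline portfolio size."""
    counts = Counter(category for _, category in change_log)
    creep_pct = 100.0 * counts["new"] / baseline_count
    return counts, creep_pct

counts, creep_pct = requirements_creep(baseline_count=200, change_log=CHANGE_LOG)
print(counts)
print(f"{creep_pct:.1f}% requirements creep")  # 1 new item against 200 = 0.5%
```

Note that if the analysis team re-tags the two "admin" entries as "new", the same log reports three times the creep, which is exactly the kind of re-categorization the story describes.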


3 thoughts on “Project Management Metrics Madness? Don’t Do It!”

  1. Bob Light says:

    True that Bruce, on the laughing vs crying.

    I have definitely been guilty on occasion of compiling numbers into “interesting” metrics, but indeed, that often leads to demotivating or undesired behavior though the intent was the opposite.

    I find with metrics, especially early attempts, that they must by necessity be fluid and questioned constantly to make sure they are useful and meaningful.

    Hope this didn’t get too far off your post!

  2. Bruce Benson says:


    That was definitely the case. The development teams were being measured by such things as defects inserted, repaired, new requirements, missed requirements, etc. This was intended to bring objectivity to these issues, but instead became another game to be played.

    This was in part because the metrics were never designed to answer specific questions – just to produce numbers that would be “interesting” to see. This stressed teams that had no idea how the data would be used – so it motivated less-than-ideal behaviors.

    I found things like this very humorous, as it was easier to laugh than to cry and bang one’s head on the wall!



  3. Bob Light says:

    Is that the sound of a head banging on the wall..;>)

    A great post, describing an often-repeated scenario in the business world, and not limited to projects. The issue is not just creating good stand-alone metrics (“number of new requirements added to a project after it has started” being the one in this case), but understanding how metrics are used in an organization. For instance, if the dev team’s performance review is based on this, then they absolutely will argue in support of calling it a new requirement vs. a missed one, which might be a negative metric to them.

    These situations arise; that is a given. The best way to resolve them is via strong leadership and putting a good piece of padding on the wall…

Comments are closed.