It happened pretty regularly. To make some functionality work on the project, we needed something else that we had overlooked. No problem. We made the change, tagged it as a missed requirement in our project management tool, and approved it. A couple of weeks later the additional functionality was completed and submitted to be put into the product. It was submitted as a "new requirement." It showed up as scope creep. I called up the project manager and explained that they had tagged it incorrectly. It was a missed requirement, technically a requirements defect. Please change it to a missed requirement.
"Oh no!" she practically yelled. "It is a new requirement! No one told us it was needed. If we had been told, we would have done it!" I tried to explain that yes, from her internal team's perspective it looked like something new, but from a product perspective the development team had missed a dependent requirement, so it was a missed requirement. She would have none of it. It was new work for her team, so she was categorizing it as a new requirement! Jeepers, I thought, no wonder the metrics seemed so confused.
When we finally delivered this huge project, on schedule, I was interested to see a group analyzing the requirements trends on the project. Since I had managed these requirements, I offered to send them all my data on them (new, changed, deleted, etc.). They were extremely happy with this and went about analyzing the data I supplied.
I later got a copy of the requirements trend analysis. Their conclusion? Rampant requirements creep! Oh, boy. How could that be? We had some growth, but we had held it down to a handful of additional features in a portfolio of almost 200 requirements. I was rather proud of the fact that we didn't need to add a lot of new features and we launched on time.
As we discussed these conclusions and the various charts they had produced, the analysis team faced a dilemma. The traditional claim was that massive requirements creep was the primary reason we had always been late in the past. Yet if their analysis was correct, we had massive requirements creep and still delivered on time with a product of good quality. So if I were to sign off on the report, we would be agreeing that rampant requirements creep did not necessarily result in late projects. This they did not want to say. I offered the other conclusion, the one my raw data showed: we actually had very little requirements creep. So while I still didn't necessarily believe that requirements creep had been the primary source of problems in the past, if we used my actual data and charts of what had happened, we would see very few requirements changes, which would seem logical given our on-time delivery. Eventually, the discussion just died off. I was never asked to "sign off" on the analysis, and I never heard anything about it again.
How can we get into a situation where we can't clearly see whether we have requirements problems or not? The story at the beginning of the article provides a hint. The data discipline of the organization was not very high. What got tagged as what was just as often a political decision, or a way to get cooperation from a team, as anything else. Tagging was not always done with the purpose of answering a specific question such as "how much have our requirements grown over the course of the project?" For my part, as I tracked the changes, I independently recorded them as simply as possible (e.g., new, changed, administrative updates, etc.). The requirements trend analysis team, "knowing" that requirements creep is always a problem, decided that many of the administrative actions (e.g., splitting one requirement into two parts) must have constituted new requirements. They re-categorized as much as possible to "new requirements" to support the organizational belief that we always had large increases in requirements.
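The effect of that re-categorization is easy to demonstrate with a few lines of code. The sketch below is purely illustrative (the change log and the counts are hypothetical, not my project's actual data); it shows how the same raw change log yields very different "creep" numbers depending on whether administrative actions are counted as new requirements:

```python
from collections import Counter

# Hypothetical change log: the category recorded for each requirements change.
change_log = [
    "new", "new", "changed", "changed", "changed",
    "administrative", "administrative", "administrative",
    "administrative", "deleted",
]

def creep_count(log, treat_admin_as_new=False):
    """Count 'requirements creep' under a given categorization policy."""
    counts = Counter(log)
    creep = counts["new"]
    if treat_admin_as_new:
        # The analysis team's policy: assume administrative actions
        # (e.g., splitting one requirement in two) were really new work.
        creep += counts["administrative"]
    return creep

print(creep_count(change_log))                           # recorded data: 2
print(creep_count(change_log, treat_admin_as_new=True))  # re-categorized: 6
```

Same data, triple the apparent creep — which is why the categorization policy has to be fixed before the chart is drawn, not after.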
With any metric, the first thing we need is a question we truly want to answer. If an organization generates lots of data and charts that do not seem to answer any clear question, or that are at odds with what we know, take a good look at how the data is recorded. We may need to back up, first figure out the real question we want answered, and then put some data discipline behind it to avoid metrics madness.