Project Management Needs Business Intelligence!
Business Intelligence is insight into how our business is operating. The development of new products is a business activity, so why not use BI techniques in our project management? In my experience, projects were more successful when we leveraged our BI than when we relied on traditional project management tools alone.
There were maybe a half-dozen of us. When a significant software defect was reported, the whole bunch of us would “jump on it.”
We weren’t software developers. We were project managers. We would rapidly find the poor developer who was working on the problem, pepper him or her with questions, demand a solution that same day, and then run off to report the “latest and greatest” to more senior management. You see, knowing the scoop on the current “hot issue” was what got project managers noticed and rewarded. It showed that we were on top of the issues in our projects.
After watching this go on for a few weeks, I decided to focus on the other 99% of the defects. This was a possible death knell for me. I was no longer standing in front of high-level project reviews and breathlessly reporting on the latest progress on the current “stop shipment” defect. An entertaining part of these meetings was when the individual PMs would try to one-up each other. There was nothing like announcing the problem had a solution right after another PM had just reported that the defect had not yet been fixed. Touché! One point for me!
Silly. Yet the entire product and project management structure was focused on these individual defect reports. Reporting often went all the way up to the COO of one particular Fortune 50 company.
The problem was, these individual defects made very little difference. They were predominantly noise. You see, some years earlier I had been managing defects and looking at the patterns in them, and I had found a rather profound and consistent pattern across all our products. Sixty-six percent of all defects reported turned out to be non-issues. Half of this 66% (so one-third of the total) were duplicate reports of known existing issues. The other half (the last third) turned out to be problems in testing or configuration of the product, not real software defects. It took about seven days, on average, to determine that a given defect was a non-issue. It likewise took about seven days, on average, to confirm that a defect was a true new one and to have a working solution for it. So two-thirds of the time, the final report on a defect was “oops, never mind, no software problem found.”
If 60 defects were reported today (not unusual during the middle of a project), only 20 of them would turn out to be issues of real interest. Even then, the average time to complete a fix for an issue was fourteen days, and during those fourteen days we would see another 50-60 defects reported per day. Today’s hot issue would be overwhelmed by the hot issue that came in tomorrow.
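To make that breakdown concrete, here is a minimal sketch of the kind of classification described above. It assumes a CSV export from the defect tracker with hypothetical `opened`, `closed`, and `resolution` columns; the actual system was queried through Microsoft Access, and these column names and resolution codes are invented for illustration.

```python
import pandas as pd

# Hypothetical export from the defect tracking system; column names
# and resolution codes are illustrative, not the actual schema.
defects = pd.read_csv("defects.csv", parse_dates=["opened", "closed"])

closed = defects.dropna(subset=["closed"])
total = len(closed)

# Classify each closed defect by its resolution code.
dupes     = closed[closed["resolution"] == "duplicate"]
non_bugs  = closed[closed["resolution"].isin(["test-error", "misconfiguration"])]
real_bugs = closed[closed["resolution"] == "fixed"]

print(f"duplicates:        {len(dupes) / total:.0%}")     # ~1/3 in the article
print(f"test/config noise: {len(non_bugs) / total:.0%}")  # ~1/3
print(f"real defects:      {len(real_bugs) / total:.0%}")  # ~1/3

# Average days from report to disposition, per class.
for name, group in [("noise", pd.concat([dupes, non_bugs])), ("real", real_bugs)]:
    days = (group["closed"] - group["opened"]).dt.days.mean()
    print(f"avg days to close ({name}): {days:.1f}")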
The dynamics were the same when reporting status. It was great fun in a meeting, while one PM was reporting on the current hot topic to the enraptured attention of more senior management, for another PM to announce that an even hotter issue had just been reported. The entire meeting’s focus would shift to this new issue; the old issue was, as often as not, never mentioned again. You see, if an issue had been around for a few days, it was never as critical as the shiny new issue that had just shown up today. Not knowing about an issue, not being able to rattle off its details and repair actions on demand, was taken as a sign that we were not on top of the project. This applied to project managers and senior managers alike; senior managers often found their own bosses grilling them for details based on the latest buzz generated by today’s incoming defect reports.
Instead of participating in this rodeo, I just cheerily reported the current arrival rate and departure rate of our defects. The departure rate counted defects that were either fixed or identified as duplicates of existing issues or as product misconfigurations. I also reported on how long defects were taking to be fixed. The average time to determine whether a defect was real was seven days. The average time to fix a real defect (fixed in a product build) was about fourteen days. The killer number came when I reported that the time needed to complete just about all the defects (95% of them) reported in any given period was running consistently at 28 days. This last number meant that when we got in a new crop of 50 issues today, once they all got sorted out, it would be on the order of four weeks before they were all resolved. Heresy! We needed everything fixed by the end of the week to meet our scheduled delivery date!
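For readers who want to reproduce this style of reporting, here is a hedged sketch of the three numbers described above: the arrival rate, the departure rate, and the tail of the time-to-close distribution. It reuses the same hypothetical CSV export as the earlier sketch.

```python
import pandas as pd

defects = pd.read_csv("defects.csv", parse_dates=["opened", "closed"])

# Weekly arrival and departure rates; a "departure" is any disposition:
# fixed, duplicate, or misconfiguration.
arrivals   = defects.set_index("opened").resample("W").size()
departures = defects.dropna(subset=["closed"]).set_index("closed").resample("W").size()

print("recent weekly arrivals:\n", arrivals.tail())
print("recent weekly departures:\n", departures.tail())

# Time-to-close distribution for everything already dispositioned.
age_days = (defects["closed"] - defects["opened"]).dt.days.dropna()

print(f"mean days to close:            {age_days.mean():.1f}")
print(f"95th percentile days to close: {age_days.quantile(0.95):.1f}")  # ~28 in the article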
I then inevitably added insult to injury by reporting that we still had a steady, though decreasing, defect arrival rate. This arrival rate, which dropped on average about 20% per month, indicated we had several months of significant incoming defects still ahead of us. Add the fact that, once defects came in, it took four weeks to sort them all out, and we were many months away from the product being ready to ship. In a moment of clarity, and maybe despair, a VP of Development once told me, “Yes Bruce, I’ve seen the numbers, but we have to ship by next month for the company to survive.” Four months later, we finally shipped, as predicted by the defect trend.
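The “many months” claim follows from simple arithmetic: arrivals falling about 20% per month decay geometrically, so the month when they drop below an acceptable shipping level can be computed directly. A sketch, with illustrative numbers; the shipping threshold in particular is an assumption, not a figure from the article:

```python
import math

arrivals_per_day = 50    # current incoming defect rate (illustrative)
monthly_decay    = 0.20  # arrivals drop ~20% per month (from the article)
ship_threshold   = 20    # acceptable arrivals/day at ship (an assumption)
sort_out_weeks   = 4     # ~95% of a period's defects closed within 28 days

# Months until arrivals decay below the threshold: n = log(t/a) / log(1-d)
months = math.log(ship_threshold / arrivals_per_day) / math.log(1 - monthly_decay)
print(f"~{months:.1f} months until arrivals reach {ship_threshold}/day,")
print(f"plus ~{sort_out_weeks} weeks to sort out the final batch.")
```

With these numbers the projection comes out at roughly four months, which is about how the story above played out; with a stricter threshold the projection stretches accordingly.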
Where did I get these defect numbers, patterns, and trends? I pulled this insightful and predictive data from the company defect reporting system. I downloaded the entire defect database into Microsoft Access (a wonderful tool) and then computed the various rates and trends. I did this first thing every morning when I got into work. The whole process, which I had semi-automated using Visual Basic for Applications as my mashup tool, took about 30 minutes to download and crunch the numbers. If I tried to download at any other time of day, it could take hours, since the system was heavily loaded by everyone using the defect reporting system. I had the advantage of having been a programmer, so I could overcome the barriers to getting and using the defect data. Luckily, such business systems are increasingly accessible via web services or otherwise enabled to support real-time business intelligence analysis (see, for example, “Net-Gen BI Is Here,” InformationWeek, Aug. 31, 2009).
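The mashup described above was Access plus VBA; as a modern equivalent, here is a sketch of the same semi-automated morning pull against a hypothetical web-service endpoint. The URL, parameters, and field names are invented for illustration, not the API of any real defect tracker.

```python
import pandas as pd
import requests

# Hypothetical REST endpoint exposed by the defect tracking system.
URL = "https://defects.example.com/api/export"

def morning_pull() -> pd.DataFrame:
    """Download the full defect set and normalize it for analysis."""
    resp = requests.get(URL, params={"format": "json"}, timeout=300)
    resp.raise_for_status()
    df = pd.DataFrame(resp.json())
    for col in ("opened", "closed"):
        df[col] = pd.to_datetime(df[col], errors="coerce")
    return df

if __name__ == "__main__":
    defects = morning_pull()
    defects.to_csv("defects.csv", index=False)  # feeds the analyses sketched above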
The company did have a web site that displayed various defect reports, which anyone could access. It was generally used to see what was happening on a daily or weekly basis. Monthly and quarterly trends were available but rarely used; the company was very tactically oriented, and longer-term trends didn’t seem to make sense to people trying to ship a product by the end of the week. The most important information, in my experience the arrival rate of defects, was charted along with other defect data, which buried the arrival rate and its trend and made them hard to see. There was also a propensity to show daily defect numbers, which jumped around from day to day and made long-term trends almost completely invisible. Averages of how long defects took to fix were, very simply put, calculated incorrectly on the official site. Humorously, these incorrect averages came out even larger than the ones I showed. This served to make all such averages suspect: since the “official” calculation was so clearly out of line, averages in general could be safely ignored. (However, see Knowing Your Average – Project Management Tools.)
I could also readily see what past defect curves had looked like over the life of a project by using previous products’ defect data. I used these, adjusted for the current project, as projections of expected defect arrivals (see Defects Are Your Best Friend – Project Management Tools). This business intelligence data was much more reliable and predictive than anything we had collected into Microsoft Project plans. Status reports and red-yellow-green charts likewise provided no real objective data for assessing how the project was actually progressing. Don’t get me wrong: if the reporting of tasks completed, resources assigned, and so on had been done with rigor, those could have been very good indicators. But data coming directly from the development business systems everyone used for daily work was automatic and very reliable.
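One way to use a prior product’s curve as a baseline, sketched under the same assumptions as the earlier examples: fit the declining portion of the old arrival curve, then project the new project forward from its current volume. File names and figures are illustrative.

```python
import numpy as np
import pandas as pd

def fit_decay(weekly_arrivals: pd.Series) -> float:
    """Fit an exponential decay rate to a declining weekly arrival curve."""
    y = np.log(weekly_arrivals.to_numpy(dtype=float))
    x = np.arange(len(y))
    slope, _ = np.polyfit(x, y, 1)  # log-linear fit
    return slope                     # negative slope => decaying arrivals

# Decay rate observed on a completed, similar product (prior project data).
old_curve = pd.read_csv("old_product_weekly.csv")["arrivals"]
rate = fit_decay(old_curve)

# Project the current product forward from this week's arrival count.
current_weekly = 300  # arrivals this week on the new project (illustrative)
weeks = np.arange(26)
projection = current_weekly * np.exp(rate * weeks)
print(pd.Series(projection.round(), index=weeks, name="projected arrivals"))
```

The rescaling step is where “adjusting for the current project” happens: the shape comes from the old product, the starting level from the new one.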
What I found over the years is that the best indicators of how things were going usually came from the business systems everyone used every day to develop the product. Reporting exclusively through PowerPoint slides, Microsoft Project, or even an enterprise project management tracking system was always labor intensive and error prone. I have also used requirements tracking systems, software configuration systems, product build management systems, and product release tracking systems to get good information on how things were really going. These business systems had the virtue of being updated continuously, as people did their work, and were less vulnerable to data “tweaking” (i.e., making the data look better than it was). They were also generally available to everyone, so that when someone used the data to make a report, anyone could pull the data themselves and compare it to what was being reported.
The final and best characteristic of using actual business system data was that we could often get trend data from completed past products and projects. We could see what things had looked like when products similar to ours were produced, which let us model how our own product was expected to go. This business data gave us objective baselines for knowing when our projects were off course and when the problems we were encountering were non-issues (that is, normal for a project), often in spite of a half-dozen project managers breathlessly reporting on today’s killer issues.
Look into using your existing business systems as part of your project management toolset. The business systems everyone uses to do their daily jobs can often provide more current and more predictive information than traditional project management tools alone.
5 thoughts on “Project Management Needs Business Intelligence!”
Ok… but, you do realize this is exactly what ALM suites already do — when they are used across the lifecycle — right?

Mark,

Good point, though I tried to allude to something similar with “enterprise project management systems.” This deserves a whole article by itself, but in a nutshell, like any tool set, an ALM must deliver on its vision and promise. “What ALM suites already do…” might be better said as “already trying, in part, to do.”

Good comment. Thanks.

When companies commit *strategically* to ALM and adopt it *correctly* across the lifecycle, it does provide (at a minimum) what you describe, because that is simply data aggregation.

Of course, only 1% do it strategically and correctly, thus the common perception that you point out, that ALM is still not fully baked…

I’m a tools nut. I use them or build them, but I’ve gotten most of the bang for my buck by getting them to work together in ways few had anticipated. I’ve also coaxed out of them information and insights that few thought could be there. So I have no illusions about what tools can do when used well.

I will say that just about every advocate of a tool, tool suite, development methodology, or you-name-your-silver-bullet eventually says something like “give your soul over to this and you’ll reap…” Again, in 30+ years I’ve never seen reality meet the vision (I lived through ICASE in the 90s, for example).

However, I’d love to see more examples of ALMs that provide:

1. Defect arrival curves and projections to answer the question “when will we be done?”
2. Project data (milestone completion curves, requirements curves, defect curves, etc.) that answers the question “what schedule should we allocate for this project?”

I’m not saying that systems don’t try to answer these questions, only that they often – too often – come up short in practical use (then I get a call).

Bruce