We had a great discussion on LinkedIn on the article The One Key Step For Successful Project Improvements. The discussion strayed into a topic near and dear to me: how to get a good estimate of an overall project schedule.
I am going to reproduce part of the discussion below in this article, but if you are on LinkedIn you may be able to access it here.
This part of the discussion started when Yuri Tan remarked:
Learning from past mistakes requires, as Bruce says in his get-schedule-right article, that there be adequate records of completed projects. Yet, I have worked on projects in organizations where either no records were kept or incomplete/wrong records were kept.
Here are the next few comments in the discussion, which illuminate what often happens in a company and some of the actions that help an organization get to realistic schedule commitments:
[Bruce] Agreed. What I would do is dig to find the evidence needed to see how we performed in the past. Official records were often misleading. Instead, I could often find how long something really took, for example, by looking at successive meeting minutes that were kept over the course of the project. (This approach was natural for me, as I spent some time as an intelligence analyst in the military. We learned not to trust “official” records and statements, but look for indications of what truly was going on. Forensics may be a better skill than project management training :D)
In one typical find, I was looking at the Microsoft Project plan for a product similar to the one I was about to start, to see how that project went. The completed plan looked pretty good. Total duration was good. Detail was good.
Luckily, however, the project manager had versioned the plan, so I could see *all* past revisions. The actual project, replanned maybe a dozen times, took over twice as long from true beginning to true end. It turned out this "twice as long" duration was much closer to how long similar products would eventually take than the final version of the plan suggested.
I’ve just about always been able to find the evidence for past performance, though it often required very deep digging. Once this is dug up, we then start to have a basis to reality check current and future commitments. This kind of evidence is just about always better than even the collective recollections of managers and experts who lived through the efforts.
[Lanis] Great points yet again.
Just curious, when you pointed out the reality of a previous project vs. the recollection, what was the business’s reaction, especially if others gave much lower and invalid estimates?
Response from the business/account managers:
It can’t take that long.
I can’t tell my customer that.
How come it is now taking so long? (it is the time it has always taken)
Response from senior managers:
1. We have to do it faster than that.
2. Our competitors don’t take that long.
3. OK, we’ll have a goal to do it 20% faster (which simply re-created the historical aggressive schedules that were all missed; this particular notion came from a Fortune 500 CEO)
The response from other managers:
1. Why can’t YOU do it faster?
2. Why do we have to have the same results we’ve had in the past?
3. Using past performance is the “old way” we want to do it a “new” way (then proposed a schedule that matched the failed schedules from the past)
4. Development didn’t say it would take this long, they accepted the shorter schedule we proposed (using this data, we knew better how development performed than development itself did)
The response from our customers?
1. It is about time you started giving us realistic estimates
2. You were “too nice” in the past and didn’t push back on our demands
3. Your plan looks “relaxed” compared to previous plans (no objection, just an observation)
4. This is the first time in our memory that you delivered around when you said you would deliver (upon completion of the project)
Finally, I often did a bar graph showing the past actual performance (e.g. 15 months), the fastest we ever launched a product (e.g. 13 months), and then the proposed schedule (e.g. 11 months). The difference between the average and the proposed would generally be over two sigma, so the odds of success were about 5%, in addition to the fact that we had *never* delivered that fast before.
The final data point was that we had always proposed short schedules (e.g. 11 months) and had *never* hit them. This was usually in response to “but we will this time, we are doing something different!”
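The sigma arithmetic above can be sketched in a few lines. The mean and proposed figures come from the example in the text; the standard deviation is an assumed value for illustration, since the discussion doesn't give one, and real project durations are of course only roughly normal:

```python
from statistics import NormalDist

# Example figures from the text; sigma is an assumption for illustration.
historical_mean = 15.0  # months: average actual duration of past projects
proposed = 11.0         # months: the proposed schedule
sigma = 2.4             # months: assumed spread of past durations

# How many standard deviations below the historical mean is the proposal?
z = (proposed - historical_mean) / sigma

# Under a normal model, P(actual duration <= proposed schedule)
odds_of_success = NormalDist().cdf(z)
print(f"z = {z:.2f}, odds of finishing by {proposed:.0f} months = {odds_of_success:.1%}")
```

With an assumed sigma of about 2.4 months, the proposed schedule sits roughly 1.7 sigma below the mean and the modeled odds of success come out near 5%, consistent with the ballpark quoted above; a tighter sigma pushes the odds even lower.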
I try to use the data to do most of the talking for me, to let it sink in. I try for everyone to see that data well in advance of meetings, so it is no surprise.
I find that most senior managers stay fairly quiet and let their subordinate managers challenge the data. Generally the senior/executive VPs appear to wait and hope that there will be a flaw in the data or sufficient doubt about its accuracy. I believe that even if they personally feel the data is probably correct, the business culture often rewards them better for trying and failing than it does for saying “we can’t do it in that period of time.”
Making a real change is rarely easy. A real change, one that goes to the heart of a fundamental problem, is usually very difficult. It is difficult not because it is hard work technically or managerially. It is difficult because it often requires people to acknowledge that things they’ve said and done for a long time may no longer be appropriate. They’ve put a lot of themselves into trying to cause things to be a certain way, and the change appears to invalidate too much of their past efforts.
Getting to on-time, good-quality project delivery is hands-down the best confirmation that we made a good choice.