The seminal work of Lauri Koskela in 1992 challenged the Construction Management community to consider the inadequacies of the time-cost-quality tradeoff paradigm. … Analysis of project plan failures indicated that “normally only about 50% of the tasks on weekly work plans are completed by the end of the plan week” and that constructors could mitigate most of the problems through “active management of variability, starting with the structuring of the project … and continuing through its operation and improvement.” (http://en.wikipedia.org/wiki/Lean_construction)
This is bad, right? Only about half the tasks scheduled get accomplished in the required week. I am not an expert in this kind of construction but I do know something about information technology (IT) and software projects. What jumped out at me from this reference in Wikipedia was the thought “great, I can work with this!” Why is that?
Let’s start with two additional thoughts that came to mind from my personal experience. The first is that whatever we plan, that is, the tasks we say will get accomplished in a particular period of time, is generally just an estimate, an educated guess, as to what will really get done. The second notion is that if the above performance is really consistent, averaging out to 50% each week, then we can readily adjust our next project schedule to take it into account (for more on this, know your average and get the schedule right).
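To make that adjustment concrete, here is a minimal sketch. The task counts and rates are invented for illustration; the point is simply that a stable historical average turns a naive schedule into a realistic one.

```python
import math

def adjusted_weeks(total_tasks, tasks_planned_per_week, completion_rate):
    """Estimate schedule length from the observed weekly completion rate."""
    effective_tasks_per_week = tasks_planned_per_week * completion_rate
    return math.ceil(total_tasks / effective_tasks_per_week)

# Naive plan: 100 tasks at 10 per week looks like 10 weeks.
naive = adjusted_weeks(100, 10, 1.0)

# History says only ~50% of planned weekly tasks actually finish,
# so the realistic schedule is about twice as long.
realistic = adjusted_weeks(100, 10, 0.5)

print(naive, realistic)  # 10 20
```

Nothing here is sophisticated, and that is the point: once you accept that the 50% figure is a property of your organization rather than a failure, the schedule correction is one line of arithmetic.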
Both of these notions are important because we often hang onto the idea that a plan represents something fixed, and that we want it to proceed in a fixed, deterministic, one-thing-follows-the-next progression. I’ve come to realize that any given plan is an approximation to what will really happen, and that planning has to take this into account. Helmuth von Moltke captured this notion as “No plan of operations extends with certainty beyond the first encounter,” but he also hinted that planning is still critical so that we understand the range of options we can employ as things inevitably deviate from the initial plan.
Tools like Microsoft Project may reinforce the notion that a plan has to be a series of fixed, dependent (predecessor and successor) tasks. Real life is often not like this at all. In software-intensive development we may need certain capabilities or equipment developed before we can fully develop other capabilities, but this is not always that much of a limiting dependency. I’ve never known a team with such a dependency to ever be fully blocked when that dependency was late. Teams always came up with ways to continue to make some progress, even when dependencies were not available.
In one very successful project, the hardware needed to run the software was months late. The team, always expecting the hardware “next week,” eventually started to test their software by injecting pretend hardware messages to see if their software logic at least performed as expected. Once the hardware finally arrived, they were amazed to discover that their software fired up on the hardware and ran practically perfectly the first time. The lack of real hardware forced them to spend significantly more time studying the specification and what could go wrong, and in doing so they increased the quality of their software dramatically.
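That workaround can be sketched as a simple test double: a fake message source that stands in for the missing hardware. All the names and message shapes below are hypothetical, not from the project described; the idea is that the application logic never knows whether its messages came from a real device or a simulator.

```python
from dataclasses import dataclass

@dataclass
class Message:
    sensor_id: str
    value: float

class FakeHardware:
    """Stands in for the real device: replays scripted messages,
    including the edge cases the specification says could occur."""
    def __init__(self, scripted_messages):
        self._messages = list(scripted_messages)

    def read(self):
        # Real hardware would block on a bus; we just pop the next message.
        return self._messages.pop(0) if self._messages else None

def process(source, alarm_threshold=100.0):
    """The application logic under test; it works against any message source."""
    alarms = []
    while (msg := source.read()) is not None:
        if msg.value > alarm_threshold:
            alarms.append(msg.sensor_id)
    return alarms

# Drive the logic with pretend hardware messages, spec edge cases included.
fake = FakeHardware([Message("temp-1", 42.0), Message("temp-2", 150.0)])
print(process(fake))  # ['temp-2']
```

The design choice that made the team’s late-hardware swap painless is the same one shown here: the logic depends only on a `read()` interface, so real hardware can replace `FakeHardware` without touching `process` at all.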
So I always characterize our dependency-laden project plan as a general estimate of dependencies and their potential effects, not necessarily an unyielding description of what needs to be done. In fact, as large software-intensive projects progressed, I’d periodically “compute” the critical path, and it would generally change every few weeks. I always thought it looked like a horse race, showing which tasks appeared critical at any point in the project, with the set of critical tasks changing over time. This reminded me that while computing a critical path is an interesting problem someone solved algorithmically, putting too much faith or reliance on such a “critical path” can cause someone to mismanage or misunderstand the normal risks and variations found in a typical project.
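The “horse race” is easy to reproduce: the classic critical-path computation is just a longest-path walk over the task dependency graph, and re-running it whenever duration estimates change shows which chain of tasks is currently critical. The tiny task graph below is invented for illustration.

```python
from graphlib import TopologicalSorter

def critical_path(durations, predecessors):
    """Longest path through a task DAG: returns (total length, task list)."""
    finish = {}  # task -> (earliest finish time, predecessor on the path)
    for task in TopologicalSorter(predecessors).static_order():
        preds = predecessors.get(task, [])
        best = max(preds, key=lambda p: finish[p][0], default=None)
        start = finish[best][0] if best else 0
        finish[task] = (start + durations[task], best)
    # Walk back from the latest-finishing task to recover the path.
    end = max(finish, key=lambda t: finish[t][0])
    path, node = [], end
    while node:
        path.append(node)
        node = finish[node][1]
    return finish[end][0], path[::-1]

durations = {"design": 3, "hardware": 8, "software": 5, "integrate": 2}
predecessors = {"hardware": ["design"], "software": ["design"],
                "integrate": ["hardware", "software"]}
length, path = critical_path(durations, predecessors)
print(length, path)  # 13 ['design', 'hardware', 'integrate']
```

Change the hardware estimate from 8 to 4 and the software chain becomes critical instead, which is exactly the horse race: the algorithm is deterministic, but its answer is only as stable as the estimates you feed it.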
Understanding our discipline of project management includes mapping our tools and techniques onto the real world and seeing how they help us with real-world situations. Any tool will only approximate what is going on in our project (e.g., only 50% of the planned tasks get completed). While such situations might look bad, as experienced project managers who have done our planning we will immediately see the opportunities and know how to adjust our execution to keep the project on a successful track.
What insights would you provide to a new project manager on how well our tools and techniques really work to manage real projects?