
Forgetting The Past

“Those who do not remember the past are condemned to repeat it.” (George Santayana)

I worked with a VP who had a great memory and a great grasp of detail.  The problem, as I perceived it, was that his organization and span of control had grown beyond his ability to manage everything in his head.  I, with my meager mental abilities, had to rely on putting systems in place, and those systems did a much better job of managing the complexity than his clearly superior intellect did.

I have a lousy memory.  My wife reminds me of this daily.  Yet because of my less-than-notable memory, I construct systems that have memory built into them.  When I put something away, set it down, or file it, I usually think in terms of “what will I probably be doing when I next need this?”  I then set things up so I “stumble” upon it in the natural sequence in which I will need it.  My systems remember for me.

The point is that we all probably need help remembering things.  Remembering them improves the chances that we will plan and manage things well.

My classic example is how the “experts” who deal with issues every day are often quite unaware of how things actually go or how to plan for them.  On many occasions I’ve worked with organizations to try to remove their unrealistic planning expectations.  That usually meant schedules had been driven downward from above and based on estimates that were not rooted in reality.

What I then find, once I’ve freed a development or product team from silly schedules, is that the teams are not very good at making schedule estimates themselves.  They know the schedules imposed on them were no good; they have plenty of proof in the number of projects that significantly missed their schedule and functionality goals.  Yet when freed to do their own planning, they are often just as bad, just as inaccurate, as those downward-directed schedules.

See also Past Performance Does Not Guarantee Future Results – Except Here

I noticed that their planning generally suffered from errors of omission.  What they forgot was precisely what then went wrong, because it had not been accounted for; the things they did remember usually went OK.  In one major effort, where I was a manager on the product side of development, I found that I knew how development performed better than the development managers did.  This was simply because I tracked their performance, and they did not track it for themselves.

Development did track what it called its performance.  The huge difference was that I, on the product side, cared about things from end to end.  The development team only wanted to measure how things went from the point an issue or feature reached them, and only once it was fully understood.  They abstracted away so much of the actual process that the numbers they collected had no correlation with what was actually seen in practice.  The numbers I collected and tracked, on the other hand, matched well with what we observed.
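To make the distinction concrete, here is a minimal sketch of the two ways of measuring the same issues.  The field names and dates are assumptions for illustration, not the tracking either team actually used; the only point is where each clock starts and stops.

```python
from datetime import date

# Hypothetical issue records; the field names and dates are invented for illustration.
issues = [
    {"reported": date(2024, 1, 2), "understood": date(2024, 1, 8),
     "fix_done": date(2024, 1, 11), "delivered": date(2024, 1, 16)},
    {"reported": date(2024, 1, 5), "understood": date(2024, 1, 6),
     "fix_done": date(2024, 1, 9), "delivered": date(2024, 1, 19)},
]

def days(a, b):
    return (b - a).days

# Development's clock: starts once the issue is fully understood, stops at the fix.
dev_view = [days(i["understood"], i["fix_done"]) for i in issues]

# End-to-end clock: from the day the issue was reported to the day the fix was delivered.
end_to_end = [days(i["reported"], i["delivered"]) for i in issues]

print("dev-scoped average (days):", sum(dev_view) / len(dev_view))
print("end-to-end average (days):", sum(end_to_end) / len(end_to_end))
```

Measured from report to delivery, the very same issues look nothing like they do when measured from “fully understood” to “fix done.”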

Development would say it takes them three days to resolve an issue.  I would tell the account team or customer it would be 10 days, and we would just barely make that by working evenings and weekends.  That is because the average we had been seeing was 10 days, more than three times development’s number.  Keep in mind that 10 days was an average, so roughly half the time we would expect to make it and half the time we would miss our mark.  My estimate of 10 days was aggressive, but development’s three days was just loony.  They swore by those three days, however, because a lot of things did get fixed in three days.  Just not the typical, most prevalent ones.
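A tiny worked example shows how both numbers can feel true at once.  The resolution times below are made up purely to illustrate the shape: most fixes really do land in three days, but a few long-running ones pull the average out to ten.

```python
from statistics import mean, median, mode

# Made-up resolution times in days, chosen only to illustrate the shape:
# plenty of quick three-day fixes plus a few long tails that drag the average out.
resolution_days = [3, 3, 3, 3, 4, 5, 8, 14, 21, 36]

print("most common fix time:", mode(resolution_days))        # 3 days: what development remembers
print("median fix time:", median(resolution_days))           # 4.5 days: half the issues take longer
print("average fix time:", round(mean(resolution_days), 1))  # 10.0 days: what planning has to live with
```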

See also Averages Are Powerful

I noticed this in a big way at a Fortune 50 company with worldwide brand recognition.  We were in what were apparently the last stages of product development, just weeks from shipping a product out the door.  I was managing part of the effort at the time, and I had some experience by then with how long it took us to make changes and fix defects.  The development managers, who had been doing this much longer than I had, were often promising to have issues fixed in a few days.  After being part of this process for some time, I started, out of curiosity, to track how long each issue was actually taking to fix.

I made several observations from this tracking.  The first was that whatever issues we were talking about today, there was a good chance we would be talking about a new set of issues in a day or two.  The second was that the usual estimate of fixing an issue in 3-5 days, as so often stated by the vastly more experienced development managers, rarely held.  In fact, the typical fix turned out to take closer to two weeks.
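The tracking itself was nothing fancy.  A minimal sketch of that kind of log, with hypothetical issues, promises, and dates, might look like this:

```python
from datetime import date

# A hypothetical tracking log; the issues, promises, and dates are invented for illustration.
log = [
    {"issue": "login hang",   "promised_days": 4, "opened": date(2024, 3, 1), "fixed": date(2024, 3, 15)},
    {"issue": "report crash", "promised_days": 3, "opened": date(2024, 3, 4), "fixed": date(2024, 3, 7)},
    {"issue": "slow export",  "promised_days": 5, "opened": date(2024, 3, 6), "fixed": date(2024, 3, 22)},
]

actuals = [(entry["fixed"] - entry["opened"]).days for entry in log]
kept = sum(1 for entry, actual in zip(log, actuals) if actual <= entry["promised_days"])

print("actual fix times (days):", actuals)             # [14, 3, 16]
print("average actual:", sum(actuals) / len(actuals))  # 11.0
print(f"promises kept: {kept} of {len(log)}")           # 1 of 3
```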

How could such experienced people in such a successful organization so consistently get their estimates wrong?  How could they be wrong so consistently without anyone ever noticing or mentioning it?

A few reasons seem to be at the root of it:

  • There is so much going on and changing so often that the memory of everything that happened is very limited.  I once asked a development manager about something that happened two weeks earlier, and he laughed and said there was no way he could remember that kind of detail.
  • A lot of issues really were resolved in 3-5 days.  The longer issues, however, the ones that drove the average, disappeared into the background and were worked on by the developers while the managers jumped to the next hot issue that came up.  So what the managers saw up close was rarely the “average” issue, but the hot issue of the day.  The vast majority of issues, the ones that actually constituted the average, were never followed by the managers from beginning to end.
  • No one ever thought to track how long things were taking.  The quality and process folks, who did collect some of this because of the standards they followed, were distant enough from the ongoing activity that at best their numbers came out well after they were needed.  Their numbers were also too bunched together and averaged to be useful.  The best example is, again, the average time to fix a defect.  That average varies over the life of the product: early on, defects sit around for a long time before eventually getting fixed, while late in the cycle issues get fixed very rapidly.  The quality and process folks, being too far from the practical side of management, averaged all the defects together over the life of the product and got a number that was pretty meaningless and provided no strategic or tactical insight for day-to-day management (see the sketch after this list).
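As a sketch of that last point, here is what a lifetime average hides compared to per-phase numbers.  The fix times are made up for illustration; only the contrast matters.

```python
from statistics import mean

# Made-up fix times (days), grouped by when in the product cycle the defect arrived.
fix_times_by_phase = {
    "early development": [45, 60, 30, 52],
    "feature complete":  [12, 9, 15, 8],
    "final hardening":   [2, 3, 1, 2],
}

# The single lifetime average a distant process group might report...
all_fixes = [d for phase_days in fix_times_by_phase.values() for d in phase_days]
print("lifetime average:", round(mean(all_fixes), 1), "days")

# ...versus the per-phase numbers you would actually plan with.
for phase, phase_days in fix_times_by_phase.items():
    print(f"{phase}: {round(mean(phase_days), 1)} days")
```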

There is an oft-stated notion that you improve what you measure.  I would extend that to say you also remember what you measure.  That memory is important not only for further improvement, but for planning and risk management.

I also found that simply being handed the planning data was not always the best way to get it.  If it was our data, it always helped to be the one who analyzed it.  That way, when we saw averages or graphs, we also knew the trade-offs and assumptions that went into those numbers.  Too many managers, having a staff or quality group to hand them numbers, never developed a good feel for them.  Too often they would report numbers that, with a moment’s thought, made no sense.

In one case, the development team was responsible for describing how they would detect and drive down defects.  The exercise was meant to let them resource the defect drive-down phase of the product.  What they did was simply “draw a curve” that front-loaded all the defects they thought they would see and, magically, showed all defects gone on the date they were due to be done.  Since the curve was front-loaded and didn’t match how things had happened historically, as the data came in the actual defect arrivals ran much lower than the expected arrivals.  Wow, they exclaimed, look how much we’ve improved quality!

Needless to say, the defect arrival rate blew right past the date by which every defect was supposed to have been miraculously detected and resolved.  As humorous as this was, it happened in a world-class organization that spent over $100 million developing the product.  If they had actually analyzed their own data and watched it as it came in during development, they would not have been so cavalier about generating such useless plans.
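A simple week-by-week comparison of the planned curve against actual arrivals would have exposed the problem early.  The counts below are invented only to illustrate the shape: a front-loaded plan against the slower ramp that history tends to show.

```python
# Made-up weekly defect counts, invented only to illustrate the shape of the problem.
planned_per_week = [40, 35, 30, 20, 10, 5, 0, 0]    # front-loaded plan: everything found by week 6
actual_per_week  = [10, 15, 20, 25, 25, 25, 20, 15] # the slower ramp history tends to show

def cumulative(xs):
    total, out = 0, []
    for x in xs:
        total += x
        out.append(total)
    return out

for week, (planned, actual) in enumerate(zip(cumulative(planned_per_week),
                                             cumulative(actual_per_week)), start=1):
    # Early on, actuals trail the plan ("look how much we've improved quality!");
    # later they blow straight past the date the plan said everything would be done.
    print(f"week {week}: planned {planned:3d}  actual {actual:3d}  gap {actual - planned:+d}")
```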

Knowing how our organization performed, by tracking actual historical performance, made a huge difference in improving our on-time delivery of new products.  Not knowing how we performed simply made unrealistic claims the normal way we did business, which in turn meant not delivering on time and not delivering with good quality.

How do you know what the real historical performance of your team is?
