
How To Make Our Management Blunder Proof

"The Excel mistake is more of a public-relations disaster than a significant slip. Reinhart and Rogoff wrote that average growth in high-debt countries was -0.1 percent. The UMass researchers said it was really 2.2 percent. But the spreadsheet error accounted for only about 0.3 percentage point of that difference. … Even the UMass researchers found that higher debt is associated with slower growth." (A Spreadsheet Blunder Upends the Debt Debate, Bloomberg Businessweek, Apr 22, 2013)

Start By Making It Objective

I had just sent out my project status, showing where we were and what we needed to do. As always, it was based on trends in requirements implementation rates and defect detection and removal rates. I used these status reports not only to drive the project in the right direction but also to help "educate" skeptics about the value of managing a project objectively rather than with the less rigorous, dogma-based approaches that had not worked in the past.
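
For readers who have not built this kind of status report, here is a minimal sketch of what a trend-based projection can look like. It is purely illustrative and not the author's actual tooling: the weekly counts, the 120-requirement scope, the 20-week deadline, and the `linear_trend` helper are all invented for the example.

```python
# Illustrative sketch: fit a straight line to weekly cumulative requirements
# completed and project where the trend lands at the deadline.
# All data and thresholds below are hypothetical.

def linear_trend(ys):
    """Least-squares slope/intercept for evenly spaced observations (week 0, 1, ...)."""
    n = len(ys)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

# Hypothetical cumulative count of requirements implemented, by week.
completed_per_week = [4, 9, 15, 22, 27, 31, 34, 36]  # note the slowdown at the end
total_scope = 120
weeks_to_deadline = 20

slope, intercept = linear_trend(completed_per_week)
projected_at_deadline = slope * (weeks_to_deadline - 1) + intercept
print(f"Current pace: {slope:.1f} requirements/week")
print(f"Projected at deadline: {projected_at_deadline:.0f} of {total_scope}")
if projected_at_deadline < total_scope:
    print("At the current pace the trend says we will be late.")
```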

As one might guess, many people were not happy with the new approach to managing that I was using and advocating. In many ways I was taking a lot of the usual management debate out of the mix by basing how we were doing on hard data rather than the traditional seat-of-the-pants approach. People who had a "seat at the table" for decisions were not needed as often, because progress, or the lack of it, was now much clearer than it had been in the past.

See more Defect Reports Are Your Best Friend

Really Know Our Data

The problem was that I had pulled the wrong data when updating the most recent status. I had been pushing the teams to pick up the pace, as they had slowed down after successfully achieving a notable milestone. I had said that if we didn't get back to the pace we had sustained before that milestone, we would probably be late. The notion that someone could predict being late this early in a project, right after a huge success (the first time we'd passed that early milestone without pushing out the date), was considered outrageous by many people. So finding what looked like a huge mistake in my data put me into a cold sweat.

As I was getting the data right, I realized that I might have just blown all my hard-won credibility and handed my critics the ammunition they needed to finally pull the momentum out of my efforts. For a moment I even wondered whether I should just ignore the error and "let it go this time," but that thought was short-lived because I also preached brutal honesty in management. My conscience wouldn't allow me to do anything other than own up to the mistake and accept that my "this you have to do now, get going" had been just plain wrong. I was doomed.

See more If Nothing Else, Honesty Is Just More Efficient

I cranked through the new numbers several times checking that I finally had the right data set and the right time periods. The numbers I got were different … but only by a very small amount. My conclusions were unchanged!

I breathed a huge sigh of relief. I did publish a corrected update, but the resulting charts and trends were indistinguishable from what I had originally published and had called for immediate action upon (which had caused a lot of grumbling and some skepticism).
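
To make concrete what that re-check amounted to, here is a small, purely illustrative sketch: recompute the same headline figure from both the original and the corrected data and see whether the verdict flips. The weekly figures and the "required pace" threshold are invented, not taken from the project described above.

```python
# Illustrative sanity check: does correcting a bad data pull change the conclusion?
# All figures are hypothetical.

original  = [12, 11, 9, 8, 7, 7]   # defects removed per week, wrong data pull
corrected = [12, 10, 9, 8, 8, 7]   # same weeks, correct data pull
required_pace = 10                  # pace needed to clear the backlog in time

def verdict(series):
    pace = sum(series) / len(series)
    return pace, ("on track" if pace >= required_pace else "falling behind")

for label, series in [("original", original), ("corrected", corrected)]:
    pace, call = verdict(series)
    print(f"{label:9s}: {pace:.1f}/week -> {call}")
# If both runs give the same call, the correction changes the numbers, not the conclusion.
```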

Manage Based Upon Observations Of Real Events

Over the years, I've done this far too many times: found something wrong with how I produced my numbers. Yet in not a single case has it changed the conclusion. I can also tell you that in many cases where people argued for an opposing set of actions based on other data, looking closely at their data very often led to completely different conclusions. So, how did I get so lucky?

See Almost A Great Test Organization

In every case where I generate trends and charts, what I'm doing is seeing what is going on in the organization, team, or project and then looking for the numbers that characterize that status. Note that I first see something, something I may have observed for many months (or found across many projects), and then I look for a way to characterize what I see.

One of my most successful examples came from hearing the experts say we would fix the key defects by the end of the week, while we kept seeing that it never happened, yet we always based our management promises on those expert statements. Once I found a way to accurately measure the claim, I showed that what we typically called a three-day effort averaged no shorter than seven days, and during different periods of the project averaged ten to even twenty days. The key was that I was describing something I was observing.
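
A measurement like that can be quite simple once the dates are recorded. The sketch below is one hedged way to do it, not the author's actual method: the phases, dates, and the three-day claim are all invented for illustration.

```python
# Illustrative sketch: compare how long defect fixes were claimed to take
# against how long they actually took, grouped by project phase.
from datetime import date

CLAIMED_DAYS = 3  # "we'll have it fixed by the end of the week"

# (phase, date the fix was promised, date the fix was actually verified closed)
fixes = [
    ("integration", date(2012, 3, 1),  date(2012, 3, 9)),
    ("integration", date(2012, 3, 5),  date(2012, 3, 12)),
    ("system test", date(2012, 5, 2),  date(2012, 5, 16)),
    ("system test", date(2012, 5, 10), date(2012, 5, 30)),
]

by_phase = {}
for phase, promised, closed in fixes:
    by_phase.setdefault(phase, []).append((closed - promised).days)

for phase, durations in by_phase.items():
    avg = sum(durations) / len(durations)
    print(f"{phase:12s}: claimed {CLAIMED_DAYS} days, actual average {avg:.1f} days")
```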

See Why We Don’t Really Need All Those Experts

Drill Down Into The Data When Conflicts Arise

In one case, a test director reported that the software my team had released was the worst-quality release we had ever had. Since it was our software and I had managed it, I was pretty sure that could not be the case, but I diplomatically asked a series of questions to understand the data he had used. The final conclusion, using the same data, was that this was not only not the worst release, it was in fact the best-quality release we'd ever had, which matched our understanding and which the COO ultimately reported in her quarterly report. So what happened in this case?

To put it simply, the test director fell into the all-too-typical trap of having a conclusion he wanted to find and then cranking numbers until something came out that matched it. He stopped at that point and gleefully blasted out a conclusion that he later had to retract, one so far off from reality that his credibility was damaged for a long time. It didn't help him that the customers thanked us for finally releasing good-quality software on time.
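
One common way the same defect data can support opposite conclusions is comparing raw defect counts without normalizing for how much was delivered or tested. The sketch below illustrates that general trap only; it is not a reconstruction of the analysis in this story, and all of the numbers are made up.

```python
# Illustrative only: raw defect counts versus defects per unit of delivered work.

releases = {
    #             (defects found, features delivered)  -- hypothetical figures
    "release 1": (40, 10),
    "release 2": (55, 11),
    "release 3": (70, 25),  # most raw defects, but far more delivered
}

print(f"{'release':10s} {'defects':>8s} {'per feature':>12s}")
for name, (defects, features) in releases.items():
    print(f"{name:10s} {defects:8d} {defects / features:12.1f}")
# Ranked by raw count, release 3 looks worst; per feature delivered, it is the best.
```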

See also Scope Creep Should Be Mandatory

The key to my long-term "luck," I believe, is that I always tried to measure something I could already see to some extent. Too many "bad" PowerPoint charts come from folks just cranking numbers until they see an interesting result. Since those results are not anchored in having actually observed what is going on, or in having actually worked in the environment, they don't represent any reality in that environment. I would go so far as to say that the vast majority of PowerPoint slides I've seen with charts and numbers on them are not well anchored in reality. You can often tell because the exciting chart is never mentioned or used again, nor is its data ever used in daily or strategic decision making.

I survived my brushes with messing up because I knew the data and trends well enough that, as long as my data seemed to match what I was seeing, I was going to be OK. If I had made a huge mistake, and I did often enough, the results would clearly stand out as not being right, as not matching the reality I was managing in, so I would quickly notice, find what I had done wrong, and fix it. These errors, while annoying and scary, still produced results that matched what I knew, so the chance of a large error ever getting past me was pretty remote.

See Living With Our Data

Knowing our data and our team's performance is the best answer for blunder-proofing our work. Sure, we might make a mistake on occasion, but our chances of making a huge mistake, of coming to completely the wrong conclusion, are almost non-existent, because we are not looking for new insight through data mining alone; we are trying to clearly and objectively describe what we are seeing, every day, in the management of our project.

Finally see First Have An Idea Then Mine Your Data

How do you help ensure that your management data and conclusions always reflect reality and therefore hold up over time?
