
Why “Let’s Try This” Is Not A Metrics Plan

I found an interesting article by James Slavet entitled “Five New Management Metrics You Need To Know.”  I love metrics, but it is far too easy to come up with what I call “let’s try this” metrics that turn out to be impractical and often counterproductive. Here are a few thoughts, using James’s proposals, on what to consider when evaluating new metrics.

James’s first metric is “Flow State Percentage.”  This is where we measure interruptions to people’s work so we know how much we interfere with their concentration.  Ugh, I think: measuring this will itself interrupt the person we are trying to help concentrate.  Every time someone is interrupted, we have to remember to record that fact.

Now, I’ve used this idea, helping people stay focused, in the past, and the idea is great.  For example, I have been able to measure how many meetings exist in a day (look at the central scheduler, if we have one, or scan each individual’s calendar).  I’ve been able to measure how many e-mails get sent or received; for example, e-mail volume peaks on Tuesdays and drops through the week, with Monday higher than Friday.  Both of these measures work as pretty good proxies for interruption levels. Ideally, my goal is to find a good proxy without further interrupting the individual or creating a new measurement burden.
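As a concrete illustration of the proxy idea, here is a minimal sketch of counting scheduled meetings per person per day from a calendar export. The file name and column names are my assumptions about whatever export your scheduler provides, not anything from James’s article.

```python
# Minimal sketch: count meetings per person per day from a calendar export.
# Assumes a CSV with "person" and "date" (YYYY-MM-DD) columns; adjust the
# names to whatever your scheduler actually exports.
import csv
from collections import Counter

def meetings_per_person_day(path="calendar_export.csv"):
    counts = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            counts[(row["person"], row["date"])] += 1
    return counts

if __name__ == "__main__":
    for (person, date), n in sorted(meetings_per_person_day().items()):
        print(f"{date}  {person}: {n} meetings")
```

The same pattern works for e-mail counts per weekday if your mail system can produce a similar export.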

Find An Existing Way To Measure What We Want

The one big item I push when doing metrics is to find an existing way to measure (directly or indirectly) what we want.  Asking someone to collect new data, I’ve found, is almost always associated with a metrics collection effort that eventually fails.

There are a lot of good things we can do, but beware of asking the harried and interrupted person to be further interrupted by a new metric.  See if there is a way to measure it or improve things with what already exists.

For example, James’s idea of making meeting-free days is great.  I’ve sent people home to work.  I knew a manager who sent people to his beach house for the day (a person’s tan or sunburn was used to estimate productivity).  I’ve recommended setting one’s phone to go straight to voicemail, setting one’s e-mail reader to fetch new mail only once an hour, and turning off chat/IM for an hour at a time.

I’ve often found that there is an existing approach or indicator, and it is ultimately better and more reliable than implementing a new manual, human-driven data collection.

See more on You Probably Already Have All The Data You Need

James’s second metric is “The Anxiety-Boredom Continuum.”  This is where we are to check in on our folks periodically (so we as managers are collecting the metric) and see where they stand, from being too stressed to being bored.  OK, another interesting idea — but not quite as compelling as the first, at least to me.

I’ve actually seen bored people, but they were mostly in my government jobs.  I do admit to constantly tuning the level of work on my teams so that they are nominally pushed but not overstressed.  How they are doing I generally judge by their results.  At each staff or project meeting I make sure I give everyone a chance to talk, even if they have not volunteered to do so. While a meeting is a public forum, I can usually tell how people are doing by how they talk and what they talk about.

Also see How To Productively Stress Your Team

While it is an interesting notion to try to measure anxiety and boredom, what I would call the stress level, there are always a lot of “how about” metrics that are more noise than substance (and usually produce unreliable indicators).  Fundamentally, I’ve found that if we get the project, team or organization working well (hitting their marks and improving, delivering projects on time using realistic schedules), then that process gives folks plenty of opportunity to bring up their concerns.  So my proxy for stress would be the success of the projects and the general process of discussion and improvement.  If that is working, I’ve never seen a real problem with folks too far one way or the other toward stress or boredom.

The third metric James offers is the “Meeting Promoter Score.”  I love his comment that someone who is not authorized to buy paperclips can nonetheless call a meeting of expensive engineers and expend thousands of dollars on it.  He suggests rating each meeting on a scale of 1 to 10 and adding any ideas for improvement.

OK, I’ve seen this “rate it, record it” method used a lot, and I’ve not found rating a meeting on a scale (or rating any activity on a scale) to be a particularly successful way of measuring or improving anything.  It is too easy to think we can “fix” things by having people “rate” them.  I agree that many places have far too many meetings, but in my experience that is related to poor communication and planning, so it is a symptom, not a problem in and of itself.  We’ve gotten rid of a lot of meetings by improving communications and replacing meetings with business rules.

Compare with Meeting Madness Don’t Do It!

Again, your mileage may vary, but I’ve not seen these kinds of approaches work well.  I would start by measuring the total number of meetings, with the measure of merit being that the total goes down over time.  Track the average number of meetings per month, for example, keeping in mind that at different stages of a project we might need more or fewer meetings than average, so trend it over time.  Trending metrics over time is another key notion of using metrics, as a good metric is rarely a simple single number.
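To make the “trend it over time” idea concrete, here is a hedged sketch that rolls a calendar export up to meetings per month with a trailing average, so the direction of travel is visible rather than a single number. The file name and the “date” column are assumptions about your data, not something from the original article.

```python
# Minimal sketch: total meetings per month plus a 3-month trailing average,
# so we can see whether the total is actually trending down over time.
# Assumes a CSV export with a "date" column in YYYY-MM-DD form.
import csv
from collections import Counter

def monthly_meeting_trend(path="calendar_export.csv", window=3):
    counts = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            counts[row["date"][:7]] += 1          # bucket by YYYY-MM
    months = sorted(counts)
    for i, month in enumerate(months):
        recent = [counts[m] for m in months[max(0, i - window + 1): i + 1]]
        avg = sum(recent) / len(recent)
        print(f"{month}: {counts[month]:3d} meetings (trailing avg {avg:.1f})")

if __name__ == "__main__":
    monthly_meeting_trend()
```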

Compare with You Need Business Rules Not Meetings

The fourth metric is the “Compound Weekly Learning Rate.”  James recommends:
“So try asking your team this question: how did you get 1% better this week? Did you learn something valuable from our customers, or make a change to our product that drove better results? As your team gets into a learning rhythm, you can review this as a group. 1% per week adds up.”
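For what it is worth, the compounding arithmetic behind “1% per week adds up” is easy to check (this calculation is mine, not James’s): a steady 1% weekly improvement works out to roughly a 68% gain over a year.

```python
# The compounding arithmetic behind "1% per week adds up":
# 52 weeks of 1% weekly improvement is roughly a 1.68x (68%) annual gain.
weekly_gain = 0.01
annual_factor = (1 + weekly_gain) ** 52
print(f"{annual_factor:.2f}x after one year")   # ~1.68x
```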

Ugh again.  I’m always torn between trying to help people be successful and trying not to discourage their creative ideas.  This looks like one of the typical “gee, let’s do this, it should work!” ideas.  I am a great advocate of what James calls “relentless learning.”  I love it when I see an engineer off digging into some obscure part of the system, even when it means she is not working on the highest-priority activity.  I know these investigations are where people learn, and they spawn innovations and pockets of excellence.  They also help to solve problems that we’ll see in the future.

For more, see Supporting Your Pockets Of Excellence

I’m also convinced that the reason many efforts fail is that we didn’t understand them fully enough (but don’t take too long learning; sometimes we do just need to do it).  So the relentless learning is great, but the suggested metric implementation leaves me groaning.

Do I have a better metric solution in this case?  Nope.  Wish I did.  But I wouldn’t do this except as a way of encouraging folks to spend valuable time learning, even at some risk to project deadlines (Huh? Really?  Yes).  It doesn’t serve as a metric (how does he get the 1% a week measurement, for example?).  I’ve used staff hour reporting, a system that was already in place, to realize that I needed to “grow” more experts and that my real experts were in short supply.
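The staff-hour data lends itself to a simple roll-up. As a hypothetical sketch (the file and column names here are mine, not the reporting system I actually used), summing logged hours by skill area shows how concentrated each area is in a handful of people:

```python
# Hypothetical sketch: roll up logged staff hours by skill area to see how
# much each area depends on one person. Column names are assumptions.
import csv
from collections import defaultdict

def hours_by_skill(path="staff_hours.csv"):
    totals = defaultdict(lambda: defaultdict(float))   # skill -> person -> hours
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            totals[row["skill_area"]][row["person"]] += float(row["hours"])
    return totals

if __name__ == "__main__":
    for skill, people in hours_by_skill().items():
        top = max(people, key=people.get)
        share = people[top] / sum(people.values())
        print(f"{skill}: {len(people)} people, {top} carries {share:.0%} of the hours")
```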

See A Good Use Of Your Staff Hour Metrics

The fifth and final metric is the “Positive Feedback Ratio.”  Five positive statements for every negative one is his recommendation.  Here James doesn’t really propose a metric; he is saying to catch people doing good and praise them.  With this last one it becomes clear that for it and the first four, the use of “metric” is really just a way to highlight useful things that can be done.  Actually instrumenting and using such metrics doesn’t strike me as compelling or useful in the long run, though lots of things can be temporarily useful while enthusiasm lasts.

While these are all great ideas to pursue, couching them as metrics is something I’ve seen as a consistent problem.  These metrics simply don’t “ring true” for me.  Metrics for these kinds of things would generally be hard to implement and would distract from existing data and metrics that could probably already provide the information and the improvements we need.

As with so many things, I recommend not implementing a metric, or more to the point a measurement, unless there are multiple good reasons to do it.  I love all the ideas James presents, but using “metrics” as the delivery mechanism may just perpetuate the misunderstanding of what good metrics are and lead to the usual bad implementations, all in the name of good intentions and good ideas.

For more see Why We Shouldn’t Set Up New Metrics During An Improvement Project

Are your metrics giving you real insights or are they just fuzzy notions whose implementations generate little or no useful information?
