Lean Metrics…again

I think I’ve mentioned any number of times that whenever I begin with a new client, I always start by selecting a set of metrics. (For more about that, check my post here.) I recently ran across a new(ish) blog by James Womack in which he addresses the issue of lean metrics. In one post, he starts with this imaginary (I suppose) illustration:

It’s nearing the end of the second shift and the plant manager of a large factory is standing in the shipping lot. He’s making sure trucks are in place to get all of the cargo loaded that needs to go out tonight. Is this a good example of an earnest leader who walks his talk to meet customer demand without fail? Well, not exactly.

To grasp the situation, you need one more fact: it’s the last day of the quarter. The cargo must get through the gate by midnight to meet the plant’s sales target (the manager’s key metric), no matter the condition or destination of the goods. Otherwise the manager gets a reprimand from headquarters, no bonus, or worse.

That last sentence gets at why developing metrics is, at times, more difficult than it should be.

I was with the leadership of a new client this week, and we were reviewing some metrics they were developing to help them track the progress of their lean initiative. One of the measures was an “actual labor hours vs. estimated labor hours” sort of thing. That sort of metric can have shortcomings (e.g., perhaps the estimated labor hours have no basis in reality), but it’s not a bad place to start, given that the price the customer is paying is probably based, at least in part, on the estimated labor hours. In this case, one of the managers went on at length about why the data would be very difficult to gather and about the resistance it would prompt from the workforce.
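
To make that measure concrete, here’s a minimal sketch of the calculation. The job names and hours are hypothetical; a real version would pull the actuals from the shop’s job-costing or time-tracking data.

```python
# Minimal sketch of an "actual vs. estimated labor hours" measure.
# The job names and hours below are hypothetical, for illustration only.
jobs = [
    {"job": "A-101", "estimated_hours": 40.0, "actual_hours": 52.5},
    {"job": "A-102", "estimated_hours": 25.0, "actual_hours": 24.0},
]

for job in jobs:
    # A ratio above 1.0 means the job took more labor than was estimated
    # (and, likely, more than the customer's price assumed).
    ratio = job["actual_hours"] / job["estimated_hours"]
    print(f"{job['job']}: actual/estimated = {ratio:.2f}")
```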

I’ve been through this discussion any number of times.  Womack nails the source of the resistance in his example…traditionally, performance measures meant nothing good for the folks whose performance was being measured.

I had a client a few years ago.  We were collaborating on an “equipment availability” metric.  It was, more or less, an equipment uptime measure.  Part of the development process involved deciding just what “not available” meant.  If a press had a problem for five minutes before it was fixed, was that “not available” time?  How about fifteen minutes?  Or thirty?  Leadership landed on sixty minutes as the cutoff…less than sixty minutes didn’t count as “not available,” while more than sixty minutes counted as an hour of “not available.”   The maintenance and tool-and-die managers had a fit.  “You mean to tell us that if we have a press or die problem that’s fixed in 61 minutes, it counts against us, but if it’s fixed in 59 minutes, it doesn’t?”  No matter how many different ways I went at it, these two managers couldn’t be dissuaded from the notion that measures counted either for you or against you, and that the only reason for gathering such data was to call them on the carpet when the metric fell below standard.
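
For illustration, here’s a minimal sketch of that cutoff rule as described above, with hypothetical downtime durations. The 59-versus-61-minute discontinuity the managers objected to falls straight out of the threshold.

```python
# Minimal sketch of the sixty-minute cutoff rule described above: downtime
# events shorter than the cutoff don't count at all, and anything longer
# counts as an hour of "not available" time. Durations are hypothetical.
CUTOFF_MINUTES = 60

downtime_minutes = [5, 15, 59, 61, 180]  # how long each press/die problem lasted

not_available_hours = sum(1 for m in downtime_minutes if m > CUTOFF_MINUTES)
print(not_available_hours)  # 2 -- the 59-minute event is ignored; the 61-minute one counts
```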

Sadly, most organizations do, in fact, have a “fix the blame” culture around the gathering and use of performance metrics.  In other words, performance measures are used to find fault, scold, warn, and, too often, denigrate and embarrass, rather than to motivate or to support problem solving and decision making.

Here’s another story, one that illustrates how performance measures can and should be used.

I used to work for Stouffer Hotels.  The company had a “best in class” guest satisfaction measurement process.  Every month, each hotel would get lots of data, pulled from guest comment cards (which Stouffer was very energetic about getting guests to fill out), on just about every aspect of a guest’s stay.  As part of our Total Quality Management initiative, we started making these data directly available to supervisors and employees.  (When I first proposed making the data so widely available, some department managers resisted, saying, “But if the numbers aren’t good, employee morale might suffer.”  My response: “If they feel bad enough about the numbers to figure out how to improve them, isn’t that what we want?”)  After a month or two, the housekeeping supervisor started stopping by my office around the time of the month the satisfaction report was due to arrive.  When it came in, I made a copy and handed it directly to the supervisor, who ran back to the department and got the housekeepers together to go over the numbers.  When the numbers were good, there were high fives all around.  When the numbers fell, there was discussion of the causes of the regression and plans to correct it.

Over the next year, the hotel I worked at went from near the bottom of the 41-hotel pack to about the middle…up almost 20 spots.

Deming often talked and wrote about removing the culture of fear.  He knew that fear got in the way of using performance data effectively.