Month: October 2013

Deming and the 95%

A few people have commented on Twitter about Deming’s contention that 95% of what you see is down to the system and only 5% is down to the individual. They keep asking where he got the 95% from.

He used to run a course, which is still available as a book: Four Days with Dr. Deming. At the very beginning of the course he invited the participants to play a game with red and white beads. There’s a YouTube video describing it here. The workers pick up beads with paddles and are rewarded for picking up white beads and punished for picking up red ones, even ‘fired’.

When you look at the numbers for the game it turns out that the workers have no control over the physical process of selecting the beads. In one trial someone may do really well; in another they may do badly and get ‘fired’.
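A minimal sketch of the game in Python makes the point. The worker names, barrel mix, and paddle size here are illustrative, not the exact numbers Deming used:

```python
import random

random.seed(1)  # fixed seed so the runs are repeatable

# One shared barrel: 20% red beads, 80% white (illustrative proportions)
barrel = ["red"] * 800 + ["white"] * 3200
PADDLE = 50  # beads scooped in one dip

def dip():
    """One worker's scoop: a random sample from the same shared barrel."""
    return sum(1 for bead in random.sample(barrel, PADDLE) if bead == "red")

# Every worker uses the same barrel and the same paddle, so the spread
# in red counts is pure system variation, not skill.
results = {worker: [dip() for _ in range(4)]
           for worker in ["Ann", "Bob", "Cas", "Dee"]}
for worker, reds in results.items():
    print(worker, reds)
```

Run it a few times with different seeds and the ‘best’ and ‘worst’ workers swap places, which is exactly what happens in the room.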

In statistics we use confidence limits. When we take a new measurement we can perform some analysis to say how likely it is that it falls inside the existing set of measurements. Typically we use the 95% confidence limit: there is a one in twenty chance that the measurement doesn’t fit what we were expecting. If it falls inside then we retain what’s known as the null hypothesis, that there isn’t any noticeable effect for that particular measurement.
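As a rough sketch of that check, assuming a normal approximation and some made-up daily red-bead counts:

```python
import statistics

# Hypothetical daily red-bead counts for the whole team (made-up data)
counts = [9, 11, 13, 8, 12, 10, 14, 7, 10, 12, 9, 11]

mean = statistics.mean(counts)
sd = statistics.pstdev(counts)

# Approximate 95% limits: mean plus or minus 1.96 standard deviations
lower, upper = mean - 1.96 * sd, mean + 1.96 * sd

def within_limits(x):
    """True means the null hypothesis stands: x looks like the system."""
    return lower <= x <= upper

print(f"limits: {lower:.1f} .. {upper:.1f}")
print(within_limits(14))  # a 'bad' day, but still inside the limits: True
print(within_limits(17))  # far enough out to suggest a real effect: False
```

Anyone whose count stays inside those limits is statistically indistinguishable from the system’s own noise.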

Deming’s point was that in most processes any effect an individual may have is swamped by the system they are part of; in fact the variability they cause is just part of the system overall. So if measurements are falling within the confidence limits you can’t point to one person over another and say they’re better or worse.

So changing things, for better or worse, means you have to change the entire system. It’s an iron rule: the system always wins. No amount of arbitrary targets, shouting, or wishing can change this. I cover this in more detail in the video talks on my consultancy website.

A life without the standards police

In any complicated endeavour there are things that can leave you stuck, working on stuff that just isn’t important. When working with teams it’s pretty important to ensure that each team takes a similar approach to things like how they realise architecture, how they lay out applications and code, which of the excellent JavaScript libraries they use, and so on. Otherwise you start to create silos, and there is waste when you try to manage the flow of work across teams.

One way of doing this is to have some kind of gatekeeper who inspects everything and makes sure it complies. This is the standard backwards command-and-control way of assuring quality: do the inspection after the point where any intervention might have resolved the problem. It also guarantees a bottleneck. Of course, you can put the oversight in front of the work instead; that just means the delay is upstream of any value rather than downstream.

Instead, there’s an old trick: swat teams. These are teams made up of experienced individuals who know each other well and work together. Their job is to start new projects, but not to finish them. They also work with teams that are stuck or lagging, to help them get past whatever problems they have.

When the swat team hands off to the less experienced team that will finish the job, there is already a body of knowledge built into the project: how we approach things, what the architecture is, and so on. Of course, this requires pairing and patience. But it means that work gets started well, in a standard way, with all of the levers for continuous improvement in place, by people who know how to do this. Then it’s picked up by others who learn what the standard approach is, without a gatekeeper and a lot of bureaucracy.

This is a team version of staff liquidity, which is discussed in the Commitment book. Staff liquidity uses experts’ knowledge and skills in such a way as to make sure there is enough slack to deal with emergencies and still get the work done.