Deconstructing Deming VI – Institute training on the job

6. Institute training on the job.

Point 6 of Deming’s 14 Points. I think it was this point that made me realise that everybody projects their own anxieties onto Deming’s writings and finds what they want to find there.

Deming elaborates this point further in Out of the Crisis and several distinct positions emerge. I identify nine. In many ways, the slogan Institute training on the job is not a very good description of what Deming was seeking to communicate. Not everything sits well under this heading.

“Training”, along with its sagacious uncle, “education”, is one of those things that everyone can be in favour of. The systems by which the accumulated knowledge of humanity is communicated, criticised and developed are the foundations of civilisation. But, like all accepted truths, it repays some scrutiny. Here are the nine topics I identified in Out of the Crisis.

1. People don’t spend enough on training because the benefits do not show on the balance sheet

This was one of Deming’s targets behind his sixth point. It reiterates a common theme of his. It goes back to the criticisms of Hayes and Abernathy that managers were incapable of understanding their own business. Without such understanding, a manager would lack a narrative to envision the future material rewards of current spending. Cash movements showed on the profit and loss account. The spending became merely an overhead to be attacked so as to enhance the current picture of performance projected by the accounts, the visible figures.

I have considered Hayes and Abernathy’s analysis elsewhere. Whatever the conditions of the early 1980s in the US, I think today’s global marketplace is a very different arena. Organisations vie to invest in their people, as this recent Forbes article shows (though the author can’t spell “bellwether”). True, the article confirms that development spending falls in a recession but cash flow and the availability of working capital are real constraints on a business and have to be managed. Once optimism returns, training spend takes off.

But as US satirist P J O’Rourke observed:

Getting people to give vast amounts of money when there’s no firm idea what that money will do is like throwing maidens down a well. It’s an appeal to magic. And the results are likely to be as stupid and disappointing as the results of magic usually are.

The tragedy of so many corporations is that training budgets are set and value measured on how much money is spent, in the idealistic but sentimental belief that training is an inherent good and that rewards will inevitably flow to those who have faith.

The reality is that it is only within a system of rigorous goal deployment that local training objectives can be identified so as to serve corporate strategy. Only then can training be designed to serve those objectives and only then can training’s value be measured.

2. Root Cause Analysis

The other arena in which the word “training” is guaranteed to turn up is during Root Cause Analysis. It is a moral certainty that somebody will volunteer it somewhere on the Ishikawa diagram. “To stop this happening again, let’s repeat the training.”

Yet, failure of training can never be the root cause of a problem or defect. Such an assertion yields too readily to the question: why did lack of training cause the failure? The question exposes that there was something the training was supposed to do. It could be that the root cause is readily identified and training put in place as a solution. But the question could equally expose that, whatever the perceived past failures in training, the root cause that the training would purportedly have addressed remains obscure. Forget worrying about training until the root cause is identified within the system.

In any event, training will seldom be the best way of eliminating a problem. Redesign of the system will always be the first thing to consider.

3. Train managers and new employees

Uncontroversial but I think Deming overstated businesses’ failure to appreciate this.

4. Managers need to understand the company

Uncontroversial but I think Deming overstated businesses’ failure to appreciate this.

5. Managers need to understand variation

So much of Deming’s approach was about rigorous criticism of business data and the diligent separation of signal and noise. Those are topics that certainly have greater salience than a quarter of a century ago. Nate Silver has done much to awaken appetites for statistical thinking and the Six Sigma discipline has alerted the many to the wealth of available tools and techniques. Despite that, I am unpersuaded that genuine statistical literacy and numeracy (both are important) are any more common now than in the days of the first IBM PC.
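That separation of signal from noise can be made concrete. The sketch below, with invented figures, computes the natural process limits of a Shewhart individuals (XmR) chart; points outside the limits are signals, everything else is noise:

```python
# Illustrative sketch only: an XmR (individuals and moving range) chart
# calculation in the spirit of Shewhart, using made-up figures.
def xmr_limits(data):
    """Return (mean, lower limit, upper limit) for an individuals chart.

    Natural process limits are mean +/- 2.66 * average moving range,
    where 2.66 is the standard XmR chart constant (3 / d2, d2 = 1.128).
    """
    moving_ranges = [abs(b - a) for a, b in zip(data, data[1:])]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    mean = sum(data) / len(data)
    return mean, mean - 2.66 * mr_bar, mean + 2.66 * mr_bar

def signals(data):
    """Points outside the natural process limits are signals; the rest is noise."""
    _, lo, hi = xmr_limits(data)
    return [x for x in data if x < lo or x > hi]

observations = [52, 47, 53, 49, 51, 48, 50, 54, 46, 75]
print(signals(observations))
```

On these invented figures only the final value, 75, falls outside the limits; the rest is routine variation demanding no explanation.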

Deming’s banner headline here is Institute training on the job. I think the point sits uncomfortably. I would have imagined that it is business schools and not employers who should apply their energies to developing and promoting quantitative skills in executives. One of the distractions that has beset industrial statistics is its propensity to create a variety of vernacular approaches with conflicting vocabularies and competing champion priorities: Taguchi methods, Six Sigma, SPC, Shainin, … . The situation is aggravated by the differential enthusiasms between corporations for the individual brands. Even within a single strand such as Six Sigma there is a frustrating variety of nomenclature, content and emphasis.

It’s not training on the job that’s needed. It is the academic industry here that is failing to provide what business needs.

6. Recognise that people learn in different ways

Of this I remain unpersuaded. I do not believe that people learn to drive motor cars in different ways. It can’t be done from theory alone. It can’t be done by writing a song about it. It comes from a subtle interaction of experience and direction. Some people learn without the direction, perhaps because they watch Nelly (see below).

Many have found a resonance between Deming’s point and the Theory of Multiple Intelligences. I fear this has distracted from some of the important themes in business education. As far as I can see, the theory has no real empirical support. Professor John White of the University of London, Institute of Education has firmly debunked the idea (Howard Gardner : the myth of Multiple Intelligences).

7. Don’t rely on “watch Nelly”

After my academic and vocational training as a lawyer, I followed a senior barrister around for six months, then slightly less closely for another six months. I also went to court and sat behind barristers in their first few years of practice so that I could smell what I would be doing a few months later.

It was important. So was the academic study and so was the classroom vocational training. It comes back to understanding how the training is supposed to achieve its objectives and designing learning from that standpoint.

8. Be inflexible as to work standards

This is tremendously dangerous advice for anybody lacking statistical literacy and numeracy (both).

I will come back to this but it embraces some of my earlier postings on process discipline.

9. Teach customer needs

This is the gem. Employee engagement is a popular concern. Employees who have no sight of how their job impacts the customer, who pays their wages, will soon see the process discipline that is essential to operational excellence as arbitrary and vexatious. Their mindfulness and diligence cannot but be affected by the expectation that they can operate in a cognitive vacuum.

Walter Shewhart famously observed that Data have no meaning apart from their context. By extension, continual re-orientation to the Voice of the Customer gives meaning to structure, process and procedure on the shop floor; it resolves ambiguity as to method in favour of the end-user; it fosters extrinsic, rather than intrinsic, motivation; and it sets the external standard by which conduct and alignment to the business will be judged and governed.

Power cables and ejector seats – two tales of failed risk management

The last week has seen findings in two inquests in England that point, I think, to failures in engineering risk management. The first concerns the tragic death of Flight Lieutenant Sean Cunningham. Flight Lieutenant Cunningham was killed by the spontaneous and faulty operation of an ejector seat on his Hawk T1 (this report from the BBC has some useful illustrations).

One particular cause of Flight Lieutenant Cunningham’s death was the failure of the ejector seat parachute to deploy. This was because a single nut and bolt had been over-tightened. It appears that this risk of over-tightening had been known to the manufacturer, the news report says, for some 20 years.

Single-point failure modes such as this, where one thing going wrong can cause disaster, present particular hazards. Usual practice is to take particular care to ensure that they are designed conservatively, that integrity is robust against special causes, and that manufacture and installation are controlled and predictable. It surprises me that a manufacturer of safety equipment would permit such a hazard, where danger of death could arise from human error in over-tightening the nut or from simple mechanical problems in the nut and bolt themselves. It is again surprising that the failure mode could not have been designed out. I suspect that we have insufficient information from the BBC. It does seem that the mechanical risk was compounded by the manufacturer’s failure even to warn the RAF of the danger.

Single point failure modes need to be addressed with care, even where institutional and economic considerations obstruct redesign. It is important to realise that human error is never the root cause of any failure. Humans make errors. Systems need to be designed so that they are robust against human frailty and bounded rationality.

The second case, equally tragic, was that of Dr James Kew. Dr Kew was out running in a field when he was electrocuted by a “low hanging” 11kV power line. When I originally read this I had thought that it was an example of a high impedance fault. Such faults happen where, for example, a power line drops into a tree. Because of the comparatively high electrical impedance of the tree there is insufficient current to activate the circuit breaker and the cable remains dangerously live. Again there is not quite enough information to work out exactly what happened in Dr Kew’s case. However, it appears that the power cable was hanging down in some way rather than having fallen into some other structure.
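For illustration only, Ohm’s law shows why such a fault can evade protection. The figures below are invented and bear no relation to Dr Kew’s case:

```python
# Rough, illustrative numbers only: why a high impedance fault can fail
# to trip a circuit breaker. The voltage, trip setting and impedances
# are assumptions for the sake of the arithmetic.
LINE_VOLTAGE = 11_000          # volts, nominal 11 kV distribution line
BREAKER_TRIP_CURRENT = 400     # amps, assumed overcurrent setting

def fault_current(voltage, fault_impedance):
    """Ohm's law estimate of fault current for a given path impedance."""
    return voltage / fault_impedance

# A bolted (low impedance) fault versus a line resting in a tree.
for label, impedance in [("bolted fault, ~1 ohm", 1.0),
                         ("tree contact, ~2000 ohms", 2000.0)]:
    i = fault_current(LINE_VOLTAGE, impedance)
    print(f"{label}: {i:.0f} A -> breaker trips: {i >= BREAKER_TRIP_CURRENT}")
```

On these assumed numbers the tree contact draws only a few amps, far below the trip setting, so the line stays live.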

Again, mechanical failure of a power line that does not activate the circuit breaker is a well-anticipated failure mode. It is one that can present a serious hazard to the public but is not particularly easy to eliminate. It certainly seems here that the power company changed its procedures after Dr Kew’s death. There was more they could have done beforehand.

Both tragic deaths illustrate the importance of keeping risk assessments under review and critically re-evaluating them, even in the absence of actual failures. Engineers usually know where their arguments and rationales are thinnest. Just because we decided something was OK in the past does not mean we were right: we may simply have been lucky. New people joining the team present a particular opportunity to challenge orthodoxy and drive risk further out of the system. I wonder whether there should not be an additional column on every FMEA headed “confidence in reasoning”.
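That last suggestion could be sketched as follows. The rows, scores and field names are invented for illustration and are not drawn from either inquest:

```python
# A sketch of an FMEA table carrying an extra "confidence in reasoning"
# column, used to prioritise periodic review. All scores are invented.
from dataclasses import dataclass

@dataclass
class FmeaRow:
    failure_mode: str
    severity: int      # 1 (minor) .. 10 (catastrophic)
    occurrence: int    # 1 (rare) .. 10 (frequent)
    detection: int     # 1 (certain to detect) .. 10 (undetectable)
    confidence: int    # 1 (weak rationale) .. 10 (strong evidence)

    @property
    def rpn(self):
        """Conventional risk priority number: severity x occurrence x detection."""
        return self.severity * self.occurrence * self.detection

rows = [
    FmeaRow("parachute fails to deploy", 10, 2, 8, 3),
    FmeaRow("seat fires spontaneously", 10, 1, 9, 6),
]

# Review first where the risk is high AND the reasoning is least trusted.
for row in sorted(rows, key=lambda r: r.rpn / r.confidence, reverse=True):
    print(row.failure_mode, row.rpn, row.confidence)
```

Dividing the risk priority number by the confidence score is just one possible weighting; the point is that a thin rationale drags a failure mode up the review queue.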

Music is silver but …

The other day I came across a report on the BBC website that non-expert listeners could pick out winners of piano competitions more reliably when presented with silent performance videos than when exposed to sound alone. In the latter case they performed no better than chance.

The report was based on the work of Chia-Jung Tsay at University College London, in a paper entitled Sight over sound in the judgment of music performance.

The news report immediately leads us to suspect that the expert evaluating a musical performance is not in fact analysing and weighing auditory complexity and aesthetics but instead falling under the subliminal influence of the proxy data of the artist’s demeanour and theatrics.

That is perhaps unsurprising. We want to believe, as does the expert critic, that performance evaluation is a reflective, analytical and holistic enterprise, demanding decades of exposure to subtle shades of interpretation and developing skills of discrimination by engagement with the ascendant generation of experts. This is what Daniel Kahneman calls a System 2 task. However, a wealth of psychological study shows only too well that System 2 is easily fatigued and distracted. When we believe we are thinking in System 2, we are all too often loafing in System 1 and using simplistic learned heuristics as a substitute. It is easy to imagine that the visual proxy data might be such a heuristic, a ready reckoner that provides a plausible result in a wide variety of commonly encountered situations.

These behaviours are difficult to identify, even for the most mindful individual. Kahneman notes:

… all of us live much of our lives guided by the impressions of System 1 – and we do not know the source of these impressions. How do you know that a statement is true? If it is strongly linked by logic or association to other beliefs or preferences you hold, or comes from a source you trust and like, you will feel a sense of cognitive ease. The trouble is that there may be other causes for your feeling of ease … and you have no simple way of tracing your feelings to their source.

Thinking, Fast and Slow, p64

The problem is that what Kahneman describes is exactly what I was doing in finding my biases confirmed by this press report. I have had a superficial look at the statistics in this study and I am now less persuaded than when I read the press item. I shall perhaps blog later about the difficulties I had in interpreting the analysis. Really, this is quite a tentative and suggestive study on a very limited frame. I would certainly like to see more inter-laboratory studies in psychology. The study is open to multiple interpretations and any individual will probably have difficulty making an exhaustive list. There is always a danger of falling into the trap of What You See Is All There Is (WYSIATI).
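As a purely illustrative aside, the invented counts below having nothing to do with Tsay’s actual data, the claim that judges performed “no better than chance” is the sort of thing an exact binomial calculation can probe:

```python
# Illustrative only: an exact binomial tail probability with made-up
# numbers. The design and counts of the actual study are not used here.
from math import comb

def p_at_least(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p): chance of doing this well by luck."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Suppose judges pick the winner among three finalists (chance = 1/3)
# and get 14 of 30 trials right. How surprising is that under pure luck?
p_value = p_at_least(14, 30, 1/3)
print(round(p_value, 3))
```

A large tail probability here would mean the judges’ hit rate is unremarkable under guessing, which is roughly what “no better than chance” asserts.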

That notwithstanding, even anecdotally, the story is another reminder of an important lesson of process management: even though what we have been doing has worked in the past, we may not understand what it is that has been working.