Deconstructing Deming I – Constancy of Purpose

My 20 December 2013 post on W Edwards Deming attracted quite a lot of interest. The response inspired me to take a detailed look at his ideas 20 years on, starting with his 14 Points.

Deming’s 14 Points for Management are his best-remembered takeaway. Deming put them forward as representative of the principles adopted by Japanese industry in its rise from 1950 to the prestigious position it held in manufacturing at the beginning of the 1980s.

Point 1. Create constancy of purpose toward improvement of product and service, with the aim to become competitive and to stay in business, and to provide jobs.

In his 1983 elaboration of the point in Out of the Crisis, Deming explained what he meant. Managing a business was not only about exploiting existing products and processes to generate a stream of profits. It was also about re-inventing those products and processes, innovating and developing to retain and capture markets. Deming feared that management focused too much on short-term profits from existing products and services, and that an effort of leadership was needed to reorient attention and resources towards design and development. The “improvement” that Deming was referring to comes through design and redesign, not simply the incremental improvement of existing value streams. Critically, Deming saw design and redesign as a key business process that should itself be the target of incremental continual improvement. Design and redesign was not an ad hoc project initiated by some rare, once-in-a-generation sea change in the market, or motivated by a startling idea from an employee. It was a routine and “constant” part of the business of a business.

Some of Deming’s latter-day followers deprecate the radical redesign of processes in approaches such as Business Process Re-engineering, promoting instead the incremental improvement of existing processes by those who work in them. That is exactly the approach that Deming was warning against in Point 1.

It is worth recalling the economic and geopolitical climate within which Deming put forward this principle. During the early 1980s, the US and Western Europe suffered a significant recession, their populations beset by the dual evils of unemployment and inflation. The economic insecurities were aggravated by social unrest in the West and the intensification of the Cold War.

In 1980 Robert Hayes and William Abernathy, academics at the Harvard Business School, attacked US management in their seminal paper Managing Our Way to Economic Decline. They found that fewer and fewer executives came from engineering and operations backgrounds, and increasingly from law and finance. Such managers, they said, had little understanding of the mechanics of the businesses they ran or the markets in which they competed. That in turn led executives to pursue short-term profits from existing value streams, which were easy to measure and predict on the visible accounts. However, managers were allegedly ill-placed to make informed decisions as to the new products or services that would determine future profits. The uncertainties of such decisions were unknown and unknowable other than to a discipline specialist. Franklin Fisher characterised matters in this way (1989, “Games economists play: a noncooperative view”, Rand Journal of Economics 20, 113):

Bright young theorists tend to think of every problem in game theoretic terms, including problems that are easier to think of in other forms.

This all appeared in contrast to Japanese manufacturing industry and in particular Toyota. By 1980, Japanese manufactured goods had come increasingly to dominate global markets. Japanese success was perceived as the (Lawrence Freedman, 2013, Strategy: A History, p531):

… triumph of a focussed, patient, coherent, consensual culture, a reflection of dedicated operational efficiency, or else a combination of the two.

Certainly in my own automotive industry days, my employer had come to see its most successful products as mere commodities. They belatedly realised that, while they had been treating those products as a simple income stream, admittedly spent largely on unsuccessful attempts to develop radical new products, Japanese competitors had been filing dozens of patents each year making incremental improvements to design and function, threatening the company’s core revenues.

But did Deming choose the right target and, in any event, does the exhortation remain cogent? It feels in 2014 as though we all have much more appetite for innovation, invention and product design than we had in 1983. Blogs extol the virtues of, and strategies for, entrepreneurship. Slogans proliferate, such as “Fail early, fail fast, fail often”. It is not clear from this web activity whether innovation is being backed by capital. However, the very rate of technological change in society suggests that capital is backing novelty rather than simply engaging in the rent-seeking that Hayes and Abernathy feared.

In 2007 Hayes reflected on his 1980 work. He felt that his views had become mainstream and uncontroversial, and been largely adopted in corporations. However, information and globalisation had created a new set of essentials to be addressed and to become part of the general competencies of a manager (“Managing Our Way… A Retrospective by Robert H. Hayes” Harvard Business Review, July-August 2007, 138-149).

I remain unpersuaded that there has been such a broadening in the skill set of managers. The game theorists, data scientists and economists seem to remain in the ascendancy. Whatever change in attitudes to design has taken place, it has happened against a background where CEOs still hop industries. There are other explanations for a lack of innovation. Daniel Ellsberg’s principle of ambiguity aversion predicts that quantifiable risks apparent from the visible accounts will tend to be preferred over ambiguous returns on future inventions, even by subject matter experts. Prevailing comparative advantages may point some corporations away from research. Further, capital flows were particularly difficult in the early 1980s recession. Liberalisation of markets and the rolling back of the state in the 1980s led to more efficient allocation of capital and coincided with a palpable increase in the volume, variety and quality of consumer goods available in the West. There is no guarantee against a failure of strategy. My automotive employer hadn’t missed the importance of new product development, but they made a strategic mistake in allocating resources.
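Ellsberg’s point can be made concrete with his classic urn experiment. The following is a minimal sketch in Python; the urn, the payoffs and the uniform prior are textbook illustrative assumptions of mine, not anything from Ellsberg’s own data or from this post:

```python
# Ellsberg's urn: 30 red balls, plus 60 balls that are black or yellow in an
# unknown proportion. Bet A pays 100 on red (a quantifiable risk); Bet B pays
# 100 on black (an ambiguous one). All numbers here are for illustration.

def expected_payoff(p_win: float, payoff: float = 100.0) -> float:
    """Expected value of a bet paying `payoff` with probability `p_win`."""
    return p_win * payoff

p_red = 30 / 90                # known exactly
p_black = (60 / 90) * 0.5      # uniform prior over the unknown black/yellow split

ev_red = expected_payoff(p_red)
ev_black = expected_payoff(p_black)

# The two bets have identical expected values...
assert abs(ev_red - ev_black) < 1e-9
# ...yet most people prefer the bet on red: ambiguity itself is penalised,
# just as quantifiable returns in the visible accounts tend to be preferred
# over the ambiguous returns of future inventions.
print(f"EV(red) = {ev_red:.2f}, EV(black) = {ev_black:.2f}")
```

The asymmetry is not in the arithmetic, which is identical, but in how decision makers weight the unknown split.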

Further, psychologist Daniel Kahneman found evidence for a countervailing undue optimism about future business, referring to “entrepreneurial delusions” and “competition neglect”, two aspects of “what you see is all there is” (Thinking, Fast and Slow, 2011, Chapter 24).

In Notes from Toyota-Land: An American Engineer in Japan (2005), Darius Mehri, with a foreword by Robert Perrucci, criticised Toyota’s approach to business. Ironically, Mehri contended that Toyota performed weakly in innovation and encouraged narrow professional skills. It turned out that Japanese management didn’t prevent a collapse in the economy lasting from 1991 to the present. Toyota itself went on to suffer serious reputational damage (Robert E. Cole, “What Really Happened to Toyota?”, MIT Sloan Management Review, Summer 2011).

So Deming and others were right to draw attention to Western underperformance in product design. However, I suspect that the adoption of a more design-led culture is largely due to macroeconomic forces rather than exhortations.

There is still much to learn, however, in balancing the opportunities apparent from visible accounts with the uncertainties of imagined future income streams.

I think there remains an important message, perhaps a Point 1 for the 21st Century.

There’s a problem bigger than the one you’re working on. Don’t ignore it!

M5 “fireworks crash” – risk identification and reputation management

UK readers will recall this tragic accident in November 2011, when 51 people were injured and seven killed on a fog-bound motorway.

What marked out the accident from a typical collision in fog was the suggestion that the environmental conditions had been exacerbated by smoke that had drifted onto the motorway from a fireworks display at nearby Taunton Rugby Club.

This suggestion excited a lot of press comment. Geoffrey Counsell, the fireworks professional who had been contracted to organise the event, was subsequently charged with manslaughter. The prosecution alleged that he had fallen so far below the standard of care he purportedly owed to the motorway traffic that a reasonable person would think a criminal sanction appropriate.

It is very difficult to pick out from the press exactly how this whole prosecution unravelled. Firstly, the prosecutors resiled from the manslaughter charge, a most serious matter that in the UK can attract a life sentence. They substituted a charge under section 3(2) of the Health and Safety at Work etc. Act 1974, that Mr Counsell had failed “to conduct his undertaking in such a way as to ensure, so far as is reasonably practicable, that … other persons (not being his employees) who may be affected thereby are not thereby exposed to risks to their health or safety.”

There has been much commentary from judges and others on the meaning of “reasonably practicable” but suffice it to say, for the purposes of this blog, that a self-employed person is required to make substantial efforts to protect the public. That said, the section 3 offence carries a maximum sentence of no more than two years’ imprisonment.

The trial on the section 3(2) indictment opened on 18 November 2013. “Serious weaknesses” in the planning of the event were alleged. There were vague press reports about Mr Counsell’s risk assessment, but they were insufficient for me to form any exact view. It does seem that he had not considered smoke drifting onto the motorway and interacting with fog to create an especial hazard to drivers.

A more worrying feature of the prosecution was the press suggestion that an expert meteorologist had based his opinion on a biased selection of witness statements that he had been provided with and which described which way the smoke from the fireworks display had been drifting. I only have the journalistic account of the trial but it looks far from certain that the smoke did in fact drift towards the motorway.

In any event, on 10 December 2013, following the close of the prosecution evidence, the judge directed the jury to acquit Mr Counsell. The prosecutors had brought forward insufficient evidence against Mr Counsell for a jury reasonably to return a conviction, even without any evidence in his defence.

An individual, no matter how expert, is at a serious disadvantage in identifying novel risks. An individual’s bounded rationality will always limit the futures he can conjure and the weight that he gives to them. To be fair to Mr Counsell, he says that he did seek input from the Highways Agency, Taunton Deane Borough Council and Avon and Somerset Police but he says that they did not respond. If that is the case, I am sure that those public bodies will now reflect on how they could have assisted Mr Counsell’s risk assessment the better to protect the motorists and, in fact, Mr Counsell. The judge’s finding, that this was an accident that Mr Counsell could not reasonably have foreseen, feels like a just decision.

Against that, hypothetically, had the fireworks been set off by a household-name corporation, they would rightly have felt ashamed at not having anticipated the risk and taken any necessary steps to protect the motorway drivers. There would have been reputational damage. A sufficient risk assessment would have provided the basis for investigating whether the smoke was in fact a cause of the accident and, where appropriate, advancing a robust and persuasive rebuttal of blame.

That is the power of risk assessment. Not only is it a critical foundational element of organisational management, it provides a powerful tool in managing reputation and litigation risk. Unfortunately, unless there is a critical mass of expertise dedicated to risk identification, a risk assessment is more likely to provide a predatory regulator with evidence of slipshod practice. Its absence is, of course, damning.

As a matter of good business and efficient leadership, the Highways Agency, Taunton Deane Borough Council, and Avon and Somerset Police ought to have taken Mr Counsell’s risk assessment seriously if they were aware of it. They would surely have known that they were in a better position than Mr Counsell to assess risks to motorists. Fireworks displays are tightly regulated in the UK yet all such regulation has failed to protect the public in this case. Again, I think that the regulators might look to their own role.

Organisations must be aware of external risks. Where they are not engaged with the external assessment of such risks they are really in an oppositional situation that must be managed accordingly. Where they are engaged the external assessments must become integrated into their own risk strategy.

It feels as though Mr Counsell has been unjustly singled out in this tragic matter. There was a rush to blame somebody, and I suspect that an availability heuristic was at work. Mr Counsell attracted attention because the alleged causation of the accident seemed so exotic and unusual: the very grounds on which the court held him blameless.

Trust in bureaucracy I – the Milgram experiments

I have recently been reading Gina Perry’s book Behind the Shock Machine which analyses, criticises and re-assesses the “obedience” experiments of psychologist Stanley Milgram performed in the early 1960s. For the uninitiated there is a brief description of the experiments on Dr Perry’s website. You can find a video of the experiments here.

The experiments have often been cited as evidence for a constitutional human bias towards compliance in the face of authority. From that interpretation has grown a doctrine that the atrocities of war and of despotism are enabled by the common man’s (sic) unresisting obedience to even a nominal superior, and further that inherent cruelty is eager to express itself under the pretext of an order.

Perry mounts a detailed challenge to the simplicity of that view. In particular, she reveals how Milgram piloted his experiments and fine-tuned them so that they would produce the most signal obedient behaviour. The experiments took place within the context of academic research. The experimenter did everything to hold himself out as the representative of an overwhelmingly persuasive body of scientific knowledge. At every stage the experimenter reassured the subjects and urged them to proceed. Given this real pressure applied to the experimental subjects, even a 65% compliance rate was hardly dramatic. Most interestingly, the actual reaction of the subjects to their experience was complex and ambiguous. It was far from the conventional view of the cathartic release of suppressed violence facilitated by a directive from a figure with a superficial authority. Erich Fromm made some similar points about the experiments in his 1973 book The Anatomy of Human Destructiveness.

What interests me about the whole affair is its relevance to an issue which I have raised before on this blog: trust in bureaucracy. Max Weber was one of the first sociologists to describe how modern societies and organisations rely on a bureaucracy, an administrative policy-making group, to maintain the operation of complex dynamic systems. Studies of engineering and science as bureaucratic professions include Diane Vaughan’s The Challenger Launch Decision.

The majority of Milgram’s subjects certainly trusted the bureaucracy represented by the experimenter, even in the face of their own fears that they were doing harm. This is a stark contrast to some failures of such trust that I have blogged about here. By their mistrust, the cyclist on the railway crossing and the parents who rejected the MMR vaccination placed themselves and others in genuine peril. These were people who had, as far as I have been able to discover, no compelling evidence that the engineers who designed the railway crossing or the scientists who had tested the MMR vaccine might act against their best interests.

So we have a paradox. The majority of Milgram’s subjects ignored their own compelling fears and trusted authority. The cyclist and the parents recklessly ignored or actively mistrusted authority without a well developed alternative world view. Whatever our discomfort with Milgram’s demonstrations of obedience we feel no happier with the cyclist’s and parents’ disobedience. Prof Jerry M Burger partially repeated Milgram’s experiments in 2007. He is quoted by Perry as saying:

It’s not as clear cut as it seems from the outside. When you’re in that situation, wondering, should I continue or should I not, there are reasons to do both. What you do have is an expert in the room who knows all about this study and presumably has been through this many times before with many participants, and he’s telling you, there’s nothing wrong. The reasonable, rational thing to do is to listen to the guy who’s the expert when you’re not sure what to do.

Organisations depend on a workforce aligned around trust in that organisation’s policy and decision making machinery. Even in the least hierarchical of organisations, not everybody gets involved in every decision. Whether it’s the decision of a co-worker with an exotic expertise or the policy of a superior in the hierarchy, compliance and process discipline will succeed or fail on the basis of trust.

The “trust” that Milgram’s subjects showed towards the experimenter was manufactured, and Perry discusses how close the experiments came to breaching acceptable ethical standards.

Organisations cannot rely on such manufactured “trust”. Breakdown of trust among employees is a major enterprise risk for most organisations. The trust of customers is essential to reputation. A key question in all decision making is whether the outcome will foster trust or destroy it.

Walkie-Talkie “death ray” and risk identification

News media have been full of the tale of London’s Walkie-Talkie office block raising temperatures on the nearby highway to car-melting levels.

The full story of how the architects and engineers created the problem has yet to be told. It is certainly the case that similar phenomena have been reported elsewhere. According to one news report, the Walkie-Talkie’s architect had worked on a Las Vegas hotel that caused similar problems back in September 2010.

More generally, an external hazard from a product’s optical properties is certainly something that has been noted in the past. It appears from this web page that domestic low-emissivity (low-E) glass was suspected of setting fire to adjacent buildings as long ago as 2007. I have not yet managed to find the Consumer Product Safety Commission report into low-E glass but I now know all about the hazards of snow globes.

The Walkie-Talkie phenomenon marks a signal failure in risk management and it will cost somebody to fix it. It is not yet clear whether this was a miscalculation of a known hazard or whether the hazard was simply neglected from the start.

Risk identification is the most fundamental part of risk management. If you have failed to identify a risk, you are not in a position to control, mitigate or externalise it in advance. Risk identification is also the hardest part. In the case of the Walkie-Talkie, modern materials, construction methods and aesthetic tastes have conspired to create a phenomenon that was not, at least as an accidental feature, present in structures before this century. That means that risk identification is not a matter of running down a checklist of known hazards to see which apply. Novel and emergent risks are always the most difficult to identify, especially where they involve the impact of an artefact on its environment. This is a real, as Daniel Kahneman would put it, System 2 task. The standard checklist propels it back to the flawed System 1 level. As we know, even when we think we are applying a System 2 mindset, we may subconsciously be loafing in a subliminal System 1.

It is very difficult to spot when something has been missed out of a risk assessment, even in familiar scenarios. In a famous 1978 study, Fischhoff, Slovic and others showed college students fault trees analysing potential causes of a car’s failure to start (this was 1978). Some of the fault trees had been “pruned”: one branch, representing say “battery charge”, had been removed. The subjects were very poor at spotting that a major, and well-known, source of failure had been omitted from the analysis. Where failure modes are unfamiliar, it is even more difficult to identify the lacuna.
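The effect of pruning can be made concrete with a toy calculation. This is a sketch only: the branch names echo the study’s fault tree, but the probabilities are invented for illustration, not taken from the 1978 data:

```python
# Hypothetical cause probabilities for "car fails to start"; the branch names
# echo the 1978 fault-tree study but the numbers are invented for illustration.
full_tree = {
    "battery charge": 0.28,
    "starting system": 0.20,
    "fuel system": 0.19,
    "ignition system": 0.14,
    "engine problems": 0.08,
    "mischief or vandalism": 0.04,
    "all other problems": 0.07,
}
assert abs(sum(full_tree.values()) - 1.0) < 1e-9  # a complete tree

# Prune a major branch, as Fischhoff and colleagues did.
pruned_tree = {k: v for k, v in full_tree.items() if k != "battery charge"}

# Normatively, the pruned branch's share should migrate into "all other
# problems"; subjects instead barely inflated that category, failing to
# notice that a major, well-known cause was missing.
missing_mass = 1.0 - sum(pruned_tree.values())
print(f"probability mass unaccounted for after pruning: {missing_mass:.2f}")
```

The unaccounted-for mass is exactly the pruned branch’s share, yet the study’s subjects behaved as though the pruned tree were still complete.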

Even where failure modes are identified, if they are novel then they still present challenges in effective design and risk management. Henry Petroski, in Design Paradigms, his historical analysis of human error in structural engineering, shows how novel technologies present challenges for the development of new engineering methodologies. As he says:

There is no finite checklist of rules or questions that an engineer can apply and answer in order to declare that a design is perfect and absolutely safe, for such finality is incompatible with the whole process, practice and achievement of engineering. Not only must engineers preface any state-of-the-art analysis with what has variously been called engineering thinking and engineering judgment, they must always supplement the results of their analysis with thoughtful and considered interpretations of the results.

I think there are three principles that can help guard against an overly narrow vision. Firstly, involve as broad a selection of people as possible in hazard identification. Perhaps take a diagonal slice of the organisation. Do not put everybody in a room together where they can converge rapidly. This is probably a situation where some variant of the Delphi method can be justified.
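A Delphi-style round can be sketched very simply: each assessor lists hazards independently, and a facilitator feeds back an anonymous tally for the next round. A minimal illustration in Python, with the assessors and hazard names invented for the example:

```python
from collections import Counter

def delphi_tally(hazard_lists):
    """Aggregate independently elicited hazard lists into an anonymous tally.
    Each assessor's list is de-duplicated so nobody can multiply their own
    vote; the tally is fed back for the next round without revealing who
    named what, which limits premature convergence."""
    return Counter(h for hazards in hazard_lists for h in set(hazards))

# Hypothetical first round from a diagonal slice of the organisation.
round_one = [
    ["glare onto highway", "panel detachment", "wind loading"],
    ["wind loading", "fire spread"],
    ["glare onto highway", "wind loading", "pedestrian microclimate"],
]
for hazard, votes in delphi_tally(round_one).most_common():
    print(f"{votes} assessor(s) identified: {hazard}")
```

The value lies less in the tally than in the minority items it surfaces: a hazard named by one assessor still reaches everyone before the group converges.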

Secondly, be aware that all assessments are provisional. Make design assumptions explicit. Collect data at every stage, especially on your assumptions. Compare the data with what you predicted would happen. Respond to any surprises by protecting the customer and investigating. Even if you’ve not yet melted a Jaguar, if the glass is looking a little more reflective than you thought it would be, take immediate action. Do not wait until you are in the Evening Standard. There is a reputation management side to this too.
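That loop of explicit assumptions, data collection and response to surprise can be sketched in a few lines. The reflectivity figures and the tolerance below are invented purely for illustration:

```python
# Minimal sketch of tracking an explicit design assumption against field
# measurements; the assumed reflectivity and tolerance are invented numbers.
DESIGN_ASSUMPTION = {"glass_reflectivity": 0.20, "tolerance": 0.05}

def check_measurement(measured: float) -> str:
    """Compare a measured reflectivity with the explicit design assumption."""
    gap = abs(measured - DESIGN_ASSUMPTION["glass_reflectivity"])
    if gap <= DESIGN_ASSUMPTION["tolerance"]:
        return "within assumption: keep collecting data"
    return "surprise: protect the public and investigate"

print(check_measurement(0.22))  # close to the assumption, no action yet
print(check_measurement(0.35))  # glass more reflective than assumed: act now
```

The point is not the arithmetic but the discipline: the assumption is written down, so a surprise is detectable before it melts anything.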

Thirdly, as Petroski advocates, analysis of case studies and reflection on the lessons of history help to broaden horizons and develop a sense of humility. It seems nobody’s life is actually in danger from this “death ray”, but the history of failures to identify risk leaves a more tangible record of mortality.