Target and the Targeteers

This blog appeared on the Royal Statistical Society website Statslife on 29 May 2014

John Pullinger, newly appointed head of the UK Statistics Authority, has given a trenchant warning about the “unsophisticated” use of targets. As reported in The Times (London) (“Targets could be skewing the truth, statistics chief warns”, 26 May 2014 – paywall) he cautions:

Anywhere we have had targets, there is a danger that they become an end in themselves and people lose sight of what they’re trying to achieve. We have numbers everywhere but haven’t been well enough schooled on how to use them and that’s where problems occur.

He goes on:

The whole point of all these things is to change behaviour. The trick is to have a sophisticated understanding of what will happen when you put these things out.

Pullinger makes it clear that he is no opponent of targets, but that in the hands of the unskilled they can create perverse incentives, encouraging behaviour that distorts the system they were meant to control and frustrating the very improvement they were intended to achieve.

For example, two train companies are being assessed by the regulator for punctuality. A train is defined as “on-time” if it arrives within 5 minutes of schedule. The target is 95% punctuality.
[Table: arrival-time data for Company 1 and Company 2 against the 5-minute target]
Evidently, simple management by target fails to reveal that Company 1 is doing better than Company 2 in offering a punctual service to its passengers. A simple statement of “95% punctuality (punctuality defined as arriving within 5 minutes of timetable)” discards much of the information in the data.
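
To make the point concrete, here is a small sketch with made-up delay figures (hypothetical, not the numbers in the table above). Both companies hit the 95% target against the 5-minute threshold, but the binary measure hides how late the late trains actually are.

```python
# Illustrative sketch only: hypothetical delay data, not the figures from the table above.
# Both companies meet the 95% target, yet the experience of passengers on late trains differs greatly.

def punctuality(delays_min, threshold=5):
    """Fraction of trains arriving within `threshold` minutes of schedule."""
    return sum(d <= threshold for d in delays_min) / len(delays_min)

def mean_delay_of_late_trains(delays_min, threshold=5):
    """Average delay, in minutes, of the trains that missed the threshold."""
    late = [d for d in delays_min if d > threshold]
    return sum(late) / len(late) if late else 0.0

# 100 trains each; 95 arrive on time for both companies.
company_1 = [0] * 95 + [6, 7, 7, 8, 9]        # late trains only just miss the threshold
company_2 = [0] * 95 + [25, 30, 40, 55, 70]   # late trains are very late indeed

for name, delays in [("Company 1", company_1), ("Company 2", company_2)]:
    print(name,
          f"punctuality = {punctuality(delays):.0%},",
          f"mean delay of late trains = {mean_delay_of_late_trains(delays):.0f} min")
```

Reported against the target alone, the two companies look identical; only the distribution of lateness reveals the difference.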

Further, when presented with a train that has slipped outside the 5 minute tolerance, a manager held solely to the target of 95% has no incentive to stop the late train from slipping even further behind. Certainly, if it puts further trains at risk of lateness, there will always be a temptation to strip it of all priority. Here, the target is not only a barrier to effective measurement and improvement, it is a threat to the proper operation of the railway. That is the point that Pullinger was seeking to make about the behaviour induced by the target.

And again, targets often provide only a “snapshot” rather than the “video” that discloses the information in the data that can be used for planning and managing an enterprise.

I am glad that Pullinger was not hesitant to remind users that proper deployment of system measurement requires an appreciation of psychology. Nobel Laureate psychologist Daniel Kahneman warns of the inherent human trait of thinking that What you see is all there is (WYSIATI). On their own, targets do little to guard against such bounded rationality.

In support of a corporate programme of improvement and integrated into a culture of rigorous data criticism, targets have manifest benefits. They communicate improvement priorities. They build confidence between interfacing processes. They provide constraints and parameters that prevent the system causing harm, whether to others or to itself. What is important is that the targets do not become a shield for weak managers who wish to hide their lack of understanding of their own processes behind the defence that “all targets were met”.

However, all that requires some sophistication in approach. I think the following points provide a basis for auditing how an organisation is using targets.

Risk assessment

Targets should be risk assessed, anticipating realistic psychology and envisaging the range of behaviours the targets are likely to catalyse.

Customer focus

Anyone tasked with operating to a target should be periodically challenged with a review of the Voice of the Customer and how their own role contributes to the organisational system. The target is only an aid to the continual improvement of the alignment between the Voice of the Process and the Voice of the Customer. That is the only game in town.

Borrowed validation

Any organisation of any size will usually have independent data of sufficient borrowing strength to support mutual validation. There was a very good recent example of this in the UK where falling crime statistics, about which the public were rightly cynical and incredulous, were effectively validated by data collection from hospital emergency departments (Violent crime in England and Wales falls again, A&E data shows).

Over-adjustment

Mechanisms must be in place to deter over-adjustment, what W Edwards Deming called “tampering”, where naïve pursuit of a target adds variation and degrades performance.
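
Deming illustrated tampering with his funnel experiment. The sketch below is a toy simulation of the same idea (my own illustration, not Deming’s apparatus): a stable process already on target is “adjusted” after every observation in an attempt to cancel the last deviation, and the adjustment roughly doubles the variance.

```python
# Toy simulation of tampering (in the spirit of Deming's funnel experiment, rules 1 and 2).
# Assumption: a stable process aimed at target 0 with Gaussian noise; figures are illustrative only.
import random
import statistics

random.seed(1)
N = 10_000
noise = [random.gauss(0, 1) for _ in range(N)]

# Rule 1: leave the process alone.
untouched = noise

# Rule 2 (tampering): after each result, move the setting by the opposite of the last
# deviation in an attempt to bring the next result back onto target.
adjusted, setting = [], 0.0
for e in noise:
    result = setting + e
    adjusted.append(result)
    setting -= result          # the "adjustment" that chases the target

print("Variance, hands off: ", round(statistics.variance(untouched), 2))   # about 1.0
print("Variance, tampering: ", round(statistics.variance(adjusted), 2))    # about 2.0
```

The naive adjustment rule roughly doubles the variance of a process that was already on target, which is precisely the degradation this audit point is meant to deter.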

Discipline

Employees must be left in no doubt that lack of care in maintaining the integrity of the organisational system and pursuing customer excellence will not be excused by mere adherence to a target, no matter how heroic.

Targets are for the guidance of the wise. To regard them as anything else is to ask them to do too much.

Deconstructing Deming I – Constancy of Purpose

My 20 December 2013 post on W Edwards Deming attracted quite a lot of interest. The response inspired me to take a detailed look at his ideas 20 years on, starting with his 14 Points.

Deming’s 14 Points for Management are his best remembered takeaway. Deming put them forward as representative of the principles adopted by Japanese industry in its rise from 1950 to the prestigious position it held in manufacturing at the beginning of the 1980s.

Point 1. Create constancy of purpose toward improvement of product and service, with the aim to become competitive and to stay in business, and to provide jobs.

In his elaboration of the point in Out of the Crisis, Deming explained what he meant. Managing a business was not only about exploiting existing products and processes to generate a stream of profits. It was also about re-inventing those products and processes, innovating and developing to retain and capture markets. Deming was fearful that management focused too much on short term profits from existing products and services, and that an effort of leadership was needed to reorient attention and resources towards design and development. The “improvement” that Deming was referring to is that achieved through design and redesign, not simply the incremental improvement of existing value streams. Critically, Deming saw design and redesign as a key business process that should itself be the target of incremental continual improvement. Design and redesign was not an ad hoc project initiated by some rare, once in a generation sea-change in the market or motivated by a startling idea from an employee. It was a routine and “constant” part of the business of a business.

Some of Deming’s latter-day followers deprecate the radical redesign of processes in approaches such as Business Process Re-engineering, and instead promote the incremental improvement of existing processes by those who work in them. That is exactly the approach that Deming was warning against in Point 1.

It is worth recalling the economic and geopolitical climate within which Deming put forward this principle. During the early 1980s, the US and Western Europe suffered a significant recession, their populations beset with the dual evils of unemployment and inflation. The economic insecurities were aggravated by social unrest in the West and the intensification of the Cold War.

In 1980 Robert Hayes and William Abernathy, academics at the Harvard Business School, attacked US management in their seminal paper Managing Our Way to Economic Decline. They found that fewer and fewer executives were from engineering and operations backgrounds, but increasingly from law and finance. Such managers had little understanding, they said, of the mechanics of the businesses they ran or the markets in which they competed. That in turn led executives to pursue short-term profits from existing value streams. These were easy to measure and predict from the visible accounts. However, managers were allegedly ill-placed to make informed decisions as to the new products or services that would determine future profits. The uncertainties of such decisions were unknown and unknowable other than to a discipline specialist. Franklin Fisher characterised matters in this way (1989, “Games economists play: a noncooperative view”, Rand Journal of Economics 20, 113):

Bright young theorists tend to think of every problem in game theoretic terms, including problems that are easier to think of in other forms.

This all appeared in contrast to Japanese manufacturing industry and in particular Toyota. By 1980, Japanese manufactured goods had come increasingly to dominate global markets. Japanese success was perceived as the (Lawrence Freedman, 2013, Strategy: A History, p531):

… triumph of a focussed, patient, coherent, consensual culture, a reflection of dedicated operational efficiency, or else a combination of the two.

Certainly in my own automotive industry days, my employer had come to see its most successful products literally as commodities. They belatedly realised that, while they had been treating those products as a mere income stream, admittedly spent largely on unsuccessful attempts to develop radical new products, Japanese competitors had been filing dozens of patents each year making incremental improvements to design and function, and threatening the company’s core revenues.

But did Deming choose the right target and, in any event, does the exhortation remain cogent? It feels in 2014 as though we all have much more appetite for innovation, invention and product design than we had in 1983. Blogs extol the virtues of, and strategies for, entrepreneurship. Slogans such as “Fail early, fail fast, fail often” proliferate. It is not clear from this web activity whether innovation is being backed by capital. However, the very rate of technological change in society suggests that capital is backing novelty rather than simply engaging in the rent seeking that Hayes and Abernathy feared.

In 2007 Hayes reflected on his 1980 work. He felt that his views had become mainstream and uncontroversial, and been largely adopted in corporations. However, information and globalisation had created a new set of essentials to be addressed and to become part of the general competencies of a manager (“Managing Our Way… A Retrospective by Robert H. Hayes” Harvard Business Review, July-August 2007, 138-149).

I remain unpersuaded that there has been such a broadening in the skill set of managers. The game theorists, data scientists and economists seem to remain in the ascendancy. Whatever change in attitudes to design has taken place, it has happened against a background where CEOs still hop industries. There are other explanations for a lack of innovation. Daniel Ellsberg’s principle of ambiguity aversion predicts that quantifiable risks apparent from the visible accounts will tend to be preferred over ambiguous returns on future inventions, even by subject matter experts. Prevailing comparative advantages may point some corporations away from research. Further, capital flows were particularly difficult in the early 1980s recession. Liberalisation of markets and the rolling back of the state in the 1980s led to more efficient allocation of capital and coincided with a palpable increase in the volume, variety and quality of consumer goods available in the West. There is no guarantee against a failure of strategy. My automotive employer hadn’t missed the importance of new product development, but they made a strategic mistake in allocating resources.

Further, psychologist Daniel Kahneman found evidence of a countervailing undue optimism about future business, referring to “entrepreneurial delusions” and “competition neglect”, two aspects of What you see is all there is (Thinking, Fast and Slow, 2011, Chapter 24).

In Notes from Toyota-Land: An American Engineer in Japan (2005), Robert Perrucci and Darius Mehri criticised Toyota’s approach to business. Ironically, Mehri contended that Toyota performed weakly in innovation and encouraged narrow professional skills. It turned out that Japanese management didn’t prevent a collapse in the Japanese economy lasting from 1991 to the present. Toyota itself went on to suffer serious reputational damage (Robert E. Cole, “What Really Happened to Toyota?”, MIT Sloan Management Review, Summer 2011).

So Deming and others were right to draw attention to Western underperformance in product design. However, I suspect that the adoption of a more design-led culture is largely due to macroeconomic forces rather than exhortations.

There is still much to learn, however, in balancing the opportunities apparent from visible accounts with the uncertainties of imagined future income streams.

I think there remains an important message, perhaps a Point 1 for the 21st Century.

There’s a problem bigger than the one you’re working on. Don’t ignore it!

Do I have to be a scientist to assess food safety?

I saw this BBC item on the web before Christmas: Why are we more scared of raw egg than reheated rice? Just after Christmas seemed like a good time to blog about food safety. Actually, the link I followed asked Are some foods more dangerous than others? A question that has a really easy answer.

However, understanding the characteristic risks of various foods and how most safely to prepare them is less simple. Risk theorist John Adams draws a distinction between readily identified inherent and obvious risks, and risks that can only be perceived with the help of science. Food risks fall into the latter category. As far as I can see, “folk wisdom” is no reliable guide here, even partially. The BBC article refers to risks from rice, pasta and salad vegetables which are not obvious. At the same time, in the UK at least, the risk from raw eggs is very small.

Ironically, raw eggs are one food that springs readily to British people’s minds when food risk is raised, largely due to the folk memory of a high profile but ill thought out declaration by a government minister in the 1980s. This is an example of what Amos Tversky and Daniel Kahneman called an availability heuristic: If you can think of it, it must be important.

Food safety is an environment where an individual is best advised to follow the advice of scientists. We commonly receive this filtered, even if only for accessibility, through government agencies. That takes us back to the issue of trust in bureaucracy on which I have blogged before.

I wonder whether governments are in the best position to provide such advice. It is food suppliers who suffer from the public’s misallocated fears. The egg fiasco of the 1980s had a catastrophic effect on UK egg sales. All food suppliers have an interest in a market characterised by a perception that the products are safe. The food industry is also likely to be in the best position to know what is best practice, to improve such practice, to know how to communicate it to their customers, to tailor it to their products and to provide the effective behavioural “nudges” that promote safe handling. Consumers are likely to be cynical about governments, “one size fits all” advice and cycles of academic meta-analysis.

I think there are also lessons here for organisations. Some risks are assessed on the basis of scientific analysis. It is important that the prestige of that origin is communicated to all staff who will be involved in working with risk. The danger for any organisation is that an individual employee might make a reassessment based on local data and their own self-serving emotional response. As I have blogged before, some individuals have particular difficulty in aligning themselves with the wider organisation.

Of course, individuals must also be equipped with the means of detecting when the assumptions behind the science have been violated and initiating an agile escalation so that employee, customer and organisation can be protected while a reassessment is conducted. Social media provide new ways of sharing experience. I note from the BBC article that, in the UK at least, there is no real data on the origins of food poisoning outbreaks.

So the short answer to the question at the head of this blog still turns out to be “yes”. There are some things where we simply have to rely on science if we want to look after ourselves, our families and our employees.

But even scientists are limited by their own bounded rationality. Science is a work in progress. Using that science itself as a background against which to look for novel phenomena and neglected residual effects leverages that original risk analysis into a key tool in managing, improving and growing a business.

Richard Dawkins champions intelligent design (for business processes)

Richard Dawkins has recently had a couple of bad customer experiences. In each he was confronted with a system that seemed to him indifferent to his customer feedback. I sympathise with him on one matter but not the other. The two incidents do, in my mind, elucidate some important features of process discipline.

In the first, Dawkins spent a frustrating spell ordering a statement from his bank over the internet. He wanted to tell the bank about his experience and offer some suggestions for improvement, but he couldn’t find any means of channelling and communicating his feedback.

Embedding a business process in software will impose a rigid discipline on its operation. However, process discipline is not the same thing as process petrification. The design assumptions of any process include, or should include, the predicted range and variety of situations that the process is anticipated to encounter. We know that the bounded rationality of the designers will blind them to some of the situations that the process will subsequently confront in real world operation. There is no shame in that but the necessary adjunct is that, while the process is operated diligently as designed, data is accumulated on its performance and, in particular, on the customer’s experience. Once an economically opportune moment arrives (I have glossed over quite a bit there) the data can be reviewed, design assumptions challenged and redesign evaluated. Following redesign the process then embarks on another period of boring operation. The “boring” bit is essential to success. Perhaps I should say “mindful” rather than “boring” though I fear that does not really work with software.

Dawkins’ bank have missed an opportunity to listen to the voice of the customer. That weakens their competitive position. Ignorance cannot promote competitiveness. Any organisation that is not continually improving every process for planning, production and service (pace W Edwards Deming) faces the inevitable fact that its competitors will ultimately make such products and services obsolete. As Dawkins himself would appreciate, survival is not compulsory.

Dawkins’ second complaint was that security guards at a UK airport would not allow him to take a small jar of honey onto his flight because of a prohibition on liquids in the passenger cabin. Dawkins felt that the security guard should have displayed “common sense” and allowed it on board contrary to the black letter of the regulations. Dawkins protests against “rule-happy officials” and “bureaucratically imposed vexation”. Dawkins displays another failure of trust in bureaucracy. He simply would not believe that other people had studied the matter and come to a settled conclusion to protect his safety. It can hardly have been for the airport’s convenience. Dawkins was more persuaded by something he had read on the internet. He fell into the trap of thinking that What you see is all there is. I fear that Dawkins betrays his affinities with the cyclist on the railway crossing.

When we give somebody a process to operate we legitimately expect them to do so diligently and with self discipline. The risk of an operator departing from, adjusting or amending a process on the basis of novel local information is that, within the scope of the resources they have for taking that decision, there is no way of reliably incorporating the totality of assumptions and data on which the process design was predicated. Even were all the data available, when Dawkins talks of “common sense” he was demanding what Daniel Kahneman called System 2 thinking. Whenever we demand System 2 thinking ex tempore we are more likely to get System 1 and it is unlikely to perform effectively. The rationality of an individual operator in that moment is almost certainly more tightly bounded than that of the process designers.

In this particular case, any susceptibility of a security guard to depart from process would be exactly the behaviour that a terrorist might seek to exploit once aware of it.

Further, departures from process will have effects on the organisational system, upstream, downstream and collateral. Those related processes themselves rely on the operator’s predictable compliance. The consequence of ill discipline can be far reaching and unanticipated.

That is not to say that the security process was beyond improvement. In an effective process-oriented organisation, operating the process would be only one part of the security guard’s job. Part of the bargain for agreeing to the boring/mindful diligent operation of the process is that part of work time is spent improving the process. That is something done offline, with colleagues, with the input of other parts of the organisation and with recognition of all the data including the voice of the customer.

Had he exercised the “common sense” Dawkins demanded, the security guard would have risked disciplinary action by his employers for serious misconduct. To some people, threats of sanctions appear at odds with engendering trust in an organisation’s process design and decision making. However, when we tell operators that something is important then fail to sanction others who ignore the process, we undermine the basis of the bond of trust with those that accepted our word and complied. Trust in the bureaucracy and sanctions for non-compliance are complementary elements of fostering process discipline. Both are essential.

The Monty Hall Problem redux

This old chestnut refuses to die and I see that it has turned up again on the BBC website. I have been intending for a while to blog about this so this has given me the excuse. I think that there has been a terrible history of misunderstanding this problem and I want to set down how the confusion comes about. People have mistaken a problem in psychology for a problem in probability.

Here is the classic statement of the problem that appeared in Parade magazine in 1990.

Suppose you’re on a game show, and you’re given the choice of three doors: Behind one door is a car; behind the others, goats. You pick a door, say No. 1, and the host, who knows what’s behind the doors, opens another door, say No. 3, which has a goat. He then says to you, “Do you want to pick door No. 2?” Is it to your advantage to switch your choice?

The rational way of approaching this problem is through Bayes’ theorem. Bayes’ theorem tells us how to update our views as to the probability of events when we have some new information. In this problem I have never seen anyone start from a position other than that, before any doors are opened, no door is more probably hiding the car than the others. I think it is uncontroversial to say that for each door the probability of its hiding the car is 1/3.

Once the host opens door No. 3, we have some more information. We certainly know that the car is not behind door No. 3 but does the host tell us anything else? Bayes’ theorem tells us how to ask the right question. The theorem can be illustrated like this.
[Figure: Bayes’ theorem]

The probability of observing the new data, if the theory is correct (the green box), is called the likelihood and plays a very important role in statistics.
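
Since the original figure is not reproduced here, the standard statement of the theorem, for a theory T and new data D, is:

\[
P(T \mid D) = \frac{P(D \mid T)\,P(T)}{P(D)}
\]

where P(D | T) is the likelihood (the “green box” in the figure) and P(T) is the prior probability of the theory before the data arrived.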

Without giving the details of the mathematics, Bayes’ theorem leads us to analyse the problem in this way.

[Figure: Bayes’ theorem applied to the Monty Hall problem]

We can work this out arithmetically but, because all three doors were initially equally probable, the matter comes down to deciding which of the two likelihoods is greater.

[Figure: comparison of the two likelihoods]
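
In place of the missing figure, the comparison can be written out explicitly. Because the three doors start with equal prior probabilities, the priors cancel and the posterior odds reduce to the ratio of the two likelihoods:

\[
\frac{P(\text{car at 2} \mid \text{host opens 3, goat})}{P(\text{car at 1} \mid \text{host opens 3, goat})}
= \frac{P(\text{host opens 3, goat} \mid \text{car at 2})}{P(\text{host opens 3, goat} \mid \text{car at 1})}
\]

Switch if the ratio exceeds 1, stay if it is below 1, and be indifferent if it equals 1.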

So what are the respective probabilities of the host behaving in the way he did? Unfortunately, this is where we run into problems because the answer depends on the tactic that the host was adopting.

And we are not given that in the question.

Consider some of the following possible tactics the host may have adopted.

  1. Open an unopened door hiding a goat; if both unopened doors have goats, choose between them at random.
  2. If the contestant chooses door 1 (or 2, or 3), always open door 3 (or 1, or 2), whether or not it contains a goat.
  3. Open either unopened door at random, but only if the contestant has chosen the door with the prize; otherwise don’t open a door (the devious strategy, suggested to me by a former girlfriend as the obviously correct answer).
  4. Choose an unopened door at random. If it hides a goat, open it. Otherwise do not open a door (not the same as tactic 1).
  5. Open either unopened door at random, whether or not it contains a goat.

There are many more. All these various tactics lead to different likelihoods.

Tactic   P(host reveals a goat at door 3 | car at 1)   P(host reveals a goat at door 3 | car at 2)   Rational choice
  1                         ½                                               1                         Switch
  2                         1                                               1                         No difference
  3                         ½                                               0                         Don’t switch
  4                         ½                                               ½                         No difference
  5                         ½                                               ½                         No difference
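
For anyone who wants to check the likelihoods in the table, here is a minimal Monte Carlo sketch of my own, covering tactics 1, 3 and 5 as examples. It estimates how often the host opens door 3 to reveal a goat when the car is behind door 1 and when it is behind door 2, with the contestant always having chosen door 1.

```python
# Monte Carlo check of the likelihoods in the table above (tactics 1, 3 and 5 only).
# The contestant always picks door 1; doors are numbered 1 to 3. Sketch only.
import random

def host_opens_3_with_goat(car, tactic):
    """Return True if the host opens door 3 and it hides a goat."""
    unopened = [2, 3]                    # doors the contestant did not pick
    if tactic == 1:                      # open an unopened door hiding a goat, at random if there are two
        goat_doors = [d for d in unopened if d != car]
        return random.choice(goat_doors) == 3
    if tactic == 3:                      # devious: open a random door only if the contestant picked the car
        if car != 1:
            return False
        return random.choice(unopened) == 3
    if tactic == 5:                      # open a random unopened door regardless of what it hides
        door = random.choice(unopened)
        return door == 3 and door != car
    raise ValueError("tactic not implemented in this sketch")

def likelihood(car, tactic, trials=100_000):
    return sum(host_opens_3_with_goat(car, tactic) for _ in range(trials)) / trials

for tactic in (1, 3, 5):
    print(f"Tactic {tactic}:",
          f"P(goat at 3 | car at 1) ≈ {likelihood(1, tactic):.2f},",
          f"P(goat at 3 | car at 2) ≈ {likelihood(2, tactic):.2f}")
```

Under tactic 1 the likelihood ratio is 2, so switching wins two times out of three; under tactic 3 it is 0, so switching always loses; under tactic 5 the evidence is neutral.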

So if we were given this situation in real life we would have to work out which tactic the host was adopting. The problem is presented as though it is a straightforward maths problem but it critically hinges on a problem in psychology. What can we infer from the host’s choice? What is he up to? I think that this leads to people’s discomfort and difficulty. I am aware that even people who start out assuming Tactic 1 struggle but I suspect that somewhere in the back of their minds they cannot rid themselves of the other possibilities. The seeds of doubt have been sown in the way the problem is set.

A participant in the game show would probably have to make a snap judgment about the meaning of the new data. This is the sort of thinking that Daniel Kahneman calls System 1 thinking. It is intuitive, heuristic and terribly bad at coping with novel situations. Fear of the devious strategy may well prevail.

A more ambitious contestant may try to embark on more reflective analytical System 2 thinking about the likely tactic. That would be quite an achievement under pressure. However, anyone with the inclination may have been able to prepare himself with some pre-show analysis. There may be a record of past shows from which the host’s common tactics can be inferred. The production company’s reputation in similar shows may be known. The host may be displaying signs of discomfort or emotional stress, the “tells” relied on by poker players.

There is a lot of data potentially out there. However, that only leads us to another level of statistical, and psychological, inference about the host’s strategy, an inference that itself relies on its own uncertain likelihoods and prior probabilities. And that then leads to the level of behaviour and cognitive psychology and the uncertainties in the fundamental science of human nature. It seems as though, as philosopher Richard Jeffrey put it, “It’s probabilities all the way down”.

Behind all this, it is always useful advice that, having once taken a decision, it should only be revised if there is some genuinely new data that was surprising given our initial thinking.

Economist G L S Shackle long ago lamented that:

… we habitually and, it seems, unthinkingly assume that the problem facing … a business man, is of the same kind as those set in examinations in mathematics, where the candidate unhesitatingly (and justly) takes it for granted that he has been given enough information to construe a satisfactory solution. Where, in real life, are we justified in assuming that we possess ‘enough’ information?

Music is silver but …

The other day I came across a report on the BBC website that non-expert listeners could pick out winners of piano competitions more reliably when presented with silent performance videos than when exposed to sound alone. In the latter case they performed no better than chance.

The report was based on the work of Chia-Jung Tsay at University College London, in a paper entitled Sight over sound in the judgment of music performance.

The news report immediately leads us to suspect that the expert evaluating a musical performance is not in fact analysing and weighing auditory complexity and aesthetics but instead falling under the subliminal influence of the proxy data of the artist’s demeanour and theatrics.

That is perhaps unsurprising. We want to believe, as does the expert critic, that performance evaluation is a reflective, analytical and holistic enterprise, demanding decades of exposure to subtle shades of interpretation and developing skills of discrimination by engagement with the ascendant generation of experts. This is what Daniel Kahneman calls a System 2 task. However, a wealth of psychological study shows only too well that System 2 is easily fatigued and distracted. When we believe we are thinking in System 2, we are all too often loafing in System 1 and using simplistic learned heuristics as a substitute. It is easy to imagine that the visual proxy data might be such a heuristic, a ready reckoner that provides a plausible result in a wide variety of commonly encountered situations.

These behaviours are difficult to identify, even for the most mindful individual. Kahneman notes:

… all of us live much of our lives guided by the impressions of System 1 – and we do not know the source of these impressions. How do you know that a statement is true? If it is strongly linked by logic or association to other beliefs or preferences you hold, or comes from a source you trust and like, you will feel a sense of cognitive ease. The trouble is that there may be other causes for your feeling of ease … and you have no simple way of tracing your feelings to their source

Thinking, Fast and Slow, p64

The problem is that what Kahneman describes is exactly what I was doing in finding my biases confirmed by this press report. I have had a superficial look at the statistics in this study and I am now less persuaded than when I read the press item. I may blog later about this and about the difficulties I had in interpreting the analysis. Really, this is quite a tentative and suggestive study on a very limited frame. I would certainly like to see more inter-laboratory studies in psychology. The study is open to multiple interpretations and any individual will probably have difficulty making an exhaustive list. There is always a danger of falling into the trap of What You See Is All There Is (WYSIATI).

That notwithstanding, even anecdotally, the story is another reminder of an important lesson of process management that, even though what we have been doing has worked in the past, we may not understand what it is that has been working.

Walkie-Talkie “death ray” and risk identification

News media have been full of the tale of London’s Walkie-Talkie office block raising temperatures on the nearby highway to car melting levels.

The full story of how the architects and engineers created the problem has yet to be told. It is certainly the case that similar phenomena have been reported elsewhere. According to one news report, the Walkie-Talkie’s architect had worked on a Las Vegas hotel that caused similar problems back in September 2010.

More generally, an external hazard from a product’s optical properties is certainly something that has been noted in the past. It appears from this web page that domestic low-emissivity (low-E) glass was suspected of setting fire to adjacent buildings as long ago as 2007. I have not yet managed to find the Consumer Product Safety Commission report into low-E glass but I now know all about the hazards of snow globes.

The Walkie-Talkie phenomenon marks a signal failure in risk management and it will cost somebody to fix it. It is not yet clear whether this was a miscalculation of a known hazard or whether the hazard was simply neglected from the start.

Risk identification is the most fundamental part of risk management. If you have failed to identify a risk you are not in a position to control, mitigate or externalise it in advance. Risk identification is also the hardest part. In the case of the Walkie-Talkie, modern materials, construction methods and aesthetic tastes have conspired to create a phenomenon that was not, at least as an accidental feature, present in structures before this century. That means that risk identification is not a matter of running down a checklist of known hazards to see which apply. Novel and emergent risks are always the most difficult to identify, especially where they involve the impact of an artefact on its environment. This is a real System 2 task, as Daniel Kahneman would put it. The standard checklist propels it back to the flawed System 1 level. As we know, even when we think we are applying a System 2 mindset, we may subconsciously be loafing in a subliminal System 1.

It is very difficult to spot when something has been missed out of a risk assessment, even in familiar scenarios. In a famous 1978 study, Fischhoff, Slovic and colleagues showed college students fault trees analysing the potential causes of a car’s failure to start. Some of the fault trees had been “pruned”: one branch, representing say “battery charge”, had been removed. The subjects were very poor at spotting that a major, and well known, source of failure had been omitted from the analysis. Where failure modes are unfamiliar, it is even more difficult to identify the lacuna.

Even where failure modes are identified, if they are novel then they still present challenges in effective design and risk management. Henry Petroski, in Design Paradigms, his historical analysis of human error in structural engineering, shows how novel technologies present challenges for the development of new engineering methodologies. As he says:

There is no finite checklist of rules or questions that an engineer can apply and answer in order to declare that a design is perfect and absolutely safe, for such finality is incompatible with the whole process, practice and achievement of engineering. Not only must engineers preface any state-of-the-art analysis with what has variously been called engineering thinking and engineering judgment, they must always supplement the results of their analysis with thoughtful and considered interpretations of the results.

I think there are three principles that can help guard against an overly narrow vision. Firstly, involve as broad a selection of people as possible in hazard identification. Perhaps take a diagonal slice of the organisation. Do not put everybody in a room together where they can converge rapidly. This is probably a situation where some variant of the Delphi method can be justified.

Secondly, be aware that all assessments are provisional. Make design assumptions explicit. Collect data at every stage, especially on your assumptions. Compare the data with what you predicted would happen. Respond to any surprises by protecting the customer and investigating. Even if you’ve not yet melted a Jaguar, if the glass is looking a little more reflective than you thought it would be, take immediate action. Do not wait until you are in the Evening Standard. There is a reputation management side to this too.

Thirdly, as Petroski advocates, analysis of case studies and reflection on the lessons of history helps to develop broader horizons and develop a sense of humility. It seems nobody’s life is actually in danger from this “death ray” but the history of failures to identify risk leaves a more tangible record of mortality.