Walkie-Talkie “death ray” and risk identification

News media have been full of the tale of London’s Walkie-Talkie office block raising temperatures on the nearby highway to car-melting levels.

The full story of how the architects and engineers created the problem has yet to be told. It is certainly the case that similar phenomena have been reported elsewhere. According to one news report, the Walkie-Talkie’s architect had worked on a Las Vegas hotel that caused similar problems back in September 2010.

More generally, an external hazard from a product’s optical properties is certainly something that has been noted in the past. It appears from this web page that domestic low-emissivity (low-E) glass was suspected of setting fire to adjacent buildings as long ago as 2007. I have not yet managed to find the Consumer Product Safety Commission report into low-E glass but I now know all about the hazards of snow globes.

The Walkie-Talkie phenomenon marks a signal failure in risk management and somebody will have to bear the cost of putting it right. It is not yet clear whether this was a miscalculation of a known hazard or whether the hazard was simply neglected from the start.

Risk identification is the most fundamental part of risk management. If you have failed to identify a risk, you are not in a position to control, mitigate or externalise it in advance. Risk identification is also the hardest part. In the case of the Walkie-Talkie, modern materials, construction methods and aesthetic tastes have conspired to create a phenomenon that was not, at least as an accidental feature, present in structures before this century. That means that risk identification is not a matter of running down a checklist of known hazards to see which apply. Novel and emergent risks are always the most difficult to identify, especially where they involve the impact of an artefact on its environment. This is, as Daniel Kahneman would put it, a genuine System 2 task. The standard checklist pushes it back down to the flawed System 1 level. As we know, even when we think we are applying a System 2 mindset, we may subconsciously be loafing in a subliminal System 1.

It is very difficult to spot when something has been missed out of a risk assessment, even in familiar scenarios. In a famous 1978 study, Fischhoff, Slovic and colleagues showed college students fault trees analysing the potential causes of a car’s failure to start (this was 1978, remember). Some of the fault trees had been “pruned”: one branch, representing say “battery charge”, had been removed. The subjects were very poor at spotting that a major, and well-known, source of failure had been omitted from the analysis. Where failure modes are unfamiliar, it is even more difficult to identify the lacuna.

Even where failure modes are identified, if they are novel then they still present challenges in effective design and risk management. Henry Petroski, in Design Paradigms, his historical analysis of human error in structural engineering, shows how novel technologies present challenges for the development of new engineering methodologies. As he says:

There is no finite checklist of rules or questions that an engineer can apply and answer in order to declare that a design is perfect and absolutely safe, for such finality is incompatible with the whole process, practice and achievement of engineering. Not only must engineers preface any state-of-the-art analysis with what has variously been called engineering thinking and engineering judgment, they must always supplement the results of their analysis with thoughtful and considered interpretations of the results.

I think there are three principles that can help guard against an overly narrow vision. Firstly, involve as broad a selection of people as possible in hazard identification; perhaps take a diagonal slice of the organisation. Do not put everybody in a room together where they can converge too rapidly on a shared view. This is probably a situation where some variant of the Delphi method can be justified.

Secondly, be aware that all assessments are provisional. Make design assumptions explicit. Collect data at every stage, especially on your assumptions. Compare the data with what you predicted would happen. Respond to any surprises by protecting the customer and investigating. Even if you’ve not yet melted a Jaguar, if the glass is looking a little more reflective than you thought it would be, take immediate action. Do not wait until you are in the Evening Standard. There is a reputation management side to this too.

Thirdly, as Petroski advocates, analysis of case studies and reflection on the lessons of history help to broaden horizons and develop a sense of humility. It seems nobody’s life is actually in danger from this “death ray” but the history of failures to identify risk leaves a more tangible record of mortality.

Trust in data – I

I was listening to the BBC’s election coverage on 2 May (2013) when Nick Robinson announced that UKIP supporters were five times more likely than other voters to believe that the MMR vaccine was dangerous.

I had a search on the web. The following graphic had appeared on Mike Smithson’s PoliticalBetting blog on 21 April 2013.

[Figure: bar chart from PoliticalBetting of attitudes to MMR safety by voting intention]

It’s not an attractive bar chart. The bars are different colours, and there is a “mean” bar that tends to make the variation look less than it is and makes the UKIP bar (next to it) look more extreme. I was, however, intrigued, so I had a look for the original data, which had come from a YouGov survey of 1,765 respondents. You can find the data here.

Here is a summary of the salient points of the data from the YouGov website in a table which I think is less distracting than the graphic.

Voting intention      Con.   Lab.   Lib. Dem.   UKIP
No. of respondents    417    518    142         212
MMR safe (%)          99     85     84          72
MMR unsafe (%)        1      3      12          28
Don’t know (%)        0      12     3           0

My first question was: where had Nick Robinson and Mike Smithson got their numbers from? It is possible that there was another survey I have not found. It is also possible that I am being thick. In any event, the YouGov data raises some interesting questions. This is an exploratory data analysis exercise. We are looking for interesting theories. I don’t think there is any doubt that there is a signal in this data. How do we interpret it? There does look to be some relationship between voting intention and attitude to public safety data.
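For anyone who wants a quick sanity check on that claim of a signal, here is a minimal sketch in Python (NumPy and SciPy assumed). It reconstructs approximate cell counts from the published percentages and sample sizes and runs a chi-squared test of independence; it ignores YouGov’s survey weighting, so treat it as indicative only.

import numpy as np
from scipy.stats import chi2_contingency

# Columns: Con, Lab, Lib Dem, UKIP (order as in the table above).
n = np.array([417, 518, 142, 212])     # respondents per voting intention
pct = np.array([
    [99, 85, 84, 72],                  # MMR safe
    [1, 3, 12, 28],                    # MMR unsafe
    [0, 12, 3, 0],                     # don't know
])
counts = np.rint(pct / 100 * n)        # approximate observed counts

chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi-squared = {chi2:.1f}, dof = {dof}, p = {p:.3g}")

On numbers as stark as these the test statistic is, unsurprisingly, large; the interesting work is in interpreting the association, not detecting it.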

Should anyone be tempted to sneer at people with political views other than their own, it is worth remembering that it is unlikely that anyone surveyed had scrutinised any of the published scientific research on the topic. All will have digested it, most probably at third hand, through the press, the internet or a water-cooler moment. They may not have any clear idea of the provenance of the assurances as to the vaccination’s safety. They may not have clearly identified whether what they had absorbed was a purportedly independent scientific study or a governmental policy statement that sought to rely on the science. I suspect that most of my readers have given it no more thought.

The mental process behind the answers probably wouldn’t withstand much analysis. This would be part of Kahneman’s System 1 thinking. However, the question of how such heuristics become established is an interesting one. I suspect there is a factor here that can be labelled “trust in data”.

Trust in data is an issue we all encounter, in business and in life. How do we know when we can trust data?

A starting point for many in this debate is the often-cited observation of Brian Joiner that, when presented with a numerical target, a manager has three options: manage the system so as to achieve the target; distort the system so the target is achieved but at the cost of performance elsewhere (possibly not on the dashboard); or simply distort the data. This, no doubt true, observation is then cited in support of the general proposition that management by numerical target is at best ineffective and at worst counterproductive. John Seddon is a particular advocate of the view that, whatever benefits may flow from management by target (and they are seldom championed with any great energy), they are outweighed by the inevitable corruption of the organisation’s data generation and reporting.

It is an unhappy view. One immediate objection is that the broader system cannot operate without targets. Unless the machine part’s diameter is between 49.99 and 50.01 mm it will not fit. Unless chlorine concentrations are below the safe limit, swimmers risk being poisoned. Unless demand for working capital is cut by 10% we will face the consequences of insolvency. Advocates of the target-free world respond that those matters can be characterised as the legitimate voice of the customer/business. It is only arbitrary targets that are corrosive.

I am not persuaded that the legitimate/arbitrary distinction is a real one, nor do I see how the distinction motivates two different kinds of behaviour. I will blog more about this later. Leadership’s urgent task is to ensure that all managers have the tools to measure present reality and work to improve it. Without knowing how much improvement is essential, a manager cannot make rational decisions about the allocation of resources. In that context, when the correct management control is exercised, improving the system is easier than cheating. I shall blog about goal deployment and Hoshin Kanri on another occasion.

Trust in data is just one facet of trust in general. In his popular book on evolutionary psychology and economics, The Origins of Virtue, Matt Ridley observes the following.

Trust is as vital a form of social capital as money is a form of actual capital. … Trust, like money, can be lent (‘I trust you because I trust the person who told me he trusts you’), and can be risked, hoarded or squandered. It pays dividends in the currency of more trust.

Within an organisation, trust in data is something for everybody to work on building collaboratively under diligent leadership. As to the public sphere, trust in data is related to trust in politicians and that may be a bigger problem to solve. It is also a salutary warning as to what happens when there is a failure of trust in leadership.

George Box and Response Surface Methods

News of George Box’s death escaped me while I was on vacation earlier this year and I thought it about time I commented on a huge statistical career. There are plenty of thorough obituaries on the web and I’m sure that the RSS will do a splendid job in due course. It is sad that there was no obituary in the Fleet Street press for somebody who made such an eminent contribution to science and technology. Box’s particular talents were formed through his English training and learning on the job. Perhaps his neglect on the national stage is a measure of the extent to which the biggest ideas work gradually and organically, away from the grandstanding of celebrity culture.

The word statistician feels inadequate to describe Box’s work. He was a man actively engaged in seeking novel methodologies for solving practical problems. Many of his solutions embraced what we conventionally think of as statistics. However, his work always seems that of somebody who looked for methodological solutions and sometimes found them in statistics, rather than a statistician looking to sell his product. Box described himself as “an accidental statistician” and I have a soft spot for people who arrive at their destinations by unconventional routes.

I was spurred on to reflect on Box’s work by Tim Davis’ worthwhile advocacy of dimensional analysis in experimental design. Box himself lamented that engineers are often so hypnotised when adopting statistical tools that they discard their engineering knowledge in the process with little regret. There is a gap between the engineer incubating a possibly ill-formed problem and a statistically inspired, structured investigation. Sometimes the leap between the two is hazardous. From the far side it’s sometimes difficult to look back and see what motivated the investigation. The more bridges we can find across that gap the better. I think few have approached the effectiveness with which Box pontificated (in the exact sense of the word).

One of Box’s greatest contributions was his advocacy of Response Surface Methods (“RSMs”). I think some of my most enjoyable statistical experiences were back in my automotive industry days when we were using RSMs with computer models to optimise design details on mechanical components. We were looking to improve durability and reduce warranty costs. I recall one case where we exploited an elastic-plastic model of a feature that took 16 hours to run on the company’s Cray supercomputer, a situation where even computer experiments needed a structured investigation.
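To give a flavour of the idea, here is a minimal RSM sketch in Python, not the models we actually ran: a cheap stand-in function plays the part of the expensive finite element model, a central composite design supplies the runs, and a full second-order polynomial is fitted and interrogated for its stationary point.

import numpy as np

def simulator(x1, x2):
    # Hypothetical stand-in for an expensive computer model.
    return 50 - 3*(x1 - 0.4)**2 - 2*(x2 + 0.3)**2 + 0.5*x1*x2

# Central composite design in two coded factors: 2^2 factorial + axial + centre runs.
alpha = np.sqrt(2)
design = np.array(
    [(-1, -1), (1, -1), (-1, 1), (1, 1),
     (-alpha, 0), (alpha, 0), (0, -alpha), (0, alpha),
     (0, 0), (0, 0), (0, 0)]
)
y = np.array([simulator(x1, x2) for x1, x2 in design])

# Full second-order model: 1, x1, x2, x1^2, x2^2, x1*x2.
x1, x2 = design[:, 0], design[:, 1]
X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1*x2])
b = np.linalg.lstsq(X, y, rcond=None)[0]

# Stationary point: set the gradient of the fitted quadratic to zero.
B = np.array([[2*b[3], b[5]], [b[5], 2*b[4]]])
stationary = np.linalg.solve(B, -b[1:3])
print("coefficients:", np.round(b, 3))
print("stationary point (coded units):", np.round(stationary, 3))

One attraction of the central composite design is that the axial and centre runs can be bolted on to an initial factorial, a point that becomes important below.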

As I said, Tim had got me thinking and I returned to a frustrating book that I had put down years ago: Walter G Vincenti’s What Engineers Know and How They Know It (1990, Johns Hopkins UP). Vincenti was an eminent aerospace engineer and the book is a fascinating history of a number of notable events in aerospace design. I do have a problem with this book. Vincenti seems rather dismissive of statistics. There are no statisticians in the index! There is, however, a compelling chapter on W F Durand and E P Lesley’s First World War propeller experiments. These were executed through quite a nice little factorial design. Durand’s trials and tribulations in managing the experimentation show that really the statistics is the easy bit. You can find the full report here. It is well worth reading.
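For readers unfamiliar with the term, a full factorial design simply runs every combination of the chosen factor levels. A trivial sketch, with invented factors rather than Durand and Lesley’s actual variables:

from itertools import product

# Invented factors and levels for illustration only.
factors = {
    "pitch_ratio": [0.5, 0.7, 0.9],
    "blade_width": ["narrow", "wide"],
    "tip_shape": ["round", "square"],
}
runs = list(product(*factors.values()))
print(f"{len(runs)} runs")               # 3 x 2 x 2 = 12 combinations
for run in runs:
    print(dict(zip(factors, run)))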

Vincenti is rather dismissive of Durand’s statistical skill and relegates it to a footnote. He doesn’t really acknowledge Durand’s methodological sophistication. The truly frustrating thing about the book is the difficulty in drawing generalised conclusions that answer the question in the title. However, Vincenti does come up with the suggestion that “parameter variation” in the broadest sense is a key part of the engineering learning process. I think it’s a disappointing takeaway as the descriptive part of the book is much richer than the conclusion suggests. Perhaps I will come back to this.

One of Box’s key insights was engineers’ need for immediacy and sequentiality in the parameter variation process (“Statistics as a Catalyst to Learning by Scientific Method Part II – A Discussion”, Journal of Quality Technology, 31(1), 1999, pp. 16-29).

Psychologist Daniel Kahneman has described two ways of thinking that typify human decision making. System 1 is instinctive, fluent, heuristic and integrated with the experience base. System 1 is overconfident and often leads us astray. System 2 employs reflective, considered analysis. It can, when properly guided by statistical theory, guard against the hazards of System 1. Problems such as “What factors determine this process output?” are difficult. Kahneman observes that often, when confronted with difficult problems, System 1 substitutes a simpler problem such as “What factors are we currently relying on to control this process?”. Experts think they are answering the first question when they are in fact answering the second. Box’s requirement for immediacy allows engineers to exploit their, sometimes misleading, experience base while subjecting it to a rigorous experimental test in a rapid and efficient manner.

Experimental results feed into System 2 thinking. However, the human mind is still much too confident in adopting explanations that are in reality merely plausible rather than probable. The requirement of sequentiality allows analysis of those explanations in a rich and diverse context that puts them to a rigorous test.
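Here is my own toy illustration of what immediacy and sequentiality can look like in code, not anything from Box’s paper: start with a small factorial plus centre points that can be run and analysed quickly, and only commit to the extra axial runs of a second-order design if the centre points suggest curvature.

import numpy as np

rng = np.random.default_rng(1)

def run_process(x1, x2):
    # Hypothetical stand-in; in practice each call is a real experimental run.
    return 20 + 2*x1 + x2 - 1.5*x1**2 + rng.normal(0, 0.1)

# Stage 1: 2^2 factorial plus centre points -- quick to run and analyse.
stage1 = [(-1, -1), (1, -1), (-1, 1), (1, 1), (0, 0), (0, 0), (0, 0)]
y1 = np.array([run_process(*pt) for pt in stage1])

factorial_mean = y1[:4].mean()
centre_mean = y1[4:].mean()
curvature = centre_mean - factorial_mean   # crude check for a pure quadratic effect

if abs(curvature) > 0.5:                   # threshold invented for the sketch
    # Stage 2: augment with axial points, completing a central composite design
    # that can support a full second-order model.
    alpha = np.sqrt(2)
    stage2 = [(-alpha, 0), (alpha, 0), (0, -alpha), (0, alpha)]
    y2 = np.array([run_process(*pt) for pt in stage2])
    print("Curvature detected; augmented to a central composite design.")
else:
    print("A first-order model looks adequate for now.")

The point of the staging is that each round of runs is analysed before the next is committed to, so the experimenter’s explanations are tested rather than merely accumulated.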

One of the fascinations of engineering research is exploring the partially known. Jon Schmidt made the following remark about structural engineering but I think it applies to engineering in general. It certainly applied to mechanical engineering in my automotive days.

Structural engineering is the art of modelling materials we do not wholly understand into shapes we cannot precisely analyse so as to withstand forces we cannot properly assess in such a way that the public at large has no reason to suspect the extent of our ignorance.

The application of statistics, and in particular RSMs, to engineering gives us some of the best tools we have for decision making under uncertainty. Modern psychology has tended to confirm Box’s instincts, learned on the job, about the tools that best support human decision making and guard against its inadequacies. Box remains a role model in developing strategies for operational excellence.