Do I have to be a scientist to assess food safety?

I saw this BBC item on the web before Christmas: Why are we more scared of raw egg than reheated rice? Just after Christmas seemed like a good time to blog about food safety. Actually, the link I followed asked Are some foods more dangerous than others? A question that has a really easy answer.

However, understanding the characteristic risks of various foods, and how to prepare them most safely, is less simple. Risk theorist John Adams draws a distinction between risks that are inherent, obvious and readily identified, and risks that can only be perceived with the help of science. Food risks fall into the latter category. As far as I can see, “folk wisdom” is no reliable guide here, even partially. The BBC article refers to risks from rice, pasta and salad vegetables that are far from obvious. At the same time, in the UK at least, the risk from raw eggs is very small.

Ironically, raw eggs are one food that springs readily to British people’s minds when food risk is raised, largely because of the folk memory of a high-profile but ill-thought-out declaration by a government minister in the 1980s. This is an example of what Amos Tversky and Daniel Kahneman called the availability heuristic: If you can think of it, it must be important.

Food safety is an environment where an individual is best advised to follow the advice of scientists. We commonly receive that advice filtered, even if only for the sake of accessibility, through government agencies. That takes us back to the issue of trust in bureaucracy, on which I have blogged before.

I wonder whether governments are in the best position to provide such advice. It is food suppliers who suffer from the public’s misallocated fears. The egg fiasco of the 1980s had a catastrophic effect on UK egg sales. All food suppliers have an interest in a market characterised by a perception that the products are safe. The food industry is also likely to be in the best position to know what is best practice, to improve such practice, to know how to communicate it to their customers, to tailor it to their products and to provide the effective behavioural “nudges” that promote safe handling. Consumers are likely to be cynical about governments, “one size fits all” advice and cycles of academic meta-analysis.

I think there are also lessons here for organisations. Some risks are assessed on the basis of scientific analysis. It is important that the prestige of that origin is communicated to all staff who will be involved in working with risk. The danger for any organisation is that an individual employee might make a reassessment based on local data and their own self-serving emotional response. As I have blogged before, some individuals have particular difficulty in aligning themselves with the wider organisation.

Of course, individuals must also be equipped with the means of detecting when the assumptions behind the science have been violated and initiating an agile escalation so that employee, customer and organisation can be protected while a reassessment is conducted. Social media provide new ways of sharing experience. I note from the BBC article that, in the UK at least, there is no real data on the origins of food poisoning outbreaks.

So the short answer to the question at the head of this blog still turns out to be “yes”. There are some things where we simply have to rely on science if we want to look after ourselves, our families and our employees.

But even scientists are limited by their own bounded rationality. Science is a work in progress. Using that science itself as a background against which to look for novel phenomena and neglected residual effects leverages that original risk analysis into a key tool in managing, improving and growing a business.

Richard Dawkins champions intelligent design (for business processes)

Richard Dawkins has recently had a couple of bad customer experiences. In each he was confronted with a system that seemed to him indifferent to his customer feedback. I sympathise with him on one matter but not the other. The two incidents do, in my mind, elucidate some important features of process discipline.

In the first, Dawkins spent a frustrating spell ordering a statement from his bank over the internet. He wanted to tell the bank about his experience and offer some suggestions for improvement, but he couldn’t find any means of channelling and communicating his feedback.

Embedding a business process in software will impose a rigid discipline on its operation. However, process discipline is not the same thing as process petrification. The design assumptions of any process include, or should include, the predicted range and variety of situations that the process is anticipated to encounter. We know that the bounded rationality of the designers will blind them to some of the situations that the process will subsequently confront in real world operation. There is no shame in that but the necessary adjunct is that, while the process is operated diligently as designed, data is accumulated on its performance and, in particular, on the customer’s experience. Once an economically opportune moment arrives (I have glossed over quite a bit there) the data can be reviewed, design assumptions challenged and redesign evaluated. Following redesign the process then embarks on another period of boring operation. The “boring” bit is essential to success. Perhaps I should say “mindful” rather than “boring” though I fear that does not really work with software.
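
To make that concrete, here is a minimal sketch in Python of what such a process might look like when instrumented for that later review. The function and field names are my own invention, purely for illustration; they are not anything the bank actually runs.

from datetime import datetime, timezone

# Hypothetical sketch: the process itself runs exactly as designed, while every
# operation, and any feedback the customer offers, is recorded for the periodic
# offline review at which the design assumptions are challenged.
operation_log = []

def issue_statement(account_id, period):
    """The designed process, operated diligently with no ad hoc variation."""
    statement = {"account": account_id, "period": period}  # stand-in for the real work
    operation_log.append({
        "when": datetime.now(timezone.utc).isoformat(),
        "operation": "issue_statement",
        "inputs": {"account": account_id, "period": period},
    })
    return statement

def record_feedback(comment):
    """The channel Dawkins could not find: capture the voice of the customer."""
    operation_log.append({
        "when": datetime.now(timezone.utc).isoformat(),
        "operation": "customer_feedback",
        "comment": comment,
    })

# When the economically opportune moment arrives, operation_log is reviewed
# offline, the design assumptions are challenged and a redesign is evaluated.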

Dawkins’ bank have missed an opportunity to listen to the voice of the customer. That weakens their competitive position. Ignorance cannot promote competitiveness. Any organisation that is not continually improving every process for planning, production and service (pace W Edwards Deming) faces the inevitable fact that its competitors will ultimately make such products and services obsolete. As Dawkins himself would appreciate, survival is not compulsory.

Dawkins’ second complaint was that security guards at a UK airport would not allow him to take a small jar of honey onto his flight because of a prohibition on liquids in the passenger cabin. Dawkins felt that the security guard should have displayed “common sense” and allowed it on board contrary to the black letter of the regulations. Dawkins protests against “rule-happy officials” and “bureaucratically imposed vexation”. Dawkins displays another failure of trust in bureaucracy. He simply would not believe that other people had studied the matter and come to a settled conclusion to protect his safety. It can hardly have been for the airport’s convenience. Dawkins was more persuaded by something he had read on the internet. He fell into the trap of thinking that What you see is all there is. I fear that Dawkins betrays his affinities with the cyclist on the railway crossing.

When we give somebody a process to operate we legitimately expect them to do so diligently and with self discipline. The risk of an operator departing from, adjusting or amending a process on the basis of novel local information is that, within the scope of the resources they have for taking that decision, there is no way of reliably incorporating the totality of assumptions and data on which the process design was predicated. Even were all the data available, when Dawkins talks of “common sense” he is demanding what Daniel Kahneman called System 2 thinking. Whenever we demand System 2 thinking ex tempore we are more likely to get System 1, and it is unlikely to perform effectively. The rationality of an individual operator in that moment is almost certainly more tightly bounded than that of the process designers.

In this particular case, any susceptibility of a security guard to depart from process would be exactly the behaviour that a terrorist might seek to exploit once aware of it.

Further, departures from process will have effects on the organisational system, upstream, downstream and collateral. Those related processes themselves rely on the operator’s predictable compliance. The consequence of ill discipline can be far reaching and unanticipated.

That is not to say that the security process was beyond improvement. In an effective process-oriented organisation, operating the process would be only one part of the security guard’s job. Part of the bargain for agreeing to the boring/mindful diligent operation of the process is that part of work time is spent improving the process. That is something done offline, with colleagues, with the input of other parts of the organisation and with recognition of all the data, including the voice of the customer.

Had he exercised the “common sense” Dawkins demanded, the security guard would have risked disciplinary action by his employers for serious misconduct. To some people, threats of sanctions appear at odds with engendering trust in an organisation’s process design and decision making. However, when we tell operators that something is important then fail to sanction others who ignore the process, we undermine the basis of the bond of trust with those that accepted our word and complied. Trust in the bureaucracy and sanctions for non-compliance are complementary elements of fostering process discipline. Both are essential.

Trust in bureaucracy I – the Milgram experiments

I have recently been reading Gina Perry’s book Behind the Shock Machine which analyses, criticises and re-assesses the “obedience” experiments of psychologist Stanley Milgram performed in the early 1960s. For the uninitiated there is a brief description of the experiments on Dr Perry’s website. You can find a video of the experiments here.

The experiments have often been cited as evidence for a constitutional human bias towards compliance in the face of authority. From that interpretation has grown a doctrine that the atrocities of war and of despotism are enabled by the common man’s (sic) unresisting obedience to even a nominal superior, and further that inherent cruelty is eager to express itself under the pretext of an order.

Perry mounts a detailed challenge to the simplicity of that view. In particular, she reveals how Milgram piloted his experiments and fine-tuned them so that they would produce the most striking obedient behaviour. The experiments took place within the context of academic research. The experimenter did everything to hold himself out as the representative of an overwhelmingly persuasive body of scientific knowledge. At every stage the experimenter reassured the subject and urged them to proceed. Given this real pressure applied to the experimental subjects, even a 65% compliance rate was hardly dramatic. Most interestingly, the actual reaction of the subjects to their experience was complex and ambiguous. It was far from the conventional view of the cathartic release of suppressed violence facilitated by a directive from a figure of superficial authority. Erich Fromm made some similar points about the experiments in his 1973 book The Anatomy of Human Destructiveness.

What interests me about the whole affair is its relevance to an issue which I have raised before on this blog: trust in bureaucracy. Max Weber was one of the first sociologists to describe how modern societies and organisations rely on a bureaucracy, an administrative policy-making group, to maintain the operation of complex dynamic systems. Studies of engineering and science as bureaucratic professions include Diane Vaughan’s The Challenger Launch Decision.

The majority of Milgram’s subjects certainly trusted the bureaucracy represented by the experimenter, even in the face of their own fears that they were doing harm. This is a stark contrast to some failures of such trust that I have blogged about here. By their mistrust, the cyclist on the railway crossing and the parents who rejected the MMR vaccination placed themselves and others in genuine peril. These were people who had, as far as I have been able to discover, no compelling evidence that the engineers who designed the railway crossing or the scientists who had tested the MMR vaccine might act against their best interests.

So we have a paradox. The majority of Milgram’s subjects ignored their own compelling fears and trusted authority. The cyclist and the parents recklessly ignored or actively mistrusted authority without any well-developed alternative world view. Whatever our discomfort with Milgram’s demonstrations of obedience, we feel no happier with the cyclist’s and parents’ disobedience. Prof Jerry M Burger partially repeated Milgram’s experiments in 2007. He is quoted by Perry as saying:

It’s not as clear cut as it seems from the outside. When you’re in that situation, wondering, should I continue or should I not, there are reasons to do both. What you do have is an expert in the room who knows all about this study and presumably has been through this many times before with many participants, and he’s telling you, there’s nothing wrong. The reasonable, rational thing to do is to listen to the guy who’s the expert when you’re not sure what to do.

Organisations depend on a workforce aligned around trust in that organisation’s policy and decision making machinery. Even in the least hierarchical of organisations, not everybody gets involved in every decision. Whether it’s the decision of a co-worker with an exotic expertise or the policy of a superior in the hierarchy, compliance and process discipline will succeed or fail on the basis of trust.

The “trust” that Milgram’s subjects showed towards the experimenter was manufactured, and Perry discusses how close the experiment ran to the limits of acceptable ethical standards.

Organisations cannot rely on such manufactured “trust”. Breakdown of trust among employees is a major enterprise risk for most organisations. The trust of customers is essential to reputation. A key question in all decision making is whether the outcome will foster trust or destroy it.

The cyclist on the railway crossing – a total failure of risk perception

This is a shocking video. It shows a cyclist wholly disregarding warnings and safety barriers at a railway crossing in the UK. She evaded death, and the possible derailment of the train, by the thinnest of margins.

In my mind this raises fundamental questions, not only about risk perception, but also about how we can expect individuals to behave in systems not of their own designing. Such systems, of course, include organisations.

I was always intrigued by John Adams’ anthropological taxonomy of attitudes to risk (taken from his 1995 book Risk).

[Figure: John Adams’ taxonomy of attitudes to risk]

Adams identifies four attitudes to risk found at large. Each is entirely self-consistent within its own terms. The egalitarian believes that human and natural systems inhabit a precarious equilibrium. Any departure from the sensitive balance will propel the system towards catastrophe. However, the individualist believes the converse, that systems are in general self-correcting. Any disturbance away from repose will be self-limiting and the system will adjust itself back to equilibrium. The hierarchist agrees with the individualist up to a point but only so long as any disturbance remains within scientifically drawn limits. Outside that lies catastrophe. The fatalist believes that outcomes are inherently uncontrollable and indifferent to individual ambition. Worrying about outcomes is not the right criterion for deciding behaviour.

Without an opportunity to interview the cyclist it is difficult to analyse what she was up to. Even then, I think that it would be difficult for her recollection to escape distortion by some post hoc and post-traumatic rationalisation. I think Adams provides some key insights but there is a whole ecology of thoughts that might be interacting here.

Was the cyclist a fatalist, resigned to the belief that no matter how she behaved on the road, injury, should it come, would be capricious and arbitrary? Time and chance happeneth to them all.

Was she an individualist confident that the crossing had been designed with her safety assured and that no mindfulness on her part was essential to its effectiveness? That would be consistent with Adams’ theory of risk homeostasis. Whenever a process is made safer on our behalf, we have a tendency to increase our own risk-taking so that the overall risk is the same as before. Adams cites the example of seatbelts in motor cars leading to more aggressive driving.

Did the cyclist perceive any risk at all? Wagenaar and Groeneweg (International Journal of Man-Machine Studies, 1987, 27, 587) reviewed something like 100 shipping accidents and came to the conclusion that:

Accidents do not occur because people gamble and lose, they occur because people do not believe that the accident that is about to occur is at all possible.

Why did the cyclist not trust that the bells, flashing lights and barriers had been provided for her own safety by people who had thought about this a lot? The key word here is “trust” and I have blogged about that elsewhere. I feel that there is an emerging theme of trust in bureaucracy. Engineers are not used to mistrust, other than from accountants. I fear that we sometimes assume too easily that anti-establishment instincts are constrained by the instinct for self preservation.

However we analyse it, the cyclist suffered from a near-fatal failure of imagination. Imagination is central to risk management: the richer the spectrum of futures anticipated, the more effectively risk management can be designed into a business system. To the extent that our imagination is limited, we are hostage to our agility in responding to signals in the data. That is what the cyclist discovered when she belatedly spotted the train.

Economist G L S Shackle made this point repeatedly, especially in his last book Imagination and the Nature of Choice (1979). Risk management is about getting better at imagining future scenarios but still being able to spot when an unanticipated scenario has emerged, and being excellent at responding efficiently and timeously. That is the big picture of risk identification and risk awareness.

That then leads to the question of how we manage the risks we can see. A fundamental question for any organisation is: what sort of risk takers inhabit its ranks? Risk taking is integral to pursuing an enterprise. Each organisation has its own risk profile. It is critical that individual decision makers are aligned to it. Some will have an instinctive affinity for the corporate philosophy. Others can be aligned through regulation, training and leadership. Some others will not respond to guidance. It is the latter category who must only be placed in positions where the organisation knows that it can benefit from their personal risk appetite.

If you think this an isolated incident and that the cyclist doesn’t work for you, you can see more railway crossing incidents here.

Trust in data – I

I was listening to the BBC’s election coverage on 2 May (2013) when Nick Robinson announced that UKIP supporters were five times more likely than other voters to believe that the MMR vaccine was dangerous.

I had a search on the web. The following graphic had appeared on Mike Smithson’s PoliticalBetting blog on 21 April 2013.

[Figure: bar chart of attitudes to MMR vaccine safety by voting intention]

It’s not an attractive bar chart. The bars are different colours. There is a “mean” bar that tends to make the variation look smaller than it is and makes the UKIP bar (next to it) look more extreme. I was, however, intrigued, so I had a look for the original data, which had come from a YouGov survey of 1,765 respondents. You can find the data here.

Here is a summary of the salient points of the data from the YouGov website in a table which I think is less distracting than the graphic.

Voting intention      Con.   Lab.   Lib. Dem.   UKIP
No. of respondents    417    518    142         212
MMR safe (%)          99     85     84          72
MMR unsafe (%)        1      3      12          28
Don’t know (%)        0      12     3           0

My first question was: Where had Nick Robinson and Mike Smithson got their numbers from? It is possible that there was another survey I have not found. It is also possible that I am being thick. In any event, the YouGov data raises some interesting questions. This is an exploratory data analysis exercise. We are looking for interesting theories. I don’t think there is any doubt that there is a signal in this data. How do we interpret it? There does look to be some relationship between voting intention and attitude to public safety data.
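
Out of curiosity, the arithmetic can be re-run directly from the table above. The short Python sketch below uses only the figures quoted there, so it inherits any transcription error in them; none of it comes from the original survey file.

respondents = {"Con": 417, "Lab": 518, "Lib Dem": 142, "UKIP": 212}
pct_unsafe = {"Con": 1, "Lab": 3, "Lib Dem": 12, "UKIP": 28}

# Estimated number of "MMR unsafe" respondents in each group.
unsafe = {p: respondents[p] * pct_unsafe[p] / 100 for p in respondents}

# UKIP compared with the weighted mean of all four groups ...
overall = 100 * sum(unsafe.values()) / sum(respondents.values())
# ... and with the other three groups combined.
others = [p for p in respondents if p != "UKIP"]
others_pct = 100 * sum(unsafe[p] for p in others) / sum(respondents[p] for p in others)

print(f"Weighted 'unsafe', all four groups: {overall:.1f}%")
print(f"Weighted 'unsafe', non-UKIP: {others_pct:.1f}%")
print(f"UKIP vs overall: {pct_unsafe['UKIP'] / overall:.1f}x")
print(f"UKIP vs others: {pct_unsafe['UKIP'] / others_pct:.1f}x")

On these figures UKIP comes out at a little under four times the overall weighted proportion answering “unsafe”, and roughly eight times the proportion among the other three parties combined. Neither comparison obviously yields “five times”, which only sharpens the question of where that number came from.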

Should anyone be tempted to sneer at people with political views other than their own, it is worth remembering that it is unlikely that anyone surveyed had scrutinised any of the published scientific research on the topic. All will have digested it, most probably at third hand, through the press, the internet or the water cooler. They may not have any clear idea of the provenance of the assurances as to the vaccination’s safety. They may not have clearly identified issues as to whether what they had absorbed was a purportedly independent scientific study or a governmental policy statement that sought to rely on the science. I suspect that most of my readers have given it no more thought.

The mental process behind the answers probably wouldn’t withstand much analysis. This would be part of Kahneman’s System 1 thinking. However, the question of how such heuristics become established is an interesting one. I suspect there is a factor here that can be labelled “trust in data”.

Trust in data is an issue we all encounter, in business and in life. How do we know when we can trust data?

A starting point for many in this debate is the often-cited observation of Brian Joiner that, when presented with a numerical target, a manager has three options: manage the system so as to achieve the target; distort the system so that the target is achieved, but at the cost of performance elsewhere (possibly not on the dashboard); or simply distort the data. This observation, no doubt true, is then cited in support of the general proposition that management by numerical target is at best ineffective and at worst counterproductive. John Seddon is a particular advocate of the view that, whatever benefits may flow from management by target (and they are seldom championed with any great energy), they are outweighed by the inevitable corruption of the organisation’s data generation and reporting.

It is an unhappy view. One immediate objection is that the broader system cannot operate without targets. Unless the machine part’s diameter is between 49.99 and 50.01 mm it will not fit. Unless chlorine concentrations are below the safe limit, swimmers risk being poisoned. Unless demand for working capital is cut by 10% we will face the consequences of insolvency. Advocates of the target-free world respond that those matters can be characterised as the legitimate voice of the customer/business. It is only arbitrary targets that are corrosive.

I am not persuaded that the legitimate/arbitrary distinction is a real one, nor do I see how the distinction motivates two different kinds of behaviour. I will blog more about this later. Leadership’s urgent task is to ensure that all managers have the tools to measure present reality and work to improve it. Without knowing how much improvement is essential, a manager cannot make rational decisions about the allocation of resources. In that context, when the correct management control is exercised, improving the system is easier than cheating. I shall blog about goal deployment and Hoshin Kanri on another occasion.

Trust in data is just a factor of trust in general. In his popular book on evolutionary psychology and economics, The Origins of Virtue, Matt Ridley observes the following.

Trust is as vital a form of social capital as money is a form of actual capital. … Trust, like money, can be lent (‘I trust you because I trust the person who told me he trusts you’), and can be risked, hoarded or squandered. It pays dividends in the currency of more trust.

Within an organisation, trust in data is something for everybody to work on building collaboratively under diligent leadership. As to the public sphere, trust in data is related to trust in politicians and that may be a bigger problem to solve. It is also a salutary warning as to what happens when there is a failure of trust in leadership.