The elephant in the room – proving a negative in litigation

An apocryphal story has it that Ludwig Wittgenstein challenged fellow philosopher Bertrand Russell to prove that there wasn’t an elephant in the room in which they were sharing afternoon tea.

It’s a fairly self-indulgent challenge between intellectuals but it does highlight a feeling we’ve all had. It’s easy to prove that there’s an elephant there, if there is, by pointing to it. Proving that something isn’t there is more problematic. You have to point to everywhere that it isn’t.

Former Shell Legal Director Peter Rees QC recently observed that litigation and compliance are the most significant risks currently facing corporations. In litigation, defendants sometimes find themselves in the position of having to prove that something didn’t happen against an allegation from a claimant that it did. That always puts the defendant at a disadvantage. The claimant will give evidence of what they say happened. What evidence can the defendant give?

This asymmetry will be all the more keenly felt in England and Wales following the recent Jackson reforms to personal injury litigation. The former control mechanisms have been swept away and the Ministry of Justice believes that this is likely to result in more claims against businesses. Claims that would have previously been screened out will now be run because of the economics of the restructured claims environment. All my instructing solicitors are now confirming this to me.

Ironically, the instrument of this upward pressure on claims risk is Qualified One-way Costs Shifting (QOCS). QOCS also pretty much prevents a business that successfully defends a claim from recovering its legal costs from the unsuccessful claimant. In any event, legal costs are likely to be dwarfed by the irrecoverable cost to the business of having key people distracted from the value-creating process.

All that means that businesses need to get better at stifling spurious claims at the outset. The twin keys to that are process discipline and record keeping.

It always saddens me when I have to advise businesses to settle doubtful claims simply because their record keeping left them unable to rebut an allegation.

There are three principal elements to staying ahead of the game:

  • Ensuring that risk assessment identifies where record keeping would support the organisation’s narrative of prudent operation and regulatory compliance;
  • Implementing a system of process surveillance to foster process discipline; and
  • Building a document retention system that ensures that such a record can be interrogated to provide a compelling picture of conscientious management and risk mitigation.

A well designed document retention system is a key part of managing risks.
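To make that concrete, here is a minimal sketch of the kind of interrogable record such a system rests on. It is illustrative only, assuming a simple SQLite store; the table, columns and retention rules are all hypothetical, and any real implementation would be driven by the organisation’s own risk assessment.

```python
import sqlite3

# Minimal sketch of a retention store (all names hypothetical).
# Each row records what was done, when, by which process, and how
# long the supporting document must be kept.
conn = sqlite3.connect("retention.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS records (
        id INTEGER PRIMARY KEY,
        created_at TEXT NOT NULL,   -- ISO 8601 timestamp of the activity
        process TEXT NOT NULL,      -- the business process that produced it
        document TEXT NOT NULL,     -- pointer to the retained document
        retain_until TEXT NOT NULL  -- end of the retention period
    )
""")

def evidence_for(process: str, start: str, end: str):
    """A dated, ordered answer to 'what did we check, and when?'
    for a given process over a given period."""
    return conn.execute(
        "SELECT created_at, document FROM records "
        "WHERE process = ? AND created_at BETWEEN ? AND ? "
        "ORDER BY created_at",
        (process, start, end),
    ).fetchall()
```

The value lies in the query: when an allegation arrives, the business is not left trying to prove a negative; it produces a contemporaneous record of prudent operation.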

I find it instructive and encouraging that in Divya & Ors v Toyo Tire and Rubber Co. Ltd & Ors, Toyo Tire managed to persuade the court that it was very unlikely that a road traffic accident could have been caused by a manufacturing fault in their tyre.

I do not advocate rigorous process management as a net cost motivated by defensive operations aimed at providing a patina of compliance. That is not what succeeded for Toyo Tire. Rigorous process management reduces waste, improves reputation with customers and streamlines cashflow. Its potency in litigation is a bonus.

Deconstructing Deming I – Constancy of Purpose

My 20 December 2013 post on W Edwards Deming attracted quite a lot of interest. The response inspired me to take a detailed look at his ideas 20 years on, starting with his 14 Points.

Deming’s 14 Points for Management are his best-remembered takeaway. Deming put them forward as representative of the principles adopted by Japanese industry in its rise from 1950 to the prestigious position it held in manufacturing at the beginning of the 1980s.

Point 1. Create constancy of purpose toward improvement of product and service, with the aim to become competitive and to stay in business, and to provide jobs.

In his 1983 elaboration of the point in Out of the Crisis, Deming explained what he meant. Managing a business was not only about exploiting existing products and processes to generate a stream of profits. It was also about re-inventing those products and processes, innovating and developing to retain and capture market share. Deming feared that management focused too much on short term profits from existing products and services, and that an effort of leadership was needed to reorient attention and resource towards design and development. The “improvement” Deming was referring to comes through design and redesign, not simply the incremental improvement of existing value streams. Critically, Deming saw design and redesign as a key business process that should itself be the target of incremental continual improvement. Design and redesign was not an ad hoc project initiated by some rare, once-in-a-generation sea-change in the market or motivated by a startling idea from an employee. It was a routine and “constant” part of the business of a business.

Some of Deming’s latter-day followers deprecate the radical redesign of processes in approaches such as Business Process Re-engineering, promoting instead the incremental improvement of existing processes by those who work in them. That is exactly the approach that Deming was warning against in Point 1.

It is worth recalling the economic and geopolitical climate within which Deming put forward this principle. During the early 1980s, the US and Western Europe suffered a significant recession, their populations beset with the dual evils of unemployment and inflation. The economic insecurities were aggravated by social unrest in the West and the intensification of the Cold War.

In 1980 Robert Hayes and William Abernathy, academics at the Harvard Business School, attacked US management in their seminal paper Managing Our Way to Economic Decline. They found that fewer and fewer executives came from engineering and operations backgrounds, and increasingly from law and finance. Such managers, they said, had little understanding of the mechanics of the businesses they ran or the markets in which they competed. That in turn led executives to pursue short term profits from existing value streams. These were easy to measure and predict on the visible accounts. However, managers were allegedly ill placed to make informed decisions about the new products or services that would determine future profits. The uncertainties of such decisions were unknown and unknowable other than to a discipline specialist. Franklin Fisher characterised matters in this way (1989, “Games economists play: a noncooperative view”, Rand Journal of Economics 20, 113):

Bright young theorists tend to think of every problem in game theoretic terms, including problems that are easier to think of in other forms.

This all appeared in contrast to Japanese manufacturing industry and in particular Toyota. By 1980, Japanese manufactured goods had come increasingly to dominate global markets. Japanese success was perceived as the (Lawrence Freedman, 2013, Strategy: A History, p531):

… triumph of a focussed, patient, coherent, consensual culture, a reflection of dedicated operational efficiency, or else a combination of the two.

Certainly in my own automotive industry days, my employer had come to see its most successful products as mere commodities. They belatedly realised that, while they had been treating them as a simple income stream (admittedly one spent largely on unsuccessful attempts to develop radical new products), Japanese competitors had been filing dozens of patents each year making incremental improvements to design and function, threatening the company’s core revenues.

But did Deming choose the right target and, in any event, does the exhortation remain cogent? It feels in 2014 as though we have much more appetite for innovation, invention and product design than we had in 1983. Blogs extol the virtues of, and strategies for, entrepreneurship. Slogans such as “Fail early, fail fast, fail often” proliferate. It is not clear from this web activity whether innovation is being backed by capital. However, the very rate of technological change in society suggests that capital is backing novelty rather than simply engaging in the rent seeking that Hayes and Abernathy feared.

In 2007 Hayes reflected on his 1980 work. He felt that his views had become mainstream and uncontroversial, and been largely adopted in corporations. However, information and globalisation had created a new set of essentials to be addressed and to become part of the general competencies of a manager (“Managing Our Way… A Retrospective by Robert H. Hayes” Harvard Business Review, July-August 2007, 138-149).

I remain unpersuaded that there has been such a broadening in the skill set of managers. The game theorists, data scientists and economists seem to remain in the ascendancy. Whatever change in attitudes to design has taken place, it has happened against a background where CEOs still hop industries. There are other explanations for a lack of innovation. Daniel Ellsberg’s principle of ambiguity aversion predicts that quantifiable risks apparent from the visible accounts will tend to be preferred over ambiguous returns on future inventions, even by subject matter experts. Prevailing comparative advantages may point some corporations away from research. Further, capital flows were particularly difficult in the early 1980s recession. Liberalisation of markets and the rolling back of the state in the 1980s led to more efficient allocation of capital and coincided with a palpable increase in the volume, variety and quality of consumer goods available in the West. There is no guarantee against a failure of strategy. My automotive employer had not missed the importance of new product development, but they made a strategic mistake in allocating resources.

Further, psychologist Daniel Kahneman found evidence of a countervailing undue optimism about future business, referring to “entrepreneurial delusions” and “competition neglect”, two aspects of What you see is all there is (Thinking, Fast and Slow, 2011, Chapter 24).

In Notes from Toyota-Land: An American Engineer in Japan (2005), Robert Perrucci and Darius Mehri criticised Toyota’s approach to business. Ironically, Mehri contended that Toyota performed weakly in innovation and encouraged narrow professional skills. As it turned out, Japanese management did not prevent the stagnation of the Japanese economy that has lasted from 1991 to the present. Toyota itself went on to suffer serious reputational damage (Robert E. Cole, “What Really Happened to Toyota?”, MIT Sloan Management Review, Summer 2011).

So Deming and others were right to draw attention to Western underperformance in product design. However, I suspect that the adoption of a more design-led culture is largely due to macroeconomic forces rather than exhortations.

There is still much to learn, however, in balancing the opportunities apparent from visible accounts with the uncertainties of imagined future income streams.

I think there remains an important message, perhaps a Point 1 for the 21st Century.

There’s a problem bigger than the one you’re working on. Don’t ignore it!

Richard Dawkins champions intelligent design (for business processes)

Richard Dawkins has recently had a couple of bad customer experiences. In each he was confronted with a system that seemed to him indifferent to his customer feedback. I sympathise with him on one matter but not the other. The two incidents do, in my mind, elucidate some important features of process discipline.

In the first, Dawkins spent a frustrating spell ordering a statement from his bank over the internet. He wanted to tell the bank about his experience and offer some suggestions for improvement, but he couldn’t find any means of channelling and communicating his feedback.

Embedding a business process in software will impose a rigid discipline on its operation. However, process discipline is not the same thing as process petrification. The design assumptions of any process include, or should include, the predicted range and variety of situations that the process is anticipated to encounter. We know that the bounded rationality of the designers will blind them to some of the situations that the process will subsequently confront in real world operation. There is no shame in that but the necessary adjunct is that, while the process is operated diligently as designed, data is accumulated on its performance and, in particular, on the customer’s experience. Once an economically opportune moment arrives (I have glossed over quite a bit there) the data can be reviewed, design assumptions challenged and redesign evaluated. Following redesign the process then embarks on another period of boring operation. The “boring” bit is essential to success. Perhaps I should say “mindful” rather than “boring” though I fear that does not really work with software.
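As a sketch of what that looks like in practice, consider something like the following. All names are hypothetical and this is not any bank’s API; the point is that the process runs exactly as designed while every execution, and any customer feedback, is captured for later offline review.

```python
import time
from dataclasses import dataclass
from typing import Callable, Optional

# Hypothetical sketch: operate the process as designed, accumulate data.

@dataclass
class ExecutionRecord:
    step: str
    outcome: str
    feedback: Optional[str]  # the voice of the customer, if offered
    timestamp: float

LOG: list[ExecutionRecord] = []

def run_step(step: str, action: Callable[[], str],
             feedback: Optional[str] = None) -> str:
    """Execute a step exactly as designed; record what happened."""
    outcome = action()
    LOG.append(ExecutionRecord(step, outcome, feedback, time.time()))
    return outcome

def review(log: list[ExecutionRecord]) -> tuple[int, int]:
    """Offline, at the economically opportune moment: how often did
    customers volunteer feedback? This is the raw material for
    challenging the design assumptions."""
    complaints = [r for r in log if r.feedback]
    return len(complaints), len(log)
```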

Dawkins’ bank have missed an opportunity to listen to the voice of the customer. That weakens their competitive position. Ignorance cannot promote competitiveness. Any organisation that is not continually improving every process for planning, production and service (pace W Edwards Deming) faces the inevitable fact that its competitors will ultimately make such products and services obsolete. As Dawkins himself would appreciate, survival is not compulsory.

Dawkins’ second complaint was that security guards at a UK airport would not allow him to take a small jar of honey onto his flight because of the prohibition on liquids in the passenger cabin. Dawkins felt that the security guard should have displayed “common sense” and allowed it on board contrary to the black letter of the regulations. He protests against “rule-happy officials” and “bureaucratically imposed vexation”. This displays another failure of trust in bureaucracy. He simply would not believe that other people had studied the matter and come to a settled conclusion to protect his safety. It can hardly have been for the airport’s convenience. Dawkins was more persuaded by something he had read on the internet. He fell into the trap of thinking that What you see is all there is. I fear that Dawkins betrays his affinities with the cyclist on the railway crossing.

When we give somebody a process to operate we legitimately expect them to do so diligently and with self discipline. The risk of an operator departing from, adjusting or amending a process on the basis of novel local information is that, within the scope of the resources they have for taking that decision, there is no way of reliably incorporating the totality of assumptions and data on which the process design was predicated. Even were all the data available, when Dawkins talks of “common sense” he is demanding what Daniel Kahneman called System 2 thinking. Whenever we demand System 2 thinking ex tempore we are more likely to get System 1, and it is unlikely to perform effectively. The rationality of an individual operator in that moment is almost certainly more tightly bounded than that of the process designers.

In this particular case, any susceptibility of a security guard to depart from process would be exactly the behaviour that a terrorist might seek to exploit once aware of it.

Further, departures from process will have effects on the organisational system, upstream, downstream and collateral. Those related processes themselves rely on the operator’s predictable compliance. The consequences of ill discipline can be far-reaching and unanticipated.

That is not to say that the security process was beyond improvement. In an effective process-oriented organisation, operating the process would be only one part of the security guard’s job. Part of the bargain for agreeing to the boring/mindful diligent operation of the process is that part of work time is spent improving the process. That is something done offline, with colleagues, with the input of other parts of the organisation and with recognition of all the data including the voice of the customer.

Had he exercised the “common sense” Dawkins demanded, the security guard would have risked disciplinary action by his employers for serious misconduct. To some people, threats of sanctions appear at odds with engendering trust in an organisation’s process design and decision making. However, when we tell operators that something is important and then fail to sanction others who ignore the process, we undermine the bond of trust with those who accepted our word and complied. Trust in the bureaucracy and sanctions for non-compliance are complementary elements of fostering process discipline. Both are essential.

Trust in bureaucracy I – the Milgram experiments

I have recently been reading Gina Perry’s book Behind the Shock Machine which analyses, criticises and re-assesses the “obedience” experiments of psychologist Stanley Milgram performed in the early 1960s. For the uninitiated there is a brief description of the experiments on Dr Perry’s website. You can find a video of the experiments here.

The experiments have often been cited as evidence for a constitutional human bias towards compliance in the face of authority. From that interpretation has grown a doctrine that the atrocities of war and of despotism are enabled by the common man’s (sic) unresisting obedience to even a nominal superior, and further that inherent cruelty is eager to express itself under the pretext of an order.

Perry mounts a detailed challenge to the simplicity of that view. In particular, she reveals how Milgram piloted his experiments and fine-tuned them so that they would produce the most signal obedient behaviour. The experiments took place within the context of academic research. The experimenter did everything to hold himself out as the representative of an overwhelmingly persuasive body of scientific knowledge. At every stage the experimenter reassured the subjects and urged them to proceed. Given this real pressure applied to the experimental subjects, even a 65% compliance rate was hardly dramatic. Most interestingly, the actual reaction of the subjects to their experience was complex and ambiguous. It was far from the conventional view of the cathartic release of suppressed violence facilitated by a directive from a figure with a superficial authority. Erich Fromm made some similar points about the experiments in his 1973 book The Anatomy of Human Destructiveness.

What interests me about the whole affair is its relevance to an issue which I have raised before on this blog: trust in bureaucracy. Max Weber was one of the first sociologists to describe how modern societies and organisations rely on a bureaucracy, an administrative policy-making group, to maintain the operation of complex dynamic systems. Studies of engineering and science as bureaucratic professions include Diane Vaughan’s The Challenger Launch Decision.

The majority of Milgram’s subjects certainly trusted the bureaucracy represented by the experimenter, even in the face of their own fears that they were doing harm. This is a stark contrast to some failures of such trust that I have blogged about here. By their mistrust, the cyclist on the railway crossing and the parents who rejected the MMR vaccination placed themselves and others in genuine peril. These were people who had, as far as I have been able to discover, no compelling evidence that the engineers who designed the railway crossing or the scientists who had tested the MMR vaccine might act against their best interests.

So we have a paradox. The majority of Milgram’s subjects ignored their own compelling fears and trusted authority. The cyclist and the parents recklessly ignored or actively mistrusted authority without a well developed alternative world view. Whatever our discomfort with Milgram’s demonstrations of obedience, we feel no happier with the cyclist’s and parents’ disobedience. Prof Jerry M Burger partially repeated Milgram’s experiments in 2007. He is quoted by Perry as saying:

It’s not as clear cut as it seems from the outside. When you’re in that situation, wondering, should I continue or should I not, there are reasons to do both. What you do have is an expert in the room who knows all about this study and presumably has been through this many times before with many participants, and he’s telling you, there’s nothing wrong. The reasonable, rational thing to do is to listen to the guy who’s the expert when you’re not sure what to do.

Organisations depend on a workforce aligned around trust in that organisation’s policy and decision making machinery. Even in the least hierarchical of organisations, not everybody gets involved in every decision. Whether it’s the decision of a co-worker with an exotic expertise or the policy of a superior in the hierarchy, compliance and process discipline will succeed or fail on the basis of trust.

The “trust” that Milgram’s subjects showed towards the experimenter was manufactured, and Perry discusses how close the experiment came to breaching acceptable ethical standards.

Organisations cannot rely on such manufactured “trust”. Breakdown of trust among employees is a major enterprise risk for most organisations. The trust of customers is essential to reputation. A key question in all decision making is whether the outcome will foster trust or destroy it.

Managing a railway on historical data is like …

I was recently looking on the web for any news on the Galicia rail crash. I didn’t find anything current but came across this old item from The Guardian (London). It mentioned in passing that consortia tendering for a new high-speed railway in Brazil were excluded if they had been involved in the operation of a high-speed line that had had an accident in the previous five years.

Well, I don’t think that there is necessarily anything wrong with that in itself. But it is important to remember that a rail accident is not necessarily a Signal (sic). Rail accidents worldwide are often a manifestation of what W Edwards Deming called A stable system of trouble. In other words, a system that features only Noise but which cannot deliver the desired performance. An accident free record of five years is a fine thing but there is nothing about a stable system of trouble that says it can’t have long incident free periods.
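A toy simulation makes the point. Assume, purely for illustration, a stable accident process with a constant annual rate; nothing here models any real railway.

```python
import random

# Sketch: a "stable system of trouble" - a constant accident probability
# per year - still produces long incident-free runs by chance alone.
random.seed(1)
RATE = 0.3    # hypothetical probability of at least one accident in a year
YEARS = 200

history = [random.random() < RATE for _ in range(YEARS)]

# Longest run of consecutive accident-free years
longest = run = 0
for accident in history:
    run = 0 if accident else run + 1
    longest = max(longest, run)

print(longest)  # at this rate, clean runs of five-plus years are routine
```

At that rate, any given five-year window is accident free with probability 0.7^5, about 17%, so a five-year clean record, on its own, says little about whether the underlying system has changed.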

In order to turn that incident-free five years into evidence about likely future safety performance we also need hard evidence, statistical and qualitative, about the stability and predictability of the rail operator’s processes. Procurement managers are often much less adept at looking for, and at, this sort of data. In highly sophisticated industries such as automotive it is routine to demand capability data and evidence of process surveillance from a potential supplier. Without that, past performance is of no value whatever in predicting future results.
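For readers outside such industries, “capability data” typically means indices such as Cpk, which compare the spread of a stable process against its specification limits. A minimal sketch, with invented measurements and limits:

```python
from statistics import mean, stdev

# Illustrative only: measurements and specification limits are invented.
# Cpk is meaningless unless the process is first shown to be stable and
# predictable - exactly the evidence procurement should be demanding.
measurements = [9.8, 10.1, 9.9, 10.2, 10.0, 9.7, 10.1, 9.9, 10.0, 10.3]
LSL, USL = 9.0, 11.0  # hypothetical lower and upper specification limits

mu, sigma = mean(measurements), stdev(measurements)
cpk = min(USL - mu, mu - LSL) / (3 * sigma)
print(round(cpk, 2))  # suppliers are conventionally asked for Cpk >= 1.33
```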

Rearview

The cyclist on the railway crossing – a total failure of risk perception

This is a shocking video. It shows a cyclist wholly disregarding warnings and safety barriers at a railway crossing in the UK. She evaded death, and the possible derailment of the train, by the thinnest of margins.

In my mind this raises fundamental questions, not only about risk perception, but also about how we can expect individuals to behave in systems not of their own designing. Such systems, of course, include organisations.

I was always intrigued by John Adams’ anthropological taxonomy of attitudes to risk (taken from his 1995 book Risk).

[Figure: Adams’ taxonomy of attitudes to risk]

Adams identifies four attitudes to risk found at large. Each is entirely self-consistent within its own terms. The egalitarian believes that human and natural systems inhabit a precarious equilibrium. Any departure from the sensitive balance will propel the system towards catastrophe. However, the individualist believes the converse, that systems are in general self-correcting. Any disturbance away from repose will be self-limiting and the system will adjust itself back to equilibrium. The hierarchist agrees with the individualist up to a point but only so long as any disturbance remains within scientifically drawn limits. Outside that lies catastrophe. The fatalist believes that outcomes are inherently uncontrollable and indifferent to individual ambition. Worrying about outcomes is not the right criterion for deciding behaviour.

Without an opportunity to interview the cyclist it is difficult to analyse what she was up to. Even then, I think that it would be difficult for her recollection to escape distortion by some post hoc and post-traumatic rationalisation. I think Adams provides some key insights but there is a whole ecology of thoughts that might be interacting here.

Was the cyclist a fatalist, resigned to the belief that no matter how she behaved on the road, injury, should it come, would be capricious and arbitrary? Time and chance happeneth to them all.

Was she an individualist, confident that the crossing had been designed to assure her safety and that no mindfulness on her part was essential to its effectiveness? That would be consistent with the theory of risk homeostasis that Adams describes. Whenever a process is made safer on our behalf, we have a tendency to increase our own risk-taking so that the overall risk is the same as before. Adams cites the example of seatbelts in motor cars leading to more aggressive driving.

Did the cyclist perceive any risk at all? Wagenaar and Groeneweg (International Journal of Man-Machine Studies, 1987, 27, 587) reviewed something like 100 shipping accidents and came to the conclusion that:

Accidents do not occur because people gamble and lose, they occur because people do not believe that the accident that is about to occur is at all possible.

Why did the cyclist not trust that the bells, flashing lights and barriers had been provided for her own safety by people who had thought about this a lot? The key word here is “trust” and I have blogged about that elsewhere. I feel that there is an emerging theme of trust in bureaucracy. Engineers are not used to mistrust, other than from accountants. I fear that we sometimes assume too easily that anti-establishment instincts are constrained by the instinct for self preservation.

However we analyse it, the cyclist suffered from a near fatal failure of imagination. Imagination is central to risk management: the richer the spectrum of futures anticipated, the more effectively risk management can be designed into a business system. To the extent that our imagination is limited, we are hostage to our agility in responding to signals in the data. That is what the cyclist discovered when she belatedly spotted the train.

Economist G L S Shackle made this point repeatedly, especially in his last book Imagination and the Nature of Choice (1979). Risk management is about getting better at imagining future scenarios but still being able to spot when an unanticipated scenario has emerged, and being excellent at responding efficiently and timeously. That is the big picture of risk identification and risk awareness.

That then leads to the question of how we manage the risks we can see. A fundamental question for any organisation is what sort of risk takers inhabit its ranks. Risk taking is integral to pursuing an enterprise. Each organisation has its own risk profile and it is critical that individual decision makers are aligned to it. Some will have an instinctive affinity for the corporate philosophy. Others can be aligned through regulation, training and leadership. Some others will not respond to guidance. It is the latter category who must only be placed in positions where the organisation knows that it can benefit from their personal risk appetite.

If you think this an isolated incident and that the cyclist doesn’t work for you, you can see more railway crossing incidents here.

Music is silver but …

The other day I came across a report on the BBC website that non-expert listeners could pick out winners of piano competitions more reliably when presented with silent performance videos than when exposed to sound alone. In the latter case they performed no better than chance.

The report was based on the work of Chia-Jung Tsay at University College London, in a paper entitled Sight over sound in the judgment of music performance.

The news report immediately leads us to suspect that the expert evaluating a musical performance is not in fact analysing and weighing auditory complexity and aesthetics but instead falling under the subliminal influence of the proxy data of the artist’s demeanour and theatrics.

That is perhaps unsurprising. We want to believe, as does the expert critic, that performance evaluation is a reflective, analytical and holistic enterprise, demanding decades of exposure to subtle shades of interpretation and developing skills of discrimination by engagement with the ascendant generation of experts. This is what Daniel Kahneman calls a System 2 task. However, a wealth of psychological study shows only too well that System 2 is easily fatigued and distracted. When we believe we are thinking in System 2, we are all too often loafing in System 1 and using simplistic learned heuristics as a substitute. It is easy to imagine that the visual proxy data might be such a heuristic, a ready reckoner that provides a plausible result in a wide variety of commonly encountered situations.

These behaviours are difficult to identify, even for the most mindful individual. Kahneman notes:

… all of us live much of our lives guided by the impressions of System 1 – and we do not know the source of these impressions. How do you know that a statement is true? If it is strongly linked by logic or association to other beliefs or preferences you hold, or comes from a source you trust and like, you will feel a sense of cognitive ease. The trouble is that there may be other causes for your feeling of ease … and you have no simple way of tracing your feelings to their source

Thinking, Fast and Slow, p64

The problem is that what Kahneman describes is exactly what I was doing in finding my biases confirmed by this press report. I have had a superficial look at the statistics in the study and I am now less persuaded than when I read the press item. I may blog later about the difficulties I had in interpreting the analysis. Really, this is quite a tentative and suggestive study on a very limited frame. I would certainly like to see more inter-laboratory studies in psychology. The study is open to multiple interpretations and any individual will probably have difficulty making an exhaustive list. There is always a danger of falling into the trap of What You See Is All There Is (WYSIATI).
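By way of illustration of the kind of arithmetic involved: with a forced choice among, say, three finalists, chance performance is one in three, and an exact binomial calculation shows how weakly a modest success rate discriminates from chance. The numbers below are invented, not taken from the paper.

```python
from math import comb

def p_value_at_least(successes: int, n: int, p: float = 1 / 3) -> float:
    """One-sided P(X >= successes) when every pick is pure chance."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(successes, n + 1))

# Hypothetical: 40 correct picks out of 100 three-way choices.
print(round(p_value_at_least(40, 100), 3))  # close to 0.1: suggestive, not compelling
```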

That notwithstanding, even anecdotally, the story is another reminder of an important lesson of process management: even though what we have been doing has worked in the past, we may not understand what it is that has been working.