Imagine …

No, not John Lennon’s dreary nursery rhyme for hippies.

In his memoir of the 2007-2008 banking crisis, The Courage to Act, Ben Bernanke wrote about his surprise when the crisis materialised.

We saw, albeit often imperfectly, most of the pieces of the puzzle. But we failed to understand – “failed to imagine” might be a better phrase – how those pieces would fit together to produce a financial crisis that compared to, and arguably surpassed, the financial crisis that ushered in the Great Depression.

That captures the three essentials of any attempt to foresee a complex future.

  • The pieces
  • The fit
  • Imagination

In any well managed organisation, “the pieces” consist of the established Key Performance Indicators (KPIs) and leading measures. Diligent and rigorous criticism of historical data using process behaviour charts allows departures from stability to be identified timeously. A robust and disciplined system of management and escalation enables an agile response when special causes arise.

Of course, “the fit” demands a broader view of the data, recognising interactions between factors and the possibility of non-simple global responses remote from a locally well behaved response surface. As the old adage goes, “Fit locally. Think globally.” This is where the Cardinal Newman principle kicks in.

“The pieces” and “the fit”, taken at their highest, yield a map of historical events with some limited prediction as to how key measures will behave in the future. Yet it is common experience that novel factors persistently invade. The “bow wave” of such events will not fit a recognised pattern where there will be a ready consensus as to meaning, mechanism and action. These are the situations where managers are surprised by rapidly emerging events, only to protest, “We never imagined …”.

Nassim Taleb’s analysis of the financial crisis hinged on such surprises and took him back to the work of British economist G L S Shackle. Shackle had emphasised the importance of imagination in economics. Put at its most basic, any attempt to assign probabilities to future events depends upon the starting point of listing the alternatives that might occur. Statisticians call it the sample space. If we don’t imagine some specific future we won’t bother thinking about the probability that it might come to be. Imagination is crucial to economics but it turns out to be much more pervasive as an engine of improvement than is at first obvious.

Imagination and creativity

Frank Whittle had to imagine the jet engine before he could bring it into being. Alan Turing had to imagine the computer. They were both fortunate in that they were able to test their imagination by construction. It was all realised in a comparatively short period of time. Whittle’s and Turing’s respective imaginations were empirically verified.

What is now proved was once but imagined.

William Blake

Not everyone has had the privilege of seeing their imagination condense into reality within their lifetime. In 1946, Sir George Paget Thomson and Moses Blackman imagined a plentiful source of inexpensive civilian power from nuclear fusion. As of writing, prospects of a successful demonstration seem remote. Frustratingly, as far as I can see, the evidence still refuses to tip the balance as to whether future success is likely or that failure is inevitable.

Something as elusive as imagination can nonetheless have a testable factual content. As we know, not all tests are conclusive.

Imagination and analysis

Imagination turns out to be essential to something as prosaic as Root Cause Analysis. And essential in a surprising way. Establishing an operative cause of a past event is an essential task in law and engineering. It entails the search for a counterfactual: not what happened, but what might have happened to avoid the regrettable outcome. That is inevitably an exercise in imagination.

In almost any interesting situation there will be multiple imagined pasts. If there is only one then it is time to worry. Sometimes it is straightforward to put our ideas to the test. This is where the Shewhart cycle comes into its own. In other cases we are in the realms of uncomfortable science. Sometimes empirical testing is frustrated because the trail has gone cold.

The issues of counterfactuals, Root Cause Analysis and causation have been explored by psychologists Daniel Kahneman [1] and Ruth Byrne [2], among others. Reading their research is a corrective to the optimistic view that Root Cause Analysis is some sort of inevitably objective process. It is distorted by all sorts of heuristics and biases. Empirical testing is vital, if only through finding some data with borrowing strength.

Imagine a millennium bug

In 1984, Jerome and Marilyn Murray published Computers in Crisis, in which they warned of a significant future risk to global infrastructure in telecommunications, energy, transport, finance, health and other domains. These were exactly the areas where engineers had been enthusiastic to exploit software from the earliest days, often against severe constraints of memory and storage. That had led to the frequent use of just two digits to represent a year, “71” for 1971, say. From the 1970s, software became more commonly embedded in devices of all types. As the year 2000 approached, the Murrays envisioned a scenario where the dawn of 1 January 2000 was heralded by multiple system failures as software registers reset to the year 1900, frustrating functions dependent on timing and forcing devices into a fault mode or a graceless degradation. Still worse, systems might simply malfunction abruptly and without warning, the only perceptible signal coming when human wellbeing was compromised. And the ruinous character of such a threat was that failure would be inherently simultaneous and global, with safeguarding systems possibly beset by the same defects as the primary devices. It was easy to imagine a calamity.
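The failure mode itself is easy to demonstrate. A minimal sketch (the function and field names are hypothetical, not drawn from any real system):

```python
# Sketch of the two-digit year failure mode (hypothetical fields).
def elapsed_years(start_yy: int, end_yy: int) -> int:
    """Naive age calculation on two-digit year fields."""
    return end_yy - start_yy

# A contract written in 1971, evaluated in 1999: fine.
assert elapsed_years(71, 99) == 28

# The same calculation one year later: "00" is read as 1900
# and the elapsed time goes negative.
assert elapsed_years(71, 0) == -71
```

Any downstream logic that assumes elapsed time is non-negative then fails, abruptly and everywhere at once.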

[Figure: Risk Assessment Matrix]

You might like to assess that risk yourself (ex ante) by locating it on the Risk Assessment Matrix above. It would be a brave analyst who would categorise it as “Low”, I think. Governments and corporations were impressed and embarked on a massive review of legacy software and embedded systems, estimated to have cost around $300 billion at year 2000 prices. A comprehensive upgrade programme was undertaken by nearly all substantial organisations, public and private.

Then, on 1 January 2000, there was no catastrophe. And that caused consternation. The promoters of the risk were accused of having caused massive expenditure and diversion of resources against a contingency of negligible impact. Computer professionals were accused, in terms, of self-serving scare mongering. There were a number of incidents which will not have been considered minor by the people involved. For example, in a British hospital, tests for Down’s syndrome were corrupted by the bug resulting in contra-indicated abortions and births. However, there was no global catastrophe.

This is the locus classicus of a counterfactual. Forecasters imagined a catastrophe. They persuaded others of their vision and of the necessity of vast expenditure to avoid it. The preventive measures were implemented at great cost. The catastrophe did not occur. Ex post, the forecasters were disbelieved. The danger, it was said, had never been real. Even Cassandra would have sympathised.

Critics argued that there had been a small number of relatively minor incidents that would have been addressed most economically on a “fix on failure” basis. Much of this turns out to be a debate about the much neglected column of the risk assessment headed “Detectability”. A failure that inflicts immediate pain is far more critical to manage and mitigate than one that presents an opportunity for detection and protection in advance of a broader loss. Here, forecasting Detectability was just as important as Probability and Consequences in arriving at an economic strategy for management.
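One common formalisation of that point is the Risk Priority Number used in Failure Mode and Effects Analysis (FMEA), which scores Detection alongside Severity and Occurrence. A minimal sketch, with the customary 1 to 10 scales and invented scores:

```python
def risk_priority_number(severity: int, occurrence: int, detection: int) -> int:
    """FMEA-style RPN: each factor scored 1 (best) to 10 (worst).
    Detection scores high when a failure is HARD to detect in advance."""
    for score in (severity, occurrence, detection):
        if not 1 <= score <= 10:
            raise ValueError("scores must be between 1 and 10")
    return severity * occurrence * detection

# Two failure modes with identical severity and occurrence:
caught_early = risk_priority_number(8, 3, 2)   # presents itself for detection
silent_global = risk_priority_number(8, 3, 9)  # no warning before broad loss
assert silent_global > caught_early
```

The multiplication is crude but it makes the neglected column do real work: two otherwise identical risks separate sharply once detectability is forecast.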

It is the fundamental paradox of risk assessment that, where control measures eliminate a risk, it is not obvious whether the benign outcome was caused by the control or whether the risk assessment was just plain wrong and the risk never existed. Another counterfactual. Again, finding some alternative data with borrowing strength can help though it will ever be difficult to build a narrative appealing to a wide population. There are links to some sources of data on the Wikipedia article about the bug. I will leave it to the reader.

Imagine …

Of course it is possible to find this all too difficult and to adopt the Biblical outlook.

I returned, and saw under the sun, that the race is not to the swift, nor the battle to the strong, neither yet bread to the wise, nor yet riches to men of understanding, nor yet favour to men of skill; but time and chance happeneth to them all.

Ecclesiastes 9:11
King James Bible

That is to adopt the outlook of the lady on the level crossing. Risk professionals look for evidence that their approach works.

The other day, I was reading the annual report of the UK Health and Safety Executive (pdf). It shows a steady improvement in the safety of people at work though oddly the report is too coy to say this in terms. The improvement occurs over the period where risk assessment has become ubiquitous in industry. In an individual work activity it will always be difficult to understand whether interventions are being effective. But using the borrowing strength of the overall statistics there is potent evidence that risk assessment works.

References

  1. Kahneman, D & Tversky, A (1982) “The simulation heuristic”, in Kahneman, D, Slovic, P & Tversky, A (eds) Judgment under Uncertainty: Heuristics and Biases, Cambridge University Press, p201
  2. Byrne, R M J (2005) The Rational Imagination: How People Create Alternatives to Reality, MIT Press

The Iron Law at Volkswagen

So Michael Horn, VW’s US CEO, has made a “sincere apology” for what went on at VW.

And like so many “sincere apologies” he blamed somebody else. “My understanding is that it was a couple of software engineers who put these in.”

As an old automotive hand I have always been very proud of the industry. I have held it up as a model of efficiency, aesthetic aspiration, ambition, enlightenment and probity. My wife will tell you how many times I have responded to tales of workplace chaos with “It couldn’t happen in a car plant”. Fortunately we don’t own a VW but I still feel betrayed by this. Here’s why.

A known risk

From the infancy of emissions testing, which came along at about the same time as the adoption of engine management systems, everybody knew the risks of a “cheat device”. It was obvious to all that engineers might be tempted to manoeuvre a recalcitrant engine through a challenging emissions test by writing software so as to detect test conditions and thereon modify performance.

In the better sort of motor company, engineers were left in no doubt that this was forbidden and the issue was heavily policed with code reviews and process surveillance.

This was not something that nobody saw coming, not a blind spot of risk identification.

The Iron Law

I wrote before about the Iron Law of Oligarchy. Decision taking executives in an organisation try not to pass information upwards. That will only result in interference and enquiry. Supervisory boards are well aware of this phenomenon because, during their own rise to the board, they themselves were the senior managers who constituted the oligarchy and who kept all the information to themselves. As I guessed last time I wrote, decisions like this don’t get taken at board level. They are taken out of the line of sight of the board.

Governance

So here we have a known risk. A threat that would likely not be detected in the usual run of line management. And it was of such a magnitude as would inflict hideous ruin on Volkswagen’s value, accrued over decades of hard built customer reputation. Volkswagen, an eminent manufacturer with huge resources, material, human and intellectual. What was the governance function to do?

Borrowing strength again

It would have been simple, actually simple, to secret shop the occasional vehicle and run it through an on-road emissions test. Any surprising discrepancy between the results and the regulatory tests would then have been a signal that the company was at risk and triggered further investigation. An important check on any data integrity is to compare it with cognate data collected by an independent route, data that shares borrowing strength.
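As a sketch of that check (the figures and the threshold are invented for illustration), the governance function need only compare the two measurement routes and flag a surprising discrepancy:

```python
def divergence_ratio(on_road: float, regulatory: float) -> float:
    """Ratio of independently measured on-road emissions to the
    regulatory test result for the same vehicle."""
    return on_road / regulatory

# Invented NOx results (g/km) for three secret-shopped vehicles:
# (on-road measurement, regulatory test measurement).
samples = [(0.62, 0.06), (0.55, 0.05), (0.70, 0.06)]
flagged = [pair for pair in samples if divergence_ratio(*pair) > 3.0]

# Every vehicle emits an order of magnitude more on the road than
# on the rolling road: the two data routes disagree, so investigate.
assert len(flagged) == len(samples)
```

The point is not the arithmetic but the independence of the second data route: a discrepancy is a signal precisely because the on-road measurement shares no machinery with the regulatory test.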

Volkswagen’s governance function simply didn’t do the simple thing. Never have so many ISO 31000 manuals been printed in vain. Theirs were the pot odds of a jaywalker.

Knowledge

In the English breach of trust case of Baden, Delvaux and Lecuit v Société Générale [1983] BCLC 325, Mr Justice Peter Gibson identified five levels of knowledge that might implicate somebody in wrongdoing.

  • Actual knowledge.
  • Wilfully shutting one’s eyes to the obvious (Nelsonian knowledge).
  • Wilfully and recklessly failing to make such enquiries as an honest and reasonable man would make.
  • Knowledge of circumstances that would indicate the facts to an honest and reasonable man.
  • Knowledge of circumstances that would put an honest and reasonable man on enquiry.

I wonder where VW would place themselves in that.

How do you sound when you feel sorry?

… is the somewhat barbed rejoinder to an ungracious apology. Let me explain how to be sorry. There are three “R”s.

  • Remorse: Different from the “regret” that you got caught. A genuine internal emotional reaction. The public are good at spotting when emotions are genuine but it is best evidenced by the following two “R”s.
  • Reparation: Trying to undo the damage. VW will not have much choice about this as far as the motorists are concerned but the shareholders may be a different matter. I don’t think Horn’s director’s insurance will go very far.
  • Reform: This is the barycentre of repentance. Can VW now change the way it operates to adopt genuine governance and systematic risk management?

Mr Horn tells us that he has little control over what happens in his company. That is probably true. I trust that he will remember that at his next remuneration review. If there is one.

When they said, “Repent!”, I wonder what they meant.

Leonard Cohen
The Future

First thoughts on VW’s emissions debacle

It is far too soon to tell exactly what went on at VW, in the wider motor industry, within the respective regulators and within governments. However, the way that the news has come out, and the financial and operational impact that it is likely to have, are enough to encourage all enterprises to revisit their risk management, governance and customer reputation management policies. Corporate scandals are not a new phenomenon, from the collapse of the Medici Bank in 1494, Warren Hastings’ alleged despotism in the British East India Company, down to the FIFA corruption allegations that broke earlier this year. Organisational scandals are as old as organisations. The bigger the organisations get, the bigger the scandals are going to be.

Normal Scandals

In 1984, Charles Perrow published his pessimistic analysis of what he saw as the inevitability of Normal Accidents in complex technologies. I am sure that there is a market for a book entitled Normal Scandals: Living with High-Risk Organisational Structures. But I don’t share Perrow’s pessimism. Life is getting safer. Let’s adopt the spirit of continual improvement to make investment safer too. That’s investment for those of us trying to accumulate a modest portfolio for retirement. Those who aspire to join the super rich will still have to take their chances.

I fully understand that organisations sometimes have to take existential risks to stay in business. The development of Rolls-Royce’s RB211 aero-engine well illustrates what happens when a manufacturer finds itself with proven technologies that are inadequately aligned with the Voice of the Customer. The market will not wait while the business catches up. There is time to develop a response but only if that solution works first time. In the case of Rolls-Royce it didn’t and insolvency followed. However, there was no alternative but to try.

What happened at VW? I just wonder whether the Iron Law of Oligarchy was at work. To imagine that a supervisory board sits around discussing the details of engine management software is naïve. In fact it was the RB211 crisis that condemned such signal failures of management to delegate. Do VW’s woes flow from a decision taken by a middle manager, or a blind eye turned, that escaped an inadequate system of governance? Perhaps a short term patch in anticipation of an ultimate solution?

Cardinal Newman’s contribution to governance theory

John Henry Newman learned about risk management the hard way. Newman was an English Anglican divine who converted to the Catholic Church in 1845. In 1850 Newman became involved in the controversy surrounding Giacinto Achilli, a priest expelled from the Catholic Church for rape and sexual assault but who was making a name for himself in England as a champion of the protestant evangelical cause. Conflict between Catholic and protestant was a significant feature of the nineteenth century English political landscape. Newman was minded to ensure that Achilli’s background was widely known. He took legal advice from counsel James Hope-Scott about the risks of a libel action from Achilli. Hope-Scott was reassuring and Newman published. The publication resulted in Newman’s prosecution and conviction for criminal libel.

Speculation about what legal advice VW have received as to their emissions strategy would be inappropriate. However, I trust that, if they imagined they were externalising any risk thereby, they checked the value of their legal advisors’ professional indemnity insurance.

Newman certainly seems to have learned his lesson and subsequently had much to teach the modern world about risk management and governance. After the Achilli trial Newman started work on his philosophical apologia, The Grammar of Assent. One argument in that book has had such an impact on modern thinking about evidence and probability that it was quoted in full by Bruno de Finetti in Volume 1 of his 1974 Theory of Probability.

Suppose a thesis (e.g. the guilt of an accused man) is supported by a great deal of circumstantial evidence of different forms, but in agreement with each other; then even if each piece of evidence is in itself insufficient to produce any strong belief, the thesis is decisively strengthened by their joint effect.

De Finetti set out the detailed mathematics and called this the Cardinal Newman principle. It is fundamental to the modern concept of borrowing strength.
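In likelihood terms the principle is straightforward: independent items of evidence combine by multiplying their likelihood ratios, so several individually weak items can be jointly decisive. A minimal sketch with invented numbers:

```python
def posterior_odds(prior_odds: float, likelihood_ratios: list) -> float:
    """Bayes' rule in odds form: independent evidence multiplies."""
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds

# Five items of circumstantial evidence, each only weakly probative
# (likelihood ratio 3), against even prior odds.
odds = posterior_odds(1.0, [3.0] * 5)
probability = odds / (1 + odds)

assert odds == 243.0       # 3 ** 5
assert probability > 0.99  # individually weak, jointly decisive
```

The sketch assumes independence between items, which is exactly the condition Newman's "in agreement with each other" glosses and de Finetti's detailed treatment makes precise.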

The standard means of defeating governance are all well known to oligarchs: regulator capture; “stake-driving” – taking actions outside the oversight of governance that will not be undone without engaging the regulator in controversy; “whipsawing” – promising A that approval will be forthcoming from B while telling B that A has relied upon her anticipated, and surely “uncontroversial”, approval. There are plenty of others. Robert Caro’s biography The Power Broker: Robert Moses and the Fall of New York sets out the locus classicus.

Governance functions need to exploit the borrowing strength of diverse data sources to identify misreporting and misconduct. And continually improve how they do that. The answer is trenchant and candid criticism of historical data. That’s the only data you have. A rigorous system of goal deployment and mature use of process behaviour charts delivers a potent stimulus to reluctant data sharers.

Things and actions are what they are and the consequences of them will be what they will be: why then should we desire to be deceived?

Bishop Joseph Butler

 

#executivetimeseries

Don Wheeler coined the term executive time series. I was just leaving court in Oxford the other day when I saw an announcement on a hoarding. I immediately thought to myself “#executivetimeseries”.

Wheeler introduced the phrase in his 2000 book Understanding Variation: The Key to Managing Chaos. He meant to criticise the habitual way that statistics are presented in business and government. A comparison is made between performance at two instants in time. Grave significance is attached as to whether performance is better or worse at the second instant. Well, it was always unlikely that it would be the same.

The executive time series has the following characteristics.

  • It is applied to some statistic, metric, Key Performance Indicator (KPI) or other measure that will be perceived as important by its audience.
  • Two time instants are chosen.
  • The statistic is quoted at each of the two instants.
  • If the second is greater than the first then an increase is inferred. A decrease is inferred from the converse.
  • Great significance is attached to the increase or decrease.

Why is this bad?

At its best it provides incomplete information devoid of context. At its worst it is subject to gross manipulation. The following problems arise.

  • Though a signal is usually suggested, there is inadequate information to infer one.
  • There is seldom explanation of how the time points were chosen. It is open to manipulation.
  • Data is presented absent its context.
  • There is no basis for predicting the future.

The Oxford billboard is even worse than the usual example because it doesn’t even attempt to tell us over what period the carbon reduction is being claimed.

Signal and noise

Let’s first think about noise. As Daniel Kahneman put it “A random event does not … lend itself to explanation, but collections of random events do behave in a highly regular fashion.” Noise is a collection of random events. Some people also call it common cause variation.

Imagine a bucket of thousands of beads. Of the beads, 80% are white and 20%, red. You are given a paddle that will hold 50 beads. Use the paddle to stir the beads then draw out 50 with the paddle. Count the red beads. Repeat this, let us say once a week, until you have 20 counts. The data might look something like this.

[Figure 1: Run chart of weekly red bead counts over 20 weeks]

What we observe in Figure 1 is the irregular variation in the number of red beads. However, it is not totally unpredictable. In fact, it may be one of the most predictable things you have ever seen. Though we cannot forecast exactly how many red beads we will see in the coming week, it will most likely be in the rough range of 4 to 14 with rather more counts around 10 than at the extremities. The odd one below 4 or above 14 would not surprise you I think.

But nothing changed in the characteristics of the underlying process. It didn’t get better or worse. The percentage of reds in the bucket was constant. It is a stable system of trouble. And yet measured variation extended between 4 and 14 red beads. That is why an executive time series is so dangerous. It alleges change while the underlying cause-system is constant.

Figure 2 shows how an executive time series could be constructed in week 3.

[Figure 2: The same data presented as an executive time series showing a rise from 4 to 10 between weeks 2 and 3]

The number of beads has increased from 4 to 10, a 150% increase. Surely a “significant result”. And it will always be possible to find some managerial initiative between weeks 2 and 3 that can be invoked as the cause. “Between weeks 2 and 3 we changed the angle of inserting the paddle and it has increased the number of red beads by 150%.”

But Figure 2 is not the only executive time series that the data will support. In Figure 3 the manager can claim a 57% reduction from 14 to 6. More than the Oxford banner. Again, it will always be possible to find some factor or incident supposed to have caused the reduction. But nothing really changed.

[Figure 3: The same data presented as an executive time series showing a fall from 14 to 6]

The executive can be even more ambitious. “Between weeks 2 and 17 we achieved a 250% increase in red beads.” Now that cannot be dismissed as a mere statistical blip.

[Figure 4: The same data presented as an executive time series showing a rise from 4 to 14 between weeks 2 and 17]

#executivetimeseries

Data has no meaning apart from its context.

Walter Shewhart

Not everyone who cites an executive time series is seeking to deceive. But many are. So anybody who relies on an executive time series, devoid of context, invites suspicion that they are manipulating the message. This is Langian statistics par excellence. The fallacy of What you see is all there is. It is essential to treat all such claims with the utmost caution. What properly communicates the present reality of some measure is a plot against time that exposes its variation, its stability (or otherwise) and sets it in the time context of surrounding events.

We should call out the perpetrators. #executivetimeseries

Techie note

The data here is generated from a sequence of 20 Bernoulli experiments with probability of “red” equal to 0.2 and 50 independent trials in each experiment.
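That recipe is easy to reproduce (the seed is arbitrary, so these counts will not match the figures above):

```python
import random

random.seed(42)  # arbitrary seed, purely for reproducibility

def weekly_red_count(n_beads: int = 50, p_red: float = 0.2) -> int:
    """One week's paddle draw: 50 independent Bernoulli trials."""
    return sum(random.random() < p_red for _ in range(n_beads))

counts = [weekly_red_count() for _ in range(20)]
print(counts)

# The counts wander from week to week, yet the process mean sits
# near n * p = 10 and the underlying system never changed at all.
assert all(0 <= c <= 50 for c in counts)
```

Run it a few times with different seeds and you can manufacture as many “significant” executive time series as you please from a process that is doing nothing but being itself.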

FIFA and the Iron Law of Oligarchy

In 1911, Robert Michels embarked on one of the earliest investigations into organisational culture. Michels was a pioneering sociologist, a student of Max Weber. In his book Political Parties he aggregated evidence about a range of trade unions and political groups, in particular the German Social Democratic Party.

He concluded that, as organisations become larger and more complex, a bureaucracy inevitably forms to take, co-ordinate and optimise decisions. It is the most straightforward way of creating alignment in decision making and unified direction of purpose and policy. Decision taking power ends up in the hands of a few bureaucrats and they increasingly use such power to further their own interests, isolating themselves from the rest of the organisation to protect their privilege. Michels called this the Iron Law of Oligarchy.

These are very difficult matters to capture quantitatively and Michels’ limited evidential sampling frame has more of the feel of anecdote than data. “Iron Law” surely takes the matter too far. However, when we look at the allegations concerning misconduct within FIFA it is tempting to feel that Michels’ theory is validated, or at least has gathered another anecdote to take the evidence base closer to data.

But beyond that, what Michels surely identifies is a danger that a bureaucracy, a management cadre, can successfully isolate itself from superior and inferior strata in an organisation, limiting the mobility of business data and fostering their own ease. The legitimate objectives of the organisation suffer.

Michels failed to identify a realistic solution, being seduced by the easy, but misguided, certainties of fascism. However, I think that a rigorous approach to the use of data can guard against some abuses without compromising human rights.

Oligarchs love traffic lights

I remember hearing the story of a CEO newly installed in a mature organisation. His direct reports had instituted a “traffic light” system to report status to the weekly management meeting. A green light meant all was well. An amber light meant that some intervention was needed. A red light signalled that threats to the company’s goals had emerged. At his first meeting, the CEO found that nearly all “lights” were green, with a few amber. The new CEO perceived an opportunity to assert his authority and show his analytical skills. He insisted that this could not be so. There must be more problems, and he demanded that the next meeting be an opportunity for honesty and confronting reality.

At the next meeting there was a kaleidoscope of red, amber and green “lights”. Of course, it turned out that the managers had flagged as red the things that were either actually fine or could be remedied quickly. They could then report green at the following meeting. Real career limiting problems were hidden behind green lights. The direct reports certainly didn’t want those exposed.

Openness and accountability

I’ve quoted Nobel laureate economist Kenneth Arrow before.

… a manager is an information channel of decidedly limited capacity.

Essays in the Theory of Risk-Bearing

Perhaps the fundamental problem of organisational design is how to enable communication of information so that:

  • Individual managers are not overloaded.
  • Confidence in the reliable satisfaction of process and organisational goals is shared.
  • Systemic shortfalls in process capability are transparent to the managers responsible, and their managers.
  • Leading indicators yield early warnings of threats to the system.
  • Agile responses to market opportunities are catalysed.
  • Governance functions can exploit the borrowing strength of diverse data sources to identify misreporting and misconduct.

All that requires using analytics to distinguish between signal and noise. Traffic lights offer a lousy system of intra-organisational analytics. Traffic light systems leave it up to the individual manager to decide what is “signal” and what “noise”. Nobel laureate psychologist Daniel Kahneman has studied how easily managers are confused and misled in subjective attempts to separate signal and noise. It is dangerous to think that What you see is all there is. Traffic lights offer a motley cloak to an oligarch wishing to shield his sphere of responsibility from scrutiny.

The answer is trenchant and candid criticism of historical data. That’s the only data you have. A rigorous system of goal deployment and mature use of process behaviour charts delivers a potent stimulus to reluctant data sharers. Process behaviour charts capture the development of process performance over time, for better or for worse. They challenge the current reality of performance through the Voice of the Customer. They capture a shared heuristic for characterising variation as signal or noise.
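A minimal sketch of that shared heuristic, an individuals (XmR) process behaviour chart using Wheeler’s scaling constant 2.66 to convert the mean moving range into natural process limits (the KPI readings are invented):

```python
def xmr_limits(data: list) -> tuple:
    """Natural process limits for an individuals (XmR) chart."""
    centre = sum(data) / len(data)
    moving_ranges = [abs(b - a) for a, b in zip(data, data[1:])]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    return centre - 2.66 * mr_bar, centre, centre + 2.66 * mr_bar

# Invented monthly KPI readings.
kpi = [52, 48, 50, 53, 47, 49, 51, 50, 46, 54]
lower, centre, upper = xmr_limits(kpi)

# No point falls beyond the natural process limits: all the
# variation is noise and no narrative of change is warranted.
signals = [x for x in kpi if x < lower or x > upper]
assert signals == []
```

Unlike a traffic light, the chart leaves no room for an individual manager to decide privately what counts as a signal: the criterion is shared, explicit and applied to the whole history.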

Individual managers may well prefer to interpret the chart with various competing narratives. The message of the data, the Voice of the Process, will not always be unambiguous. But collaborative sharing of data compels an organisation to address its structural and people issues. Shared data generation and investigation encourage an organisation to find practical ways of fostering team work, enabling problem solving and motivating participation. It is the data that can support the organic emergence of a shared organisational narrative that adds further value to the data and how it is used and developed. None of these organisational and people matters have generalised solutions but a proper focus on data drives an organisation to find practical strategies that work within their own context. And to test the effectiveness of those strategies.

Every week the press discloses allegations of hidden or fabricated assets, repudiated valuations, fraud, misfeasance, regulators blindsided, creative reporting, anti-competitive behaviour, abused human rights and freedoms.

Where a proper system of intra-organisational analytics is absent, you constantly have to ask yourself whether you have another FIFA on your hands. The FIFA allegations may be true or false but that they can be made surely betrays an absence of effective governance.

#oligarchslovetrafficlights

Target and the Targeteers

This blog appeared on the Royal Statistical Society website Statslife on 29 May 2014

John Pullinger, newly appointed head of the UK Statistics Authority, has given a trenchant warning about the “unsophisticated” use of targets. As reported in The Times (London) (“Targets could be skewing the truth, statistics chief warns”, 26 May 2014 – paywall) he cautions:

Anywhere we have had targets, there is a danger that they become an end in themselves and people lose sight of what they’re trying to achieve. We have numbers everywhere but haven’t been well enough schooled on how to use them and that’s where problems occur.

He goes on.

The whole point of all these things is to change behaviour. The trick is to have a sophisticated understanding of what will happen when you put these things out.

Pullinger makes it clear that he is no opponent of targets, but that in the hands of the unskilled they can create perverse incentives, encouraging behaviour that distorts the system they sought to control and frustrating the very improvement they were implemented to achieve.

For example, two train companies are being assessed by the regulator for punctuality. A train is defined as “on-time” if it arrives within 5 minutes of schedule. The target is 95% punctuality.
[Table: TrainTargets – arrival data for the two train companies]
Evidently, simple management by target fails to reveal that Company 1 is doing better than Company 2 in offering a punctual service to its passengers. A simple statement of “95% punctuality (punctuality defined as arriving within 5 minutes of timetable)” discards much of the information in the data.
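The point can be made concrete with a small calculation. The delay figures below are invented for illustration (they are not the data from the table above): both companies hit the 95% target exactly, yet the single headline figure conceals a marked difference in how late the late trains actually are.

```python
# A minimal sketch of why a single punctuality figure discards information.
# The delay data are hypothetical, chosen only to illustrate the argument.

def punctuality(delays_min, tolerance=5):
    """Percentage of trains arriving within `tolerance` minutes of schedule."""
    on_time = sum(1 for d in delays_min if d <= tolerance)
    return 100.0 * on_time / len(delays_min)

# Twenty arrivals each; one train per company misses the 5-minute tolerance.
company1 = [0, 1, 1, 2, 2, 2, 3, 3, 3, 3, 4, 4, 4, 4, 4, 5, 5, 5, 5, 6]
company2 = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 2, 2, 3, 4, 5, 5, 45]

for name, delays in [("Company 1", company1), ("Company 2", company2)]:
    print(name,
          f"punctuality: {punctuality(delays):.0f}%",
          f"worst delay: {max(delays)} min",
          f"mean delay: {sum(delays) / len(delays):.1f} min")
```

On the target alone the two companies are indistinguishable at 95%; only the fuller data reveal that one company's single late train was 6 minutes behind while the other's was 45.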

Further, when presented with a train that has slipped outside the 5 minute tolerance, a manager held solely to the target of 95% has no incentive to stop the late train from slipping even further behind. Certainly, if it puts further trains at risk of lateness, there will always be a temptation to strip it of all priority. Here, the target is not only a barrier to effective measurement and improvement, it is a threat to the proper operation of the railway. That is the point that Pullinger was seeking to make about the behaviour induced by the target.

And again, targets often provide only a “snapshot” rather than the “video” that discloses the information in the data that can be used for planning and managing an enterprise.

I am glad that Pullinger was not hesitant to remind users that proper deployment of system measurement requires an appreciation of psychology. Nobel Laureate psychologist Daniel Kahneman warns of the inherent human trait of thinking that What you see is all there is (WYSIATI). On their own, targets do little to guard against such bounded rationality.

In support of a corporate programme of improvement, and integrated in a culture of rigorous data criticism, targets have manifest benefits. They communicate improvement priorities. They build confidence between interfacing processes. They provide constraints and parameters that prevent the system from causing harm, whether to others or to itself. What is important is that the targets do not become a shield for weak managers who wish to hide their lack of understanding of their own processes behind the defence that “all targets were met”.

However, all that requires some sophistication in approach. I think the following points provide a basis for auditing how an organisation is using targets.

Risk assessment

Targets should be risk assessed, anticipating realistic psychology and envisaging the range of behaviours the targets are likely to catalyse.

Customer focus

Anyone tasked with operating to a target should be periodically challenged with a review of the Voice of the Customer and how their own role contributes to the organisational system. The target is only an aid to the continual improvement of the alignment between the Voice of the Process and the Voice of the Customer. That is the only game in town.

Borrowed validation

Any organisation of any size will usually have independent data of sufficient borrowing strength to support mutual validation. There was a very good recent example of this in the UK where falling crime statistics, about which the public were rightly cynical and incredulous, were effectively validated by data collection from hospital emergency departments (Violent crime in England and Wales falls again, A&E data shows).

Over-adjustment

Mechanisms must be in place to deter over-adjustment, what W Edwards Deming called “tampering”, where naïve pursuit of a target adds variation and degrades performance.
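A toy simulation makes the cost of tampering concrete. The sketch below is an assumption-laden illustration, not anything from the original post: it takes a stable process already on target with unit standard deviation, and applies the naïve rule “after each result, adjust the setting to compensate for the last deviation” (the second rule of Deming's funnel experiment). The adjustment roughly doubles the variance.

```python
# Illustrative simulation of Deming's "tampering": adjusting a stable
# process in response to common-cause variation degrades performance.
import random

random.seed(0)
target, sigma, n = 100.0, 1.0, 10_000

# Leave the stable process alone.
no_adjust = [random.gauss(target, sigma) for _ in range(n)]

# Tamper: after each result, shift the setting by the negative of the
# last deviation from target (rule 2 of the funnel experiment).
setting = target
tampered = []
for _ in range(n):
    x = random.gauss(setting, sigma)
    tampered.append(x)
    setting -= (x - target)  # "compensate" for the last deviation

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

print(variance(no_adjust))  # close to 1.0
print(variance(tampered))   # close to 2.0 -- tampering doubles the variance
```

Each tampered result is the sum of the current noise term and the negated previous one, so its variance is the sum of the two, twice that of the untouched process. The naïve pursuit of the target has made performance measurably worse.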

Discipline

Employees must be left in no doubt that lack of care in maintaining the integrity of the organisational system and pursuing customer excellence will not be excused by mere adherence to a target, no matter how heroic.

Targets are for the guidance of the wise. To regard them as anything else is to ask them to do too much.