Building targets, constructing behaviour

Recently, the press reported that UK construction company Bovis Homes Group PLC had run into trouble for encouraging new homeowners to move into unfinished homes, and had therefore faced a barrage of complaints about construction defects. It turns out that these practices were motivated by a desire to hit ambitious growth targets. Yet the episode has had a substantial impact on the company’s trading position and has led to markdowns for Bovis shares.1

I have blogged about targets before. It is worth repeating what I said there about the thoughts of John Pullinger, head of the UK Statistics Authority. He gave a trenchant warning about the “unsophisticated” use of targets. He cautioned:2

Anywhere we have had targets, there is a danger that they become an end in themselves and people lose sight of what they’re trying to achieve. We have numbers everywhere but haven’t been well enough schooled on how to use them and that’s where problems occur.

He went on.

The whole point of all these things is to change behaviour. The trick is to have a sophisticated understanding of what will happen when you put these things out.

That message was clearly one that Bovis didn’t get. They legitimately adopted an ambitious growth target but they forgot a couple of things. They forgot that targets, if not properly risk assessed, can create perverse incentives to distort the system. They forgot to think about how manager behaviour might be influenced. Leaders need to be able to harness insights from behavioural economics. Further, a mature system of goal deployment imposes a range of metrics across a business, each of which has to contribute to the global organisational plan. It is no use only measuring sales if measures of customer satisfaction and input measures about quality are neglected or even deliberately subverted. An organisation needs a rich dashboard and needs to know how to use it.

Critically, it is a matter of discipline. Employees must be left in no doubt that lack of care in maintaining the integrity of the organisational system and pursuing customer excellence will not be excused by mere adherence to a target, no matter how heroic. Bovis was clearly a culture where attention to customer requirements was not thought important by the staff. That is inevitably a failure of leadership.

Compare and contrast

Bovis make an interesting contrast with supermarket chain Sainsbury’s, which featured in a law report in the same issue of The Times.3 Bovis and Sainsbury’s clearly have very different approaches to how they communicate to their managers what is important.

Sainsbury’s operated a rigorous system of surveying staff engagement which aimed to embrace all employees. It was “deeply engrained in Sainsbury’s culture and was a critical part of Sainsbury’s strategy”. An HR manager sent an email to five store managers suggesting that the rigour could be relaxed. Not all employees needed to be engaged, he said, and participation could be restricted to the most enthusiastic. That would have been a clear distortion of the process.

Mr Colin Adesokan was a senior manager who subsequently learned of the email. He asked the HR manager to explain what he had meant but received no response and the email was recirculated. Adesokan did nothing. When his inaction came to the attention of the chief executive, Adesokan was dismissed summarily for gross misconduct.

He sued his employer and the matter ended up in the Court of Appeal, Adesokan arguing that such mere inaction over a colleague’s behaviour was incapable of constituting gross misconduct. The Court of Appeal did not agree. They found that, given the significance placed by Sainsbury’s on the engagement process, the trial judge had been entitled to find that Adesokan had been seriously in dereliction of his duty. That failing constituted gross misconduct because it had the effect of undermining the trust and confidence in the employment relationship. Adesokan seemed to have been indifferent to what, in Sainsbury’s eyes, was a very serious breach of an important procedure. Sainsbury’s had been entitled to dismiss him summarily for gross misconduct.

That is process discipline. That is how to manage it.

Display constancy of purpose in communicating what is important. Do not turn a blind eye to breaches. Do not tolerate those who would turn the blind eye. When you combine that with mature goal deployment and sophistication as to how to interpret variation in metrics then you are beginning to master, at least some parts of, how to run a business.

References

  1. “Share price plunges as Bovis tries to rebuild customers’ trust” (paywall), The Times (London), 20 February 2017
  2. “Targets could be skewing the truth, statistics chief warns” (paywall), The Times (London), 26 May 2014
  3. Adesokan v Sainsbury’s Supermarkets Ltd [2017] EWCA Civ 22, The Times, 21 February 2017 (paywall)

Why would a lawyer blog about statistics?

…is a question I often get asked. I blog here about statistics, data, quality, data quality, productivity, management and leadership. And evidence. I do it from my perspective as a practising lawyer and some people find that odd. Yet it turns out that the collaboration between law and quantitative management science is a venerable one.

The grandfather of scientific management is surely Frederick Winslow Taylor (1856-1915). Taylor introduced the idea of scientific study of work tasks, using data and quantitative methods to redesign and control business processes.

Yet one of Taylorism’s most effective champions was a lawyer, Louis Brandeis (1856-1941). In fact, it was Brandeis who coined the term scientific management.

Taylor

Taylor was a production engineer who advocated a four stage strategy for productivity improvement.

  1. Replace rule-of-thumb work methods with methods based on a scientific study of the tasks.
  2. Scientifically select, train, and develop each employee rather than passively leaving them to train themselves.
  3. Provide “Detailed instruction and supervision of each worker in the performance of that worker’s discrete task”.1
  4. Divide work nearly equally between managers and workers, so that the managers apply scientific management principles to planning the work and the workers actually perform the tasks.

Points (3) and (4) tend to jar with millennial attitudes towards engagement and collaborative work. Conservative political scientist Francis Fukuyama criticised Taylor’s approach as “[epitomising] the carrying of the low-trust, rule based factory system to its logical conclusion”.2 I have blogged many times on here about the importance of trust.

However, (1) and (2) provided the catalyst for pretty much all subsequent management science from W Edwards Deming, Elton Mayo, and Taiichi Ohno through to Six Sigma and Lean. Subsequent thinking has centred around creating trust in the workplace as inseparable from (1) and (2). Peter Drucker called Taylor the “Isaac Newton (or perhaps the Archimedes) of the science of work”.

Taylor claimed substantial successes with his redesign of work processes based on the evidence he had gathered, avant la lettre, in the gemba. His most cogent lesson was to exhort managers to direct their attention to where value was created rather than to confine their horizons to monthly accounts and executive summaries.

Of course, Taylor was long dead before modern business analytics began with Walter Shewhart in 1924. There is more than a whiff of the #executivetimeseries about some of Taylor’s work. Once management had Measurement System Analysis and the Shewhart chart there would no longer be any hiding place for groundless claims to non-existent improvements.

Brandeis

Brandeis practised as a lawyer in the US from 1878 until he was appointed a Justice of the Supreme Court in 1916. Brandeis’ principles as a commercial lawyer were, “first, that he would never have to deal with intermediaries, but only with the person in charge…[and] second, that he must be permitted to offer advice on any and all aspects of the firm’s affairs”. Brandeis was trenchant about the benefits of a coherent commitment to business quality. He also believed that these things were achieved, not by chance, but by the application of policy deployment.

Errors are prevented instead of being corrected. The terrible waste of delays and accidents is avoided. Calculation is substituted for guess; demonstration for opinion.

Brandeis clearly had a healthy distaste for muda.3 Moreover, he was making a land grab for the disputed high ground that these days often earns the vague and fluffy label strategy.

The Eastern Rate Case

The worlds of Taylor and Brandeis embraced in the Eastern Rate Case of 1910. The Eastern Railroad Company had applied to the Interstate Commerce Commission (“the ICC”) arguing that their cost base had inflated and that an increase in their carriage rates was necessary to sustain the business. The ICC was the then regulator of those utilities that had a monopoly element. Brandeis by this time had taken on the role of the People’s Lawyer, acting pro bono in whatever he deemed to be the public interest.

Brandeis opposed the rate increase, arguing that the escalation in Eastern’s cost base was the result of management failure, not an inevitable consequence of market conditions. The cost of a monopoly’s ineffective governance should, he submitted, not be borne by the public, nor yet by the workers. In court Brandeis was asked what Eastern should do and he advocated scientific management. That is where and when the term was coined.4

Taylor-Brandeis

The insight that profit cannot simply be wished into being by the fiat of cost plus, a fortiori of the hourly rate, is the Milvian bridge to lean.

But everyone wants to occupy the commanding heights of an integrated policy nurturing quality, product development, regulatory compliance, organisational development and the economic exploitation of customer value. What’s so special about lawyers in the mix? I think we ought to remind ourselves that if lawyers know about anything then we know about evidence. And we just might know as much about it as the statisticians, the engineers and the enforcers. Here’s a tale that illustrates our value.

Thereza Imanishi-Kari was a postdoctoral researcher in molecular biology at the Massachusetts Institute of Technology. In 1986 a co-worker raised inconsistencies in Imanishi-Kari’s earlier published work that escalated into allegations that she had fabricated results to validate publicly funded research. Over the following decade, the allegations grew in seriousness, involving the US Congress, the Office of Scientific Integrity and the FBI. Imanishi-Kari was ultimately exonerated by a departmental appeal board constituted of an eminent molecular biologist and two lawyers. The board heard cross-examination of the relevant experts including those in statistics and document examination. It was that cross-examination that exposed the allegations as without foundation.5

Lawyers can make a real contribution to discovering how a business can be run successfully. But we have to live the change we want to be. The first objective is to bring management science to our own business.

The black-letter man may be the man of the present but the man of the future is the man of statistics and the master of economics.

Oliver Wendell Holmes, 1897

References

  1. Montgomery, D (1989) The Fall of the House of Labor: The Workplace, the State, and American Labor Activism, 1865-1925, Cambridge University Press, p250
  2. Fukuyama, F (1995) Trust: The Social Virtues and the Creation of Prosperity, Free Press, p226
  3. Kraines, O (1960) “Brandeis’ philosophy of scientific management” The Western Political Quarterly 13(1), 201
  4. Freedman, L (2013) Strategy: A History, Oxford University Press, pp464-465
  5. Kevles, D J (1998) The Baltimore Case: A Trial of Politics, Science and Character, Norton

Regression done right: Part 3: Forecasts to believe in

There are three Sources of Uncertainty in a forecast.

  1. Whether the forecast is of “an environment that is sufficiently regular to be predictable”.1
  2. Uncertainty arising from the unexplained (residual) system variation.
  3. Technical statistical sampling error in the regression calculation.

Source of Uncertainty (3) is the one that fascinates statistical theorists. Sources (1) and (2) are the ones that obsess the rest of us. I looked at the first in Part 1 of this blog and the second in Part 2. Now I want to look at the third Source of Uncertainty and try to put everything together.

If you are really most interested in (1) and (2), read “Prediction intervals” then skip forwards to “The fundamental theorem of prediction”.

Prediction intervals

A prediction interval2 captures the range in which a future observation is expected to fall. Bafflingly, not all statistical software generates prediction intervals automatically so it is necessary, I fear, to know how to calculate them from first principles. However, understanding the calculation is, in itself, instructive.

But I emphasise that prediction intervals rely on a presumption that what is being forecast is “an environment that is sufficiently regular to be predictable”, that the (residual) business process data is exchangeable. If that presumption fails then all bets are off and we have to rely on a Cardinal Newman analysis. Of course, when I say that “all bets are off”, they aren’t. You will still be held to your existing contractual commitments even though your confidence in achieving them is now devastated. More on that another time.

Sources of variation in predictions

In the particular case of linear regression we need further to break down the third Source of Uncertainty.

  2. Uncertainty arising from the unexplained (residual) variation.
  3. Technical statistical sampling error in the regression calculation.
    3A. Sampling error of the mean.
    3B. Sampling error of the slope.

Remember that we are, for the time being, assuming Source of Uncertainty (1) above can be disregarded. Let’s look at the other Sources of Uncertainty in turn: (2), (3A) and (3B).

Source of Variation (2) – Residual variation

We start with the Source of Uncertainty arising from the residual variation. This is the uncertainty caused by all the things we don’t know. We talked about this a lot in Part 2. We are content, for the moment, that they are sufficiently stable to form a basis for prediction. We call this common cause variation. This variation has variance s², where s is the residual standard deviation that will be output by your regression software.

[Figure: the residual variation about the fitted regression line]

Source of Variation (3A) – Sampling error in mean

To understand the next Source of Variation we need to know a little bit about how the regression is calculated. The calculations start off with the respective means of the X values (X̄) and of the Y values (Ȳ). Uncertainty in estimating the mean of the Ys is the next contribution to the global prediction uncertainty.

An important part of calculating the regression line is to calculate the mean of the Ys. That mean is subject to sampling error. The variance of the sampling error is the familiar result from the statistics service course.

s²/n

— where n is the number of pairs of X and Y. Obviously, as we collect more and more data this term gets more and more negligible.

[Figure: sampling error in the mean of the Ys]

Source of Variation (3B) – Sampling error in slope

This is a bit more complicated. Skip forwards if you are already confused. Let me first give you the equation for the variance of predictions referable to sampling error in the slope.

s²(X − X̄)²/SXX

This has now introduced the mysterious sum of squares, SXX. However, before we learn exactly what this is, we immediately notice two things.

  1. As we move away from the centre of the training data the variance gets larger.3
  2. As SXX gets larger the variance gets smaller.

The reason for the increasing sampling error as we move from the mean of X is obvious from thinking about how variation in slope works. The regression line pivots on the mean. Travelling further from the mean amplifies any disturbance in the slope.

[Figure: sampling error in the slope, which pivots about the mean of the Xs]

Let’s look at where SXX comes from. The sum of squares is calculated from the Xs alone without considering the Ys. It is a characteristic of the sampling frame that we used to train the model. We take the difference of each X value from the mean of X, and then square that distance. To get the sum of squares we then add up all those individual squares. Note that this is a sum of the individual squares, not their average.

[Table: worked calculation of SXX]

Two things then become obvious (if you think about it).

  1. As we get more and more data, SXX gets larger.
  2. As the individual Xs spread out over a greater range of X, SXX gets larger.
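A minimal sketch in Python (the function name is mine, not from any statistics package) shows the calculation and both of those properties:

```python
def sum_of_squares_xx(xs):
    """S_XX: the sum of squared deviations of the Xs from their mean.

    Note this is the *sum* of the squared deviations, not their average.
    """
    x_bar = sum(xs) / len(xs)
    return sum((x - x_bar) ** 2 for x in xs)

# More data makes S_XX larger...
assert sum_of_squares_xx([1, 2, 3]) < sum_of_squares_xx([1, 2, 3, 1, 2, 3])
# ...and so does spreading the Xs over a greater range.
assert sum_of_squares_xx([1, 2, 3]) < sum_of_squares_xx([0, 2, 4])
```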

What that (3B) term does emphasise is that even sampling error escalates as we exploit the edge of the original training data. As we extrapolate clear of the original sampling frame, the pure sampling error can quickly exceed even the residual variation.

Yet it is only a lower bound on the uncertainty in extrapolation. As we move away from the original range of Xs then, however happy we were previously with Source of Uncertainty (1), that the data was from “an environment that is sufficiently regular to be predictable”, then the question barges back in. We are now remote from our experience base in time and boundary. Nothing outside the original X-range will ever be a candidate for a comfort zone.

The fundamental theorem of prediction

Variances, generally, add up so we can sum the three Sources of Variation (2), (3A) and (3B). That gives the variance of an individual prediction, spred². By an individual prediction I mean that somebody gives me an X and I use the regression formula to give them the (as yet unknown) corresponding Ypred.

spred² = s² + s²/n + s²(X − X̄)²/SXX

It is immediately obvious that s² is common to all three terms. However, the second and third terms, the sampling errors, can be made as small as we like by collecting more and more data. Collecting more and more data will have no impact on the first term. That arises from the residual variation, the stuff we don’t yet understand, with variance s².

This, I say, is the fundamental theorem of prediction. The unexplained variation provides a hard limit on the precision of forecasts.

It is then a very simple step to convert the variance into a standard deviation, spred. This is the standard error of the prediction.4,5

spred = s √( 1 + 1/n + (X − X̄)²/SXX )

Now, in general, where we have a measurement or prediction z whose uncertainty can be characterised by a standard error u, there is an old trick for putting an interval round it. Since u is a measure of the variation in z, we can put an interval around z as a number of standard errors, z ± ku. Here, k is a constant of your choice. A prediction interval for the regression that generates prediction Ypred then becomes:

Ypred ± k spred

Choosing k=3 is very popular, conservative and robust.6,7 Other choices of k are available on the advice of a specialist mathematician.
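To make the calculation concrete, here is a minimal sketch in Python (the function and variable names are mine, not from any statistics package) that fits the regression by least squares and returns a k-standard-error prediction interval:

```python
import math

def fit_and_predict(xs, ys, x_new, k=3.0):
    """Least-squares fit of y on x, plus a k-standard-error prediction
    interval for a new individual observation at x_new."""
    n = len(xs)
    x_bar = sum(xs) / n
    y_bar = sum(ys) / n
    sxx = sum((x - x_bar) ** 2 for x in xs)
    sxy = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = y_bar - slope * x_bar
    # Residual standard deviation s (n - 2 degrees of freedom).
    resid_ss = sum((y - (intercept + slope * x)) ** 2 for x, y in zip(xs, ys))
    s = math.sqrt(resid_ss / (n - 2))
    y_pred = intercept + slope * x_new
    # Variance of an individual prediction: residual variation, plus
    # sampling error of the mean, plus sampling error of the slope.
    s_pred = s * math.sqrt(1 + 1 / n + (x_new - x_bar) ** 2 / sxx)
    return y_pred, (y_pred - k * s_pred, y_pred + k * s_pred)
```

Note how the (x_new − x̄)² term widens the interval as the prediction moves away from the centre of the training data.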

It was Shewhart himself who took this all a bit further and defined tolerance intervals which contain a given proportion of future observations with a given probability.8 They are very much for the specialist.

Source of Variation (1) – Special causes

But all that assumes that we are sampling from “an environment that is sufficiently regular to be predictable”, that the residual variation is solely common cause. We checked that out on our original training data but the price of predictability is eternal vigilance. It can never be taken for granted. At any time fresh causes of variation may infiltrate the environment, or become newly salient because of some sensitising event or exotic interaction.

The real trouble with this world of ours is not that it is an unreasonable world, nor even that it is a reasonable one. The commonest kind of trouble is that it is nearly reasonable, but not quite. Life is not an illogicality; yet it is a trap for logicians. It looks just a little more mathematical and regular than it is; its exactitude is obvious, but its inexactitude is hidden; its wildness lies in wait.

G K Chesterton

The remedy for this risk is to continue plotting the residuals, the differences between the observed value and, now, the prediction. This is mandatory.

[Figure: process behaviour chart of the regression residuals]
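A minimal sketch of that residual check in Python (the names are mine; a full process behaviour chart would also test for runs and trends, not just points beyond the limits):

```python
def flag_special_causes(observed, predicted, s, k=3.0):
    """Flag residuals beyond +/- k*s as signals of possible special causes.

    s is the residual standard deviation from the original regression fit.
    Returns the indices of the observations needing investigation.
    """
    return [i for i, (y, y_hat) in enumerate(zip(observed, predicted))
            if abs(y - y_hat) > k * s]

# Example: with s = 1, only the fourth observation (residual 5) signals.
assert flag_special_causes([10, 11, 9, 17], [10, 10, 10, 12], s=1.0) == [3]
```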

Whenever we observe a signal of a potential special cause it puts us on notice to protect the forecast-user because our ability to predict the future has been exposed as deficient and fallible. But it also presents an opportunity. With timely investigation, a signal of a possible special cause may provide deeper insight into the variation of the cause-system. That in itself may lead to identifying further factors to build into the regression and a consequential reduction in s².

It is reducing s², by progressively accumulating understanding of the cause-system and developing the model, that leads to more precise, and more reliable, predictions.

Notes

  1. Kahneman, D (2011) Thinking, Fast and Slow, Allen Lane, p240
  2. Hahn, G J & Meeker, W Q (1991) Statistical Intervals: A Guide for Practitioners, Wiley, p31
  3. In fact s²/SXX is the sampling variance of the slope. The standard error of the slope is, notoriously, s/√SXX. A useful result sometimes. It is then obvious from the figure how variation in slope is amplified as we travel farther from the centre of the Xs.
  4. Draper, N R & Smith, H (1998) Applied Regression Analysis, 3rd ed., Wiley, pp81-83
  5. Hahn & Meeker (1991) p232
  6. Wheeler, D J (2000) Normality and the Process Behaviour Chart, SPC Press, Chapter 6
  7. Vysochanskij, D F & Petunin, Y I (1980) “Justification of the 3σ rule for unimodal distributions”, Theory of Probability and Mathematical Statistics 21: 25–36
  8. Hahn & Meeker (1991) p231

On leadership and the Chinese contract

Between 1958 and 1960, 67 of the 120 inhabitants of the Chinese village of Xiaogang starved to death. But Mao Zedong’s cruel and incompetent collectivist policies continued to be imposed into the 1970s. In December 1978, 18 of Xiaogang’s leading villagers met secretly and illegally to find a way out of borderline starvation and grinding poverty. The first person to speak up at the meeting was Yan Jingchang. He suggested that the village’s principal families clandestinely divide the collective farm’s land among themselves. Then each family should own what it grew. Jingchang drew up an agreement on a piece of paper for the others to endorse. Then he hid it in a bamboo tube in the rafters of his house. Had it been discovered Jingchang and the village would have suffered brutal punishment and reprisal as “counter-revolutionaries”.

The village prospered under Jingchang’s structure. During 1979 the village produced more than it had in the previous five years. That attracted the attention of the local Communist Party chief who summoned Jingchang for interrogation. Jingchang must have given a good account of what had been happening. The regional party chief became intrigued at what was going on and prepared a report on how the system could be extended across the whole region.

Mao had died in 1976 and, amid the emerging competitors for power, it was still uncertain as to how China would develop economically and politically. By 1979, Deng Xiaoping was working his way towards the effective leadership of China. The report into the region’s proposals for agricultural reform fell on his desk. His contribution to the reforms was that he did nothing to stop them.

I have often found the idea of leadership a rather dubious one and wondered whether it actually described anything. It was, I think, Goethe who remarked that “When an idea is wanting, a word can always be found to take its place.” I have always been tempted to suspect that that was the case with “leadership”. However, the Jingchang story did make me think.1 If there is such a thing as leadership then this story exemplifies it and it is worth looking at what was involved.

Personal risk

This leader took personal risks. Perhaps to do otherwise is to be a mere manager. A leader has, to use the graphic modern idiom, “skin in the game”. The risk could be financial or reputational, or to liberty and life.

Luck

Luck is the converse of risk. Real risks carry the danger of failure and the consequences thereof. Jingchang must have been aware of that. Napoleon is said to have complained, “I have plenty of clever generals but just give me a lucky one.”2 Had things turned out differently with the development of Chinese history, the personalities of the party officials or Deng’s reaction, we would probably never have heard of Jingchang. I suspect though that the history of China since the 1970s would not have been very different.

The more I practice, the luckier I get.

Gary Player
South African golfer

Catalysing alignment

It was Jingchang who drew up the contract, who crystallised the various ideas, doubts, ambitions and thoughts into a written agreement. In law we say that a valid contract requires a consensus ad idem, a meeting of minds. Jingchang listened to the emerging appetite of the other villagers and captured it in a form in which all could invest. I think that is a critical part of leadership. A leader catalyses alignment and models constancy of purpose.

However, this sort of leadership may not be essential in every system. Management scientists are enduringly fascinated by The Morning Star Company, a California tomato grower that functions without any conventional management. The particular needs and capabilities of the individuals interact to create an emergent order that evolves and responds to external drivers. Austrian economist Friedrich Hayek coined the term catallaxy for a self-organising system of voluntary co-operation and explained how such a thing could arise and sustain itself, and what its benefits to society were.3

But sometimes the system needs the spark of a leader like Jingchang who puts himself at risk and creates a vivid vision of the future state against which followers can align.

Deng kept out of the way. Jingchang put himself on the line. The most important characteristic of leadership is the sagacity to know when the system can manage itself and when to intervene.

References

  1. I have this story from Matt Ridley (2015) The Evolution of Everything, Fourth Estate
  2. Apocryphal I think.
  3. Hayek, F A (1982) Law, Legislation, and Liberty, vol.2, Routledge, pp108–9

Imagination, data and leadership

I had an intriguing insight into the nature of imagination the other evening when I was watching David Eagleman’s BBC documentary The Brain which you can catch on iPlayer until 27 February 2016 if you have a UK IP address.

Eagleman told the strange story of Henry Molaison. Molaison suffered from debilitating epilepsy following a bicycle accident when he was nine years old. At age 27, Molaison underwent radical brain surgery that removed, all but completely, his hippocampi. The intervention stabilised the epilepsy but left Molaison’s memory severely impaired. Though he could recall his childhood, Molaison had no recall of events in the years leading up to his surgery and was unable to create new long-term memories. The case was important evidence for the theory that the hippocampus is critical to memory function. Molaison, having lost his, was profoundly compromised as to recall.

But Eagleman’s analysis went further and drew attention to a passage in an interview with Molaison later in his life.1 Though his presenting symptoms post-intervention were those of memory loss, Molaison also encountered difficulty in talking about what he would do the following day. Eagleman advances the theory that the hippocampus is critical, not only to memory, but to imagining the future. The systems that create memories are common to those that generate a model by which we can forecast, predict and envision novel outcomes.

I blogged about imagination back in November and how it was pivotal to core business activities from invention and creativity to risk management and root cause analysis. If Eagleman’s theory about the entanglement of memory and imagination is true then it might have profound implications for management. Perhaps our imagination will only function as well as our memory. That was, apparently, the case with Molaison. It could just be that an organisation’s ability to manage the future depends upon the same systems as those by which it critically captures the past.

That chimes with a theory of innovation put forward by W Brian Arthur of the Santa Fe Institute.2 Arthur argues that purportedly novel inventions are no more than combinations of known facts. There are no great leaps of creativity, just the incremental variation of a menagerie of artifacts and established technologies. Ideas similar to Arthur’s have been advanced by Matt Ridley,3,4 and Steven Berlin Johnson.5 Only mastery of the present exposes the opportunities to innovate. They say.

Data

This all should be no surprise to anybody experienced in business improvement. Diligent and rigorous criticism of historical data is the catalyst of change and the foundation of realising a vivid future. This is a good moment to remind ourselves of the power of the process behaviour chart in capturing learning and creating an organisational memory.

[Figure: a generic process behaviour chart]

The process behaviour chart provides a cogent record of the history of operation of a business process, its surprises and disappointments, existential risks and epochs of systematic productivity. It records attempted business solutions, successful, failed, temporary and partial work-rounds. It segregates signal from noise. It suggests realistic bounds on prediction. It is the focus of inclusive discussion about what the data means. It is the live report of experimentation and investigation, root cause analysis and problem solving. It matches data with its historical context. It is the organisation’s memory of development of a business process, and the people who developed it. It is the basis for creating the future.

If you are not familiar with how process behaviour charts work in this context then have a look at Don Wheeler’s example of A Japanese Control Chart.6
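For readers who want the arithmetic behind such a chart, the usual individuals (XmR) chart places natural process limits at the mean plus or minus 2.66 times the mean moving range. A minimal sketch in Python, with names of my own choosing:

```python
def xmr_limits(data):
    """Natural process limits for an individuals (XmR) chart.

    Limits are x-bar +/- 2.66 * mean moving range, where the moving
    range is the absolute difference between successive observations.
    """
    x_bar = sum(data) / len(data)
    moving_ranges = [abs(b - a) for a, b in zip(data, data[1:])]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    return x_bar - 2.66 * mr_bar, x_bar + 2.66 * mr_bar
```

Points outside those limits, on this account, are the surprises worth recording alongside their historical context.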

Leadership

Tim Harford tries to take the matter further.7 On Harford’s account of invention, “trial and error” consistently outperforms “expert leadership” through a Darwinian struggle of competing ideas. The successful innovations, Harford says, propagate by adoption and form an ecology of further random variation, out of which the best ideas emergently repeat the cycle of birth and death. Of course, Leo Tolstoy wrote War and Peace, his “airport novel” avant la lettre (also currently being dramatised by the BBC), to support exactly this theory of history. In Tolstoy’s intimate descriptions of the Battles of Austerlitz and Borodino, combatants lose contact with their superiors, battlefields are obscured by smoke, hiding them from the commanding generals, and individuals act on impulse and in spite of field discipline. How, Tolstoy asked in terms, could anyone claim to be the organising intelligence of victory or the culpable author of defeat?

However, I think that a view of war at odds with Tolstoy’s is found in the career of General George Marshall.8 Marshall rose to the rank of General of the Army of the USA as an expert in military logistics rather than as a commander in the field. Reading a biography of Marshall presents an account of war as a contest of supply chains. The events of the theatre of operations may well be arbitrary and capricious. It was the delivery of superior personnel and materiel to the battlefield that would prove decisive. That does not occur without organisation and systematic leadership. I think.

Harford and the others argue that, even were the individual missing from history, the innovation would still have occurred. But even though it could have been anyone, it still had to be someone. And what that someone had to provide was leadership to bring the idea to market or into operation. We would still have motor cars without Henry Ford and tablet devices without Steve Jobs, but it would have been two other names, two other people who had put themselves on the line to create something out of nothing.

In my view, the evolutionary model of innovation is interesting but stretches a metaphor too far. Innovation demands leadership. The history of barbed wire is instructive.9 In May 1873, at a county fair in Illinois, Henry B Rose displayed a comical device to prevent cattle beating down primitive fencing, a “wooden strip with metallic points”. The device hung round the cattle’s horns and any attempts to butt the fence drove the spikes into the beast’s head. It didn’t catch on but at the fair that day were Joseph Glidden, Isaac L Ellwood and Jacob Haish. The three went on, within a few months, each to invent barbed wire. The winning memes often come from failed innovation.

Leadership is critical, not only in scrutinising innovation but in organising the logistics that will bring it to market.10 More fundamentally, leadership is pivotal in creating the organisation in which diligent criticism of historical data is routine and where it acts as a catalyst for innovation.11

References

  1. http://www.sciencemuseum.org.uk/visitmuseum_OLD/galleries/who_am_i/~/media/8A897264B5064BC7BE1D5476CFCE50C5.ashx, retrieved 29 January 2016, at p5
  2. Arthur, W B (2009) The Nature of Technology: What it is and How it Evolves, The Free Press/ Penguin Books.
  3. Ridley, M (2010) The Rational Optimist, Fourth Estate
  4. — (2015) The Evolution of Everything, Fourth Estate
  5. Johnson, S B (2010) Where Good Ideas Come From: The Seven Patterns of Innovation, Penguin
  6. Wheeler, D J (1992) Understanding Statistical Process Control, SPC Press
  7. Harford, T (2011) Adapt: Why Success Always Starts with Failure, Abacus
  8. Cray, E (2000) General of the Army: George C. Marshall, Soldier and Statesman, Cooper Square Press
  9. Krell, A (2002) The Devil’s Rope: A Cultural History of Barbed Wire, Reaktion Books
  10. Armytage, W H G (1976) A Social History of Engineering, 4th ed., Faber
  11. Nonaka, I & Takeuchi, H (1995) The Knowledge-Creating Company: How Japanese Companies Create the Dynamics of Innovation, Oxford University Press

First thoughts on VW’s emissions debacle

It is far too soon to tell exactly what went on at VW, in the wider motor industry, within the respective regulators and within governments. However, the way that the news has come out, and the financial and operational impact that it is likely to have, are enough to encourage all enterprises to revisit their risk management, governance and customer reputation management policies. Corporate scandals are not a new phenomenon: from the collapse of the Medici Bank in 1494, through Warren Hastings’ alleged despotism in the British East India Company, down to the FIFA corruption allegations that broke earlier this year. Organisational scandals are as old as organisations. The bigger the organisations get, the bigger the scandals are going to be.

Normal Scandals

In 1984, Charles Perrow published his pessimistic analysis of what he saw as the inevitability of Normal Accidents in complex technologies. I am sure that there is a market for a book entitled Normal Scandals: Living with High-Risk Organisational Structures. But I don’t share Perrow’s pessimism. Life is getting safer. Let’s adopt the spirit of continual improvement to make investment safer too. That’s investment for those of us trying to accumulate a modest portfolio for retirement. Those who aspire to join the super rich will still have to take their chances.

I fully understand that organisations sometimes have to take existential risks to stay in business. The development of Rolls-Royce’s RB211 aero-engine well illustrates what happens when a manufacturer finds itself with proven technologies that are inadequately aligned with the Voice of the Customer. The market will not wait while the business catches up. There is time to develop a response but only if that solution works first time. In the case of Rolls-Royce it didn’t and insolvency followed. However, there was no alternative but to try.

What happened at VW? I just wonder whether the Iron Law of Oligarchy was at work. To imagine that a supervisory board sits around discussing the details of engine management software is naïve. In fact, it was the RB211 crisis that exposed such signal failures of management to delegate. Do VW’s woes flow from a decision taken by a middle manager, or a blind eye turned, that escaped an inadequate system of governance? Perhaps a short-term patch in anticipation of an ultimate solution?

Cardinal Newman’s contribution to governance theory

John Henry Newman learned about risk management the hard way. Newman was an English Anglican divine who converted to the Catholic Church in 1845. In 1850 Newman became involved in the controversy surrounding Giacinto Achilli, a priest expelled from the Catholic Church for rape and sexual assault but who was making a name for himself in England as a champion of the protestant evangelical cause. Conflict between Catholic and protestant was a significant feature of the nineteenth century English political landscape. Newman was minded to ensure that Achilli’s background was widely known. He took legal advice from counsel James Hope-Scott about the risks of a libel action from Achilli. Hope-Scott was reassuring and Newman published. The publication resulted in Newman’s prosecution and conviction for criminal libel.

Speculation about what legal advice VW have received as to their emissions strategy would be inappropriate. However, I trust that, if they imagined they were externalising any risk thereby, they checked the value of their legal advisors’ professional indemnity insurance.

Newman certainly seems to have learned his lesson and subsequently had much to teach the modern world about risk management and governance. After the Achilli trial Newman started work on his philosophical apologia, The Grammar of Assent. One argument in that book has had such an impact on modern thinking about evidence and probability that it was quoted in full by Bruno de Finetti in Volume 1 of his 1974 Theory of Probability.

Suppose a thesis (e.g. the guilt of an accused man) is supported by a great deal of circumstantial evidence of different forms, but in agreement with each other; then even if each piece of evidence is in itself insufficient to produce any strong belief, the thesis is decisively strengthened by their joint effect.

De Finetti set out the detailed mathematics and called this the Cardinal Newman principle. It is fundamental to the modern concept of borrowing strength.
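The mathematics can be sketched in odds form: each independent piece of evidence contributes a likelihood ratio, and Bayes’ theorem multiplies them into the prior odds. A minimal, hypothetical illustration in Python, in which ten individually weak pieces of evidence become decisive jointly:

```python
def posterior_odds(prior_odds, likelihood_ratios):
    """Bayes' theorem in odds form: posterior odds equal the prior
    odds multiplied by the likelihood ratio of each independent
    piece of evidence."""
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds

def odds_to_probability(odds):
    """Convert odds in favour of a thesis to a probability."""
    return odds / (1 + odds)

# Ten weak pieces of evidence, each only twice as likely under the
# thesis as under its negation, starting from prior odds of evens.
odds = posterior_odds(1.0, [2.0] * 10)
print(odds)                     # 1024.0
print(odds_to_probability(odds))  # about 0.999
```

No single factor of 2 produces strong belief, yet their joint effect takes the probability of the thesis above 99.9%, which is exactly Newman’s point.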

The standard means of defeating governance are all well known to oligarchs: regulatory capture; “stake-driving”, taking actions outside the oversight of governance that will not be undone without engaging the regulator in controversy; and “whipsawing”, promising A that approval will be forthcoming from B while telling B that A has relied upon her anticipated, and surely “uncontroversial”, approval. There are plenty of others. Robert Caro’s biography The Power Broker: Robert Moses and the Fall of New York sets out the locus classicus.

Governance functions need to exploit the borrowing strength of diverse data sources to identify misreporting and misconduct. And continually improve how they do that. The answer is trenchant and candid criticism of historical data. That’s the only data you have. A rigorous system of goal deployment and mature use of process behaviour charts delivers a potent stimulus to reluctant data sharers.

Things and actions are what they are and the consequences of them will be what they will be: why then should we desire to be deceived?

Bishop Joseph Butler


Amazon II: The sales story

I recently commented on an item in the New York Times about Amazon’s pursuit of “rigorous data driven management”. Dina Vaccari, one of the employees cited in the original New York Times article, has taken the opportunity to tell her own story in this piece. I found it enlightening as to what goes on at Amazon. Of course, it is only another anecdote from a former employee, a data source of notoriously limited quality. However, as Arthur Koestler once observed:

Without the hard little bits of marble which are called ‘facts’ or ‘data’ one cannot compose a mosaic; what matters, however, are not so much the individual bits, but the successive patterns into which you arrange them, then break them up and rearrange them.

Vaccari’s role was to sell Amazon gift cards. The measure of her success was how many she sold. Vaccari had read Timothy Ferriss’ transgressive little book The 4-Hour Workweek. She decided to employ a subcontractor from Chennai, India to generate for her 100 leads daily for $10. The idea worked out well. Another illustration of the law of comparative advantage.

Vaccari then emailed the leads, not with the standard email that she had been instructed to use by Amazon, but with a formula of her own. Vaccari claims a 10 to 50% response rate. She then followed up using her traditional sales skills, exceeding her sales target and besting the rest of the sales team.

That drew attention from her supervisor. Not unnaturally, he wanted to capture good practice. When he saw Vaccari’s non-standard email he was critical. We now know that process discipline is important at Amazon. Nothing wrong with that, though if you really want to exercise your mind on the topic you would do well to watch the Hollywood movie Crimson Tide.

What is more interesting is that, when Vaccari answered the criticism by pointing to her response and sales figures, the supervisor retorted that this was “just luck”.

So there we have it. Somebody made a change and the organisation couldn’t agree whether or not it was an improvement. Vaccari said she saw a signal. Her supervisor said that it was just noise.

The supervisor’s response was particularly odd as he was shadowing Vaccari because of his favourable perception of her performance. It is as though his assessment of whether Vaccari’s results were signal or noise depended on his approval or disapproval of how she had achieved them. It certainly seems that this is not normative behaviour at Amazon: Vaccari criticises her supervisor for failing to display Amazon Leadership Principles. The exchange illustrates what happens if an organisation generates data but is then unable to turn it into a reliable basis for action, because there is no systematic and transparent method for creating a consensus around what is signal and what is noise. Vaccari’s exchange with her supervisor is reassuring in that both recognised that there is an important distinction. Vaccari knew that a signal should be a tocsin for action, in this case to embed a successful innovation through company-wide standardisation. Her supervisor knew that to mistake noise for a signal would lead to degraded process performance. Or at least he hid behind that to project his disapproval. Vaccari’s recall of the incident makes her cringe. Numbers aren’t just about numbers.
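A systematic and transparent method for reaching that consensus is exactly what a process behaviour chart supplies: agree a baseline, compute its natural limits, and test the new result against them. A hypothetical sketch in Python, using the individuals-chart convention of mean plus or minus 2.66 times the mean moving range; the response rates below are invented for illustration, not Vaccari’s actual figures:

```python
def natural_limits(history):
    """Natural process limits from baseline data: the mean plus or
    minus 2.66 times the mean moving range (XmR convention)."""
    mean = sum(history) / len(history)
    moving_ranges = [abs(b - a) for a, b in zip(history, history[1:])]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    return mean - 2.66 * mr_bar, mean + 2.66 * mr_bar

def is_signal(history, new_value):
    """True if the new result falls outside the noise of the
    baseline process, i.e. it demands an explanation."""
    lower, upper = natural_limits(history)
    return new_value < lower or new_value > upper

# Invented baseline response rates for the standard email,
# followed by a new rate achieved with a non-standard one.
baseline = [0.02, 0.03, 0.02, 0.04, 0.03, 0.02]
print(is_signal(baseline, 0.30))  # True: not "just luck"
print(is_signal(baseline, 0.04))  # False: routine variation
```

With an agreed calculation like this on the table, “just luck” stops being a matter of the supervisor’s mood and becomes a question the data can answer.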

Trenchant data criticism, motivated by the rigorous segregation of signal and noise, is the catalyst of continual improvement in sales, product quality, economic efficiency and market agility.

The goal is not to be driven by data but to be led by the insights it yields.