The minute you have a back-up plan, you’ve admitted you’re not going to succeed.
Good advice? I think not! Let’s review some science.
Confidence and trustworthiness
As far back as the 1970s, psychologists carried out a series of experiments on individual confidence.1 They took a sample of people and set each of them a series of general knowledge questions, to be answered independently. The questions were things like "What is the capital city of France?" Respondents not only had to do their best to answer each question but also to state the probability that they had answered correctly.
The headline result was that, of all those answers in the aggregate about which people said they were 100% sure they had answered correctly, more than 20% were in fact wrong.
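The calibration exercise behind that result can be sketched in a few lines. This is a minimal illustration with invented data, not the researchers' actual method or figures: group the answers by stated confidence and compare against the observed hit rate.

```python
# Invented records: each pairs a respondent's stated probability of being
# correct with whether the answer actually was correct.
responses = [
    (1.00, True), (1.00, True), (1.00, False), (1.00, True), (1.00, False),
    (0.80, True), (0.80, False), (0.80, True), (0.80, True), (0.80, True),
]

def hit_rate(responses, stated):
    """Observed proportion correct among answers given at `stated` confidence."""
    hits = [correct for p, correct in responses if p == stated]
    return sum(hits) / len(hits)

# With these invented numbers, answers offered with 100% confidence were
# right only 60% of the time: the calibration gap the researchers reported.
print(hit_rate(responses, 1.00))  # → 0.6
```

A well-calibrated respondent would show a hit rate matching each stated probability; the gap at 100% is the overconfidence signature.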
Now, we know that people who go around assigning 100% probabilities to things that happen only 80% of the time are setting themselves up for inevitable financial loss.2 Yet this sort of overconfidence in the quality and reliability of our individual, internal cognitive processes has been identified repeatedly, over multiple experiments and in sundry real-life situations.
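De Finetti's point about financial loss can be made with a little arithmetic. The stakes below are invented purely for illustration: someone genuinely 100% sure should happily risk a large stake for a small gain, but if the event in fact occurs only 80% of the time, the bet has negative expected value.

```python
# Invented stakes: risking 100 to win 5 is rational only if you are all but
# certain. With a true frequency of 0.8 the bet loses money on average.
true_prob, win, stake = 0.8, 5, 100
expected_value = true_prob * win - (1 - true_prob) * stake
print(round(expected_value, 2))  # → -16.0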
There is even a theory that the only people whose probabilities are reliably calibrated against frequencies are those suffering from clinically diagnosed depression. The theory of depressive realism remains, however, controversial.
Psychologists like Daniel Kahneman have emphasised that human reasoning is limited by bounded rationality. All our cognitive processes are built on individual experience, knowledge, cultural assumptions, habits of interpreting data (good, bad and indifferent) … everything. All those things are aggregated imperfectly, incompletely and partially. Nobody can take the quality of their own judgments for granted.
Kahneman points out that, in particular, wherever individuals engage sophisticated techniques of analysis and rationalisation, and especially those tools that require long experience, education and training to acquire, there is overconfidence in outcomes.3 Kahneman calls this the illusion of validity. The more thoroughly we construct an internally consistent narrative for ourselves, the more we are seduced by it. And it is instinctive for humans to seek such cogent models of experience and aspiration. Kahneman says:4
Confidence is a feeling, which reflects the coherence of the information and the cognitive ease of processing it. It is wise to take admissions of uncertainty seriously, but declarations of high confidence mainly tell you that an individual has constructed a coherent story in their mind, not necessarily that the story is true.
If illusion is the spectre of confidence, then having a Plan B seems like a good idea. Of course, Holmes is correct that having a Plan B will tempt you to use it. When disappointments accumulate, in escalating costs, stagnating revenues or emerging political risks, it is very tempting to seek the repose of a lesser ambition or even a managed mitigation of residual losses.
But to proscribe a Plan B in order to motivate success is to display the risk appetite of a kamikaze pilot. Sometimes reality tells you that your business plan is predicated on a false prospectus. Given the science of overconfidence and the narrative of bounded rationality, we know that this will happen a lot of the time.
Holmes is also correct that disappointment is, in itself, no reason to change plan. What she neglects is that there is a phenomenon that does legitimately invite change: a surprise. It is a surprise that alerts us to an inconsistency between the real world and our design. A surprise ought to make us go back to our working business plan and examine its assumptions against the real-world data. A switch to Plan B is not inevitable. There may be other means of mitigation: Act, Adapt or Abandon. The surprise could even be an opportunity to be grasped. A Plan B doesn't have to be negative.
How then are we to tell a surprise from a disappointment? With a Shewhart chart, of course. The chart has the benefits that:
- Narrative building is shared not personal.
- Narratives are challenged with data and context.
- Surprise and disappointment are distinguished.
- Predictive power is tested.
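The surprise/disappointment distinction can be sketched with a minimal individuals (XmR) chart, the simplest Shewhart-style chart. The monthly figures below are invented for illustration; the 2.66 multiplier is the standard XmR constant. Points within the natural process limits are routine variation (mere disappointment, perhaps); a point beyond them is a surprise that invites re-examination of the plan's assumptions.

```python
# Minimal XmR (individuals) chart sketch with invented monthly figures.
data = [52, 48, 50, 47, 53, 49, 51, 46, 50, 34]

mean = sum(data) / len(data)

# Moving ranges: absolute differences between consecutive observations.
moving_ranges = [abs(a - b) for a, b in zip(data, data[1:])]
mr_bar = sum(moving_ranges) / len(moving_ranges)

# 2.66 is the standard XmR constant (3 / d2, with d2 = 1.128 for n = 2).
upper = mean + 2.66 * mr_bar
lower = mean - 2.66 * mr_bar

# Observations beyond the natural process limits signal a surprise.
surprises = [x for x in data if x > upper or x < lower]
print(surprises)  # → [34]
```

Here the final observation falls below the lower limit, so the chart flags it as a signal rather than noise, which is exactly the cue to go back and test the plan's assumptions.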
Analysis versus “gut feel”
I suppose that what lies behind Holmes' quote is the theory that commitment and belief can, in themselves, overcome opposing forces, and that a commitment born of emotion and instinctive confidence is all the more potent. Here is an old LinkedIn post that caught my eye a while ago, celebrating the virtues of "gut feel".
The author believed that gut feel comes from experience, and that individuals with long exposure to a complex world should be able to trump data with their intuition. Intuition forms part of what Kahneman called System 1 thinking, which he contrasted with the System 2 thinking that we engage in when we perform careful and lengthy data analysis (we hope).5 System 1 thinking can be valuable. Philip Tetlock, a psychologist who researched the science of forecasting, noted this.6
Whether intuition generates delusion or insight depends on whether you work in a world full of valid cues you can unconsciously register for future use.
In fact, whether the world is full of the sorts of valid cues that support useful predictions is exactly the question that Shewhart charts are designed to answer. Whether we make decisions on data or on gut feel, either can mislead us with the illusion of validity.
Again, what the chart supports is the continual testing of the reliability and utility of intuitions. Gut feel is not forbidden, but be sure that successive predictions and revisions will be recorded and subjected to the scrutiny of the Shewhart chart. Impressive records of forecasting will form the armature of a continually developing shared narrative of organisational excellence. Unimpressive forecasters will have to yield ground.
- Lichtenstein, S et al. (1982) “Calibration of probabilities: The state of the art to 1980” in Kahneman, D et al. Judgment Under Uncertainty: Heuristics and Biases, Cambridge University Press
- De Finetti, B (1974) Theory of Probability: A Critical Introductory Treatment, Vol.1, trans. Machi, A & Smith, A; Wiley, p113
- Kahneman, D (2011) Thinking, Fast and Slow, Allen Lane, p217
- — p212
- — pp19-24
- Tetlock, P (2015) Superforecasting: The Art and Science of Prediction, Crown Publishing, Kindle loc 1031