How to predict floods

I started my grown-up working life on a project seeking to predict extreme ocean currents off the north west coast of the UK. As a result I follow environmental disasters very closely. I fear that it’s natural that incidents in my own country have particular salience. I don’t want to minimise disasters elsewhere in the world when I talk about recent flooding in the north of England. It’s just that they are close enough to home for me to get a better understanding of the essential features.

The causes of the flooding are multi-factorial and many of the factors are well beyond my expertise. However, The Times (London) reported on 28 December 2015 that “Some scientists say that [the UK Environment Agency] has been repeatedly caught out by the recent heavy rainfall because it sets too much store by predictions based on historical records” (p7). Setting store by predictions based on historical records is very much where my hands-on experience of statistics began.

The starting point of prediction is extreme value theory, developed by Sir Ronald Fisher and L H C Tippett in the 1920s. Extreme value analysis (EVA) aims to put probabilistic bounds on events outside the existing experience base by predicating that such events follow a special form of probability distribution. Historical data can be used to fit such a distribution using the usual statistical estimation methods. Prediction is then based on a double extrapolation: firstly in the exact form of the tail of the extreme value distribution and secondly from the past data to future safety. As the old saying goes, “Interpolation is (almost) always safe. Extrapolation is always dangerous.”
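To make the double extrapolation concrete, here is a minimal sketch in Python, on invented data, of the sort of calculation involved: fit a generalised extreme value distribution to a series of annual maximum river levels and read off a 100-year return level. Nothing here comes from any real flood study; the variable names and figures are illustrative only.

```python
# Sketch: fit a generalised extreme value (GEV) distribution to annual maxima
# and extrapolate to a 100-year return level. Data are invented for illustration.
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(1)
annual_max_level_m = rng.gumbel(loc=2.0, scale=0.4, size=50)  # 50 years of hypothetical annual maxima

# Fit the GEV by maximum likelihood (first extrapolation: the assumed tail form).
shape, loc, scale = genextreme.fit(annual_max_level_m)

# 100-year return level: the level exceeded with probability 1/100 in any year
# (second extrapolation: from past data to future behaviour).
return_level_100yr = genextreme.isf(1 / 100, shape, loc=loc, scale=scale)
print(f"Estimated 100-year level: {return_level_100yr:.2f} m")
```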

EVA rests on some non-trivial assumptions about the process under scrutiny. No statistical method yields more than was input in the first place. If we are being allowed to extrapolate beyond the experience base then there are inevitably some assumptions. Where the real world process doesn’t follow those assumptions the extrapolation is compromised. To some extent there is no cure for this other than to come to a rational decision about the sensitivity of the analysis to the assumptions and to apply a substantial safety factor to the physical engineering solutions.

One of those assumptions also plays to the dimension of extrapolation from past to future. Statisticians often demand that the data be independent and identically distributed. However, that is a weird thing to demand of data. Real world data is hardly ever independent as every successive observation provides more information about the distribution and alters the probability of future observations. We need a better idea to capture process stability.

Historical data can only be projected into the future if it comes from a process that is “sufficiently regular to be predictable”. That regularity is effectively characterised by the property of exchangeability. Deciding whether data is exchangeable demands not only statistical evidence of its past regularity but also domain knowledge of the physical process that it measures. The exchangeability must continue into the predictable future if historical data is to provide any guide. In the matter of flooding, knowledge of hydrology, climatology, planning, engineering and law is essential, as is local knowledge about economics and infrastructure changes already in development. Exchangeability is always a judgment. And a critical one.

Predicting extreme floods is a complex business and I send my good wishes to all involved. It is an example of something that is essentially a team enterprise as it demands the co-operative inputs of diverse sets of experience and skills.

In many ways this is an exemplary model of how to act on data. There is no mechanistic process of inference that stands outside a substantial knowledge of what is being measured. The secret of data analysis, which often hinges on judgments about exchangeability, is to visualize the data in a compelling and transparent way so that it can be subjected to collaborative criticism by a diverse team.

Imagine …

No, not John Lennon’s dreary nursery rhyme for hippies.

In his memoir of the 2007-2008 banking crisis, The Courage to Act, Ben Bernanke wrote about his surprise when the crisis materialised.

We saw, albeit often imperfectly, most of the pieces of the puzzle. But we failed to understand – “failed to imagine” might be a better phrase – how those pieces would fit together to produce a financial crisis that compared to, and arguably surpassed, the financial crisis that ushered in the Great Depression.

That captures the three essentials of any attempt to foresee a complex future.

  • The pieces
  • The fit
  • Imagination

In any well managed organisation, “the pieces” consist of the established Key Performance Indicators (KPIs) and leading measures. Diligent and rigorous criticism of historical data using process behaviour charts allows departures from stability to be identified timeously. A robust and disciplined system of management and escalation enables an agile response when special causes arise.
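For illustration, here is a minimal sketch of the individuals (XmR) chart arithmetic behind that kind of criticism of historical data, on invented KPI figures; the 2.66 factor is the usual constant for turning the mean moving range into natural process limits.

```python
# Sketch: natural process limits for an individuals (XmR) chart, using invented KPI data.
import numpy as np

kpi = np.array([52, 49, 55, 51, 48, 53, 50, 54, 49, 63, 52, 51])  # hypothetical weekly KPI

centre = kpi.mean()
moving_ranges = np.abs(np.diff(kpi))
mr_bar = moving_ranges.mean()

# 2.66 is the conventional constant relating the mean moving range to 3-sigma limits.
upper_limit = centre + 2.66 * mr_bar
lower_limit = centre - 2.66 * mr_bar

# Points beyond the natural process limits are signals of special causes.
signals = [(i, x) for i, x in enumerate(kpi) if x > upper_limit or x < lower_limit]
print(f"Centre {centre:.1f}, limits ({lower_limit:.1f}, {upper_limit:.1f}), signals: {signals}")
```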

Of course, “the fit” demands a broader view of the data, recognising interactions between factors and the possibility of non-simple global responses remote from a locally well behaved response surface. As the old adage goes, “Fit locally. Think globally.” This is where the Cardinal Newman principle kicks in.

“The pieces” and “the fit”, taken at their highest, yield a map of historical events with some limited prediction as to how key measures will behave in the future. Yet it is common experience that novel factors persistently invade. The “bow wave” of such events will not fit a recognised pattern where there will be a ready consensus as to meaning, mechanism and action. These are the situations where managers are surprised by rapidly emerging events, only to protest, “We never imagined …”.

Nassim Taleb’s analysis of the financial crisis hinged on such surprises and took him back to the work of British economist G L S Shackle. Shackle had emphasised the importance of imagination in economics. Put at its most basic, any attempt to assign probabilities to future events depends upon the starting point of listing the alternatives that might occur. Statisticians call it the sample space. If we don’t imagine some specific future we won’t bother thinking about the probability that it might come to be. Imagination is crucial to economics but it turns out to be much more pervasive as an engine of improvement than is at first obvious.

Imagination and creativity

Frank Whittle had to imagine the jet engine before he could bring it into being. Alan Turing had to imagine the computer. They were both fortunate in that they were able to test their imagination by construction. It was all realised in a comparatively short period of time. Whittle’s and Turing’s respective imaginations were empirically verified.

What is now proved was once but imagined.

William Blake

Not everyone has had the privilege of seeing their imagination condense into reality within their lifetime. In 1946, Sir George Paget Thomson and Moses Blackman imagined a plentiful source of inexpensive civilian power from nuclear fusion. As of writing, prospects of a successful demonstration seem remote. Frustratingly, as far as I can see, the evidence still refuses to tip the balance as to whether future success is likely or failure inevitable.

Something as elusive as imagination can have a testable factual content. As we know, not all tests are conclusive.

Imagination and analysis

Imagination turns out to be essential to something as prosaic as Root Cause Analysis. And essential in a surprising way. Establishing an operative cause of a past event is an essential task in law and engineering. It entails the search for a counterfactual: not what happened but what might have happened to avoid the regrettable outcome. That is inevitably an exercise in imagination.

In almost any interesting situation there will be multiple imagined pasts. If there is only one then it is time to worry. Sometimes it is straightforward to put our ideas to the test. This is where the Shewhart cycle comes into its own. In other cases we are in the realms of uncomfortable science. Sometimes empirical testing is frustrated because the trail has gone cold.

The issues of counterfactuals, Root Cause Analysis and causation have been explored by psychologists Daniel Kahneman [1] and Ruth Byrne [2] among others. Reading their research is a corrective to the optimistic view that Root Cause Analysis is some sort of inevitably objective process. It is distorted by all sorts of heuristics and biases. Empirical testing is vital, if only through finding some data with borrowing strength.

Imagine a millennium bug

In 1984, Jerome and Marilyn Murray published Computers in Crisis in which they warned of a significant future risk to global infrastructure in telecommunications, energy, transport, finance, health and other domains. It was exactly those areas where engineers had been enthusiastic to exploit software from the earliest days, often against severe constraints of memory and storage. That had led to the frequent use of just two digits to represent a year, “71” for 1971, say. From the 1970s, software became more commonly embedded in devices of all types. As the year 2000 approached, the Murrays envisioned a scenario where the dawn of 1 January 2000 was heralded by multiple system failures as software registers reset to the year 1900, frustrating functions dependent on timing and forcing devices into a fault mode or a graceless degradation. Still worse, systems might simply malfunction abruptly and without warning, the only perceptible signal coming when human wellbeing was compromised. And the ruinous character of such a threat was that failure would be inherently simultaneous and global, with safeguarding systems possibly beset with the same defects as the primary devices. It was easy to imagine a calamity.

You might like to assess that risk yourself (ex ante) by locating it on a Risk Assessment Matrix. It would be a brave analyst who would categorise it as “Low”, I think. Governments and corporations were impressed and embarked on a massive review of legacy software and embedded systems, estimated to have cost around $300 billion at year 2000 prices. A comprehensive upgrade programme was undertaken by nearly all substantial organisations, public and private.

Then, on 1 January 2000, there was no catastrophe. And that caused consternation. The promoters of the risk were accused of having caused massive expenditure and diversion of resources against a contingency of negligible impact. Computer professionals were accused, in terms, of self-serving scare mongering. There were a number of incidents which will not have been considered minor by the people involved. For example, in a British hospital, tests for Down’s syndrome were corrupted by the bug resulting in contra-indicated abortions and births. However, there was no global catastrophe.

This is the locus classicus of a counterfactual. Forecasters imagined a catastrophe. They persuaded others of their vision and the necessity of vast expenditure in order to avoid it. The preventive measures were implemented at great cost. The catastrophe did not occur. Ex post, the forecasters were disbelieved. The danger had never been real. Even Cassandra would have sympathised.

Critics argued that there had been a small number of relatively minor incidents that would have been addressed most economically on a “fix on failure” basis. Much of this turns out to be a debate about the much neglected column of the risk assessment headed “Detectability”. A failure that will inflict immediate pain is far more critical to manage and mitigate than one that presents an opportunity for detection and protection in advance of a broader loss. Here, forecasting Detectability was just as important as Probability and Consequences in arriving at an economic strategy for management.
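As a rough illustration of how Detectability might be folded into the assessment alongside Probability and Consequences, here is a sketch in the spirit of an FMEA-style risk priority number. The scales, example entries and scoring are invented for illustration, not taken from any particular standard.

```python
# Sketch: ranking risks by probability, consequence and detectability (in the
# spirit of an FMEA risk priority number). Scales 1-5 and entries are invented.
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    probability: int    # 1 = rare, 5 = almost certain
    consequence: int    # 1 = negligible, 5 = catastrophic
    detectability: int  # 1 = detected well before harm, 5 = no warning before harm

    @property
    def priority(self) -> int:
        # Higher product means the risk deserves earlier management attention.
        return self.probability * self.consequence * self.detectability

risks = [
    Risk("Date rollover halts billing batch", 3, 3, 2),
    Risk("Embedded controller fails silently", 2, 5, 5),
]
for r in sorted(risks, key=lambda r: r.priority, reverse=True):
    print(f"{r.name}: priority {r.priority}")
```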

It is the fundamental paradox of risk assessment that, where control measures eliminate a risk, it is not obvious whether the benign outcome was caused by the control or whether the risk assessment was just plain wrong and the risk never existed. Another counterfactual. Again, finding some alternative data with borrowing strength can help though it will ever be difficult to build a narrative appealing to a wide population. There are links to some sources of data on the Wikipedia article about the bug. I will leave it to the reader.

Imagine …

Of course it is possible to find this all too difficult and to adopt the Biblical outlook.

I returned, and saw under the sun, that the race is not to the swift, nor the battle to the strong, neither yet bread to the wise, nor yet riches to men of understanding, nor yet favour to men of skill; but time and chance happeneth to them all.

Ecclesiastes 9:11
King James Bible

That is to adopt the outlook of the lady on the level crossing. Risk professionals look for evidence that their approach works.

The other day, I was reading the annual report of the UK Health and Safety Executive (pdf). It shows a steady improvement in the safety of people at work though oddly the report is too coy to say this in terms. The improvement occurs over the period where risk assessment has become ubiquitous in industry. In an individual work activity it will always be difficult to understand whether interventions are being effective. But using the borrowing strength of the overall statistics there is potent evidence that risk assessment works.

References

  1. Kahneman, D & Tversky, A (1979) “The simulation heuristic”, reprinted in Kahneman et al. (1982) Judgment under Uncertainty: Heuristics and Biases, Cambridge, p201
  2. Byrne, R M J (2007) The Rational Imagination: How People Create Alternatives to Reality, MIT Press

Superforecasting – the thing that TalkTalk didn’t do

I have just been reading Superforecasting: The Art and Science of Prediction by Philip Tetlock and Dan Gardner. The book has attracted much attention and enthusiasm in the press. It makes a bold claim that some people, superforecasters, though inexpert in the conventional sense, are possessed of the ability to make predictions with a striking degree of accuracy, that those individuals exploit a strategy for forecasting applicable even to the least structured evidence, and that the method can be described and learned. The book summarises results of a study sponsored by US intelligence agencies as part of the Good Judgment Project but, be warned, there is no study data in the book.

I haven’t found any really good distinction between forecasting and prediction so I might swap between the two words arbitrarily here.

What was being predicted?

The forecasts/ predictions in question were all in the field of global politics and economics. For example, a question asked in January 2011 was:

Will Italy restructure or default on its debt by 31 December 2011?

This is a question that invited a yes/ no answer. However, participants were encouraged to answer with a probability, a number between 0% and 100% inclusive. If they were certain of the outcome they could answer 100%, if certain that it would not occur, 0%. The participants were allowed, I think encouraged, to update and re-update their forecasts at any time. So, as far as I can see, a forecaster who predicted 60% for Italian debt restructuring in January 2011 could revise that to 0% in December, even up to the 30th. Each update was counted as a separate forecast.

The study looked for “Goldilocks” problems, not too difficult, not too easy but just right.

Bruno de Finetti was very sniffy about using the word “prediction” in this context and preferred the word “prevision”. It didn’t catch on.

Who was studied?

The study was conducted by means of a tournament among volunteers. I gather that the participants wanted to be identified and thereby personally associated with their scores. Contestants had to be college graduates and, as a preliminary, had to complete a battery of standard cognitive and general knowledge tests designed to characterise their given capabilities. The competitors in general fell in the upper 30 percent of the general population for intelligence and knowledge. When some book reviews comment on how the superforecasters included housewives and unemployed factory workers I think they give the wrong impression. This was a smart, self-selecting, competitive group with an interest in global politics. As far as I can tell, backgrounds in mathematics, science and computing were typical. It is true that most were amateurs in global politics.

With such a sampling frame, of what population is it representative? The authors recognise that problem though don’t come up with an answer.

How measured?

Forecasters were assessed using Brier scores. I fear that Brier scores fail to be intuitive, are hard to understand and raise all sorts of conceptual difficulties. I don’t feel that they are sufficiently well explained, challenged or justified in the book. Suppose that a competitor predicts a probability p of 60% for the Italian default. Rewrite this as a probability in the range 0 to 1 for convenience, 0.6. If the competitor accepts finite additivity then the probability of “no default” is 1 − 0.6 = 0.4. Now suppose that outcomes f are coded as 1 when confirmed and 0 when disconfirmed. That means that if a default occurs then f (default) = 1 and f (no default) = 0. If there is no default then f (default) = 0 and f (no default) = 1. It’s not easy to get. We then take the difference between the ps and the fs, square the differences and sum them. This is illustrated below for “no default”, which yields a Brier score of 0.72.

Event        p     f     (p − f)²
Default      0.6   0     0.36
No default   0.4   1     0.36
Sum          1.0         0.72

Suppose we were dealing with a fair coin toss. Nobody would criticise a forecasting probability of 50% for heads and 50% for tails. The long run Brier score would be 0.5 (think about it). Brier scores were averaged for each competitor and used as the basis of ranking them. If a competitor updated a prediction then every fresh update was counted as an individual prediction and each prediction was scored. More on this later. An average of 0.5 would be similar to a chimp throwing darts at a target. That is about how well expert professional forecasters had performed in a previous study. The lower the score the better. Zero would be perfect foresight.
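For the avoidance of doubt, here is that arithmetic as a short Python sketch, reproducing the 0.72 from the table above and the long-run 0.5 for a fair coin called at 50/50.

```python
# Sketch: Brier score over the two outcomes {default, no default}.
def brier(probabilities, outcomes):
    """Sum of squared differences between forecast probabilities and 0/1 outcomes."""
    return sum((p - f) ** 2 for p, f in zip(probabilities, outcomes))

# Forecast: 60% default, 40% no default. Outcome: no default.
print(brier([0.6, 0.4], [0, 1]))  # 0.72, as in the table above

# A fair coin called at 50/50 scores 0.5 whichever way it lands.
print(brier([0.5, 0.5], [1, 0]))  # 0.5
```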

I would have liked to have seen some alternative analyses and I think that a Hosmer-Lemeshow statistic or detailed calibration study would in some ways have been more intuitive and yielded more insight.

What were the results?

The results are not given in the book, only some anecdotes. Competitor Doug Lorch, a former IBM programmer it says, answered 104 questions in the first year and achieved a Brier score of 0.22. He was fifth on the drop list. The top 58 competitors, the superforecasters, had an average Brier score of 0.25 compared with 0.37 for the balance. In the second year, Lorch joined a team of other competitors identified as superforecasters and achieved an average Brier score of 0.14. He beat a prediction market of traders dealing in futures in the outcomes, the book says by 40% though it does not explain what that means.

I don’t think that I find any of that, in itself, persuasive. However, there is a limited amount of analysis here on the (old) Good Judgment Project website. Despite the reservations I have set out above there are some impressive results, in particular this chart.

The competitors’ Brier scores were measured over the first 25 questions. The 100 with the lowest scores were identified, the blue line. The chart then shows the performance of that same group of competitors over the subsequent 175 questions. Group membership is not updated. It is the same 100 competitors as performed best at the start who are plotted across the whole 200 questions. The red line shows the performance of the worst 100 competitors from the first 25 questions, again with the same cohort plotted for all 200 questions.

Unfortunately, it is not raw Brier scores that are plotted but standardised scores. The scores have been adjusted so that the mean is zero and standard deviation one. That actually adds nothing to the chart but obscures somewhat how it is interpreted. I think that violates all Shewhart’s rules of data presentation.

That said, over the first 25 questions the blue cohort outperform the red. Then that same superiority of performance is maintained over the subsequent 175 questions. We don’t know how large the difference in performance is because of the standardisation. However, the superiority of performance is obvious. If that is a mere artefact of the data then I am unable to see how. Despite the way that data is presented and my difficulties with Brier scores, I cannot think of any interpretation other than there being a cohort of superforecasters who were, in general, better at prediction than the rest.

Conclusions

Tetlock comes up with some tentative explanations as to the consistent performance of the best. In particular he notes that the superforecasters updated their predictions more frequently than the remainder. Each of those updates was counted as a fresh prediction. I wonder how much of the variation in Brier scores is accounted for by variation in the time of making the forecast? If superforecasters are simply more active than the rest, making lots of forecasts once the outcome is obvious then the result is not very surprising.

That may well not be the case as the book contends that superforecasters predicting 300 days in the future did better than the balance of competitors predicting 100 days. However, I do feel that the variation arising from the time a prediction was made needs to be taken out of the data so that the variation in, shall we call it, foresight can be visualised. The book is short on actual analysis and I would like to have seen more. Even in a popular management book.

The data on the website on purported improvements from training is less persuasive, a bit of an #executivetimeseries.

Some of the recommendations for being a superforecaster are familiar ideas from behavioural psychology. Be a fox not a hedgehog, don’t neglect base rates, be astute to the distinction between signal and noise, read widely and richly, etc.

Takeaways

There was one unanticipated and intriguing result. The superforecasters updated their predictions not only frequently but by fine degrees, perhaps from 61% to 62%. I think that some further analysis is required to show that that is not simply an artefact of the measurement. Because Brier scores have a squared term they would be expected to punish the variation in large adjustments.

However, taking the conclusion at face value, it has some important consequences for risk assessment, which often proceeds by coarse ranking on a rating scale of 1 to 5, say. The study suggests that the best predictions will be those where careful attention is paid to fine gradations in probability.

Of course, continual updating of predictions is essential to even the most primitive risk management though honoured more often in the breach than the observance. I shall come back to the significance of this for risk management in a future post.

There is also an interesting discussion about making predictions in teams but I shall have to come back to that another time.

The amateurs out-performed the professionals on global politics. I wonder if the same result would have been encountered against experts in structural engineering.

And TalkTalk? They forgot, pace Stanley Baldwin, that the hacker will always get through.

Professor Tetlock invites you to join the programme at http://www.goodjudgment.com.

The Iron Law at Volkswagen

So Michael Horn, VW’s US CEO, has made a “sincere apology” for what went on at VW.

And like so many “sincere apologies” he blamed somebody else. “My understanding is that it was a couple of software engineers who put these in.”

As an old automotive hand I have always been very proud of the industry. I have held it up as a model of efficiency, aesthetic aspiration, ambition, enlightenment and probity. My wife will tell you how many times I have responded to tales of workplace chaos with “It couldn’t happen in a car plant”. Fortunately we don’t own a VW but I still feel betrayed by this. Here’s why.

A known risk

Everybody knew from the infancy of emissions testing, which came along at about the same time as the adoption of engine management systems, the risks of a “cheat device”. It was obvious to all that engineers might be tempted to manoeuvre a recalcitrant engine through a challenging emissions test by writing software so as to detect test conditions and thereon modify performance.

In the better sort of motor company, engineers were left in no doubt that this was forbidden and the issue was heavily policed with code reviews and process surveillance.

This was not something that nobody saw coming, not a blind spot of risk identification.

The Iron Law

I wrote before about the Iron Law of Oligarchy. Decision taking executives in an organisation try not to pass information upwards. That will only result in interference and enquiry. Supervisory boards are well aware of this phenomenon because, during their own rise to the board, they themselves were the senior managers who constituted the oligarchy and who kept all the information to themselves. As I guessed last time I wrote, decisions like this don’t get taken at board level. They are taken out of the line of sight of the board.

Governance

So here we have a known risk. A threat that would likely not be detected in the usual run of line management. And it was of such a magnitude as would inflict hideous ruin on Volkswagen’s value, accrued over decades of hard built customer reputation. Volkswagen, an eminent manufacturer with huge resources, material, human and intellectual. What was the governance function to do?

Borrowing strength again

It would have been simple, actually simple, to secret shop the occasional vehicle and run it through an on-road emissions test. Any surprising discrepancy between the results and the regulatory tests would then have been a signal that the company was at risk and triggered further investigation. An important check on any data integrity is to compare it with cognate data collected by an independent route, data that shares borrowing strength.
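A minimal sketch of the sort of check I have in mind, with wholly invented numbers: compare secret-shopped on-road results against the certified figure and treat a discrepancy well beyond the ordinary spread of the road tests themselves as a signal for investigation.

```python
# Sketch: compare independently collected on-road emissions with the certified
# figure. Numbers and the three-sigma style threshold are invented for illustration.
import statistics

certified_nox_g_per_km = 0.18
on_road_nox_g_per_km = [0.21, 0.95, 1.10, 0.88, 1.02]  # hypothetical secret-shop results

mean_on_road = statistics.mean(on_road_nox_g_per_km)
sd_on_road = statistics.stdev(on_road_nox_g_per_km)

# Flag if the on-road mean sits far above the certified value relative to
# the spread of the on-road measurements themselves.
if mean_on_road - certified_nox_g_per_km > 3 * sd_on_road / len(on_road_nox_g_per_km) ** 0.5:
    print(f"Signal: on-road mean {mean_on_road:.2f} g/km vs certified {certified_nox_g_per_km:.2f} g/km")
```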

Volkswagen’s governance function simply didn’t do the simple thing. Never have so many ISO 31000 manuals been printed in vain. Theirs were the pot odds of a jaywalker.

Knowledge

In the English breach of trust case of Baden, Delvaux and Lecuit v Société Générale [1983] BCLC 325, Mr Justice Peter Gibson identified five levels of knowledge that might implicate somebody in wrongdoing.

  • Actual knowledge.
  • Wilfully shutting one’s eyes to the obvious (Nelsonian knowledge).
  • Wilfully and recklessly failing to make such enquiries as an honest and reasonable man would make.
  • Knowledge of circumstances that would indicate the facts to an honest and reasonable man.
  • Knowledge of circumstances that would put an honest and reasonable man on enquiry.

I wonder where VW would place themselves in that.

How do you sound when you feel sorry?

… is the somewhat barbed rejoinder to an ungracious apology. Let me explain how to be sorry. There are three “R”s.

  • Remorse: Different from the “regret” that you got caught. A genuine internal emotional reaction. The public are good at spotting when emotions are genuine but it is best evidenced by the following two “R”s.
  • Reparation: Trying to undo the damage. VW will not have much choice about this as far as the motorists are concerned but the shareholders may be a different matter. I don’t think Horn’s director’s insurance will go very far.
  • Reform: This is the barycentre of repentance. Can VW now change the way it operates to adopt genuine governance and systematic risk management?

Mr Horn tells us that he has little control over what happens in his company. That is probably true. I trust that he will remember that at his next remuneration review. If there is one.

When they said, “Repent!”, I wonder what they meant.

Leonard Cohen
The Future

First thoughts on VW’s emissions debacle

It is far too soon to tell exactly what went on at VW, in the wider motor industry, within the respective regulators and within governments. However, the way that the news has come out, and the financial and operational impact that it is likely to have, are enough to encourage all enterprises to revisit their risk management, governance and customer reputation management policies. Corporate scandals are not a new phenomenon, from the collapse of the Medici Bank in 1494, Warren Hastings’ alleged despotism in the British East India Company, down to the FIFA corruption allegations that broke earlier this year. Organisational scandals are as old as organisations. The bigger the organisations get, the bigger the scandals are going to be.

Normal Scandals

In 1984, Charles Perrow published his pessimistic analysis of what he saw as the inevitability of Normal Accidents in complex technologies. I am sure that there is a market for a book entitled Normal Scandals: Living with High-Risk Organisational Structures. But I don’t share Perrow’s pessimism. Life is getting safer. Let’s adopt the spirit of continual improvement to make investment safer too. That’s investment for those of us trying to accumulate a modest portfolio for retirement. Those who aspire to join the super rich will still have to take their chances.

I fully understand that organisations sometimes have to take existential risks to stay in business. The development of Rolls-Royce’s RB211 aero-engine well illustrates what happens when a manufacturer finds itself with proven technologies that are inadequately aligned with the Voice of the Customer. The market will not wait while the business catches up. There is time to develop a response but only if that solution works first time. In the case of Rolls-Royce it didn’t and insolvency followed. However, there was no alternative but to try.

What happened at VW? I just wonder whether the Iron Law of Oligarchy was at work. To imagine that a supervisory board sits around discussing the details of engine management software is naïve. In fact it was the RB211 crisis that condemned such signal failures of management to delegate. Do VW’s woes flow from a decision taken by a middle manager, or a blind eye turned, that escaped an inadequate system of governance? Perhaps a short term patch in anticipation of an ultimate solution?

Cardinal Newman’s contribution to governance theory

John Henry Newman learned about risk management the hard way. Newman was an English Anglican divine who converted to the Catholic Church in 1845. In 1850 Newman became involved in the controversy surrounding Giacinto Achilli, a priest expelled from the Catholic Church for rape and sexual assault but who was making a name for himself in England as a champion of the protestant evangelical cause. Conflict between Catholic and protestant was a significant feature of the nineteenth century English political landscape. Newman was minded to ensure that Achilli’s background was widely known. He took legal advice from counsel James Hope-Scott about the risks of a libel action from Achilli. Hope-Scott was reassuring and Newman published. The publication resulted in Newman’s prosecution and conviction for criminal libel.

Speculation about what legal advice VW have received as to their emissions strategy would be inappropriate. However, I trust that, if they imagined they were externalising any risk thereby, they checked the value of their legal advisors’ professional indemnity insurance.

Newman certainly seems to have learned his lesson and subsequently had much to teach the modern world about risk management and governance. After the Achilli trial Newman started work on his philosophical apologia, The Grammar of Assent. One argument in that book has had such an impact on modern thinking about evidence and probability that it was quoted in full by Bruno de Finetti in Volume 1 of his 1974 Theory of Probability.

Suppose a thesis (e.g. the guilt of an accused man) is supported by a great deal of circumstantial evidence of different forms, but in agreement with each other; then even if each piece of evidence is in itself insufficient to produce any strong belief, the thesis is decisively strengthened by their joint effect.

De Finetti set out the detailed mathematics and called this the Cardinal Newman principle. It is fundamental to the modern concept of borrowing strength.
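One way to see the arithmetic, on a simple Bayesian odds formulation of my own rather than de Finetti’s detailed treatment, is that independent pieces of evidence multiply on the odds scale: several individually weak likelihood ratios can move a modest prior a long way. The numbers below are invented for illustration.

```python
# Sketch: several individually weak, independent pieces of evidence combine
# multiplicatively on the odds scale. Prior odds and likelihood ratios are invented.
prior_odds = 1 / 9          # prior probability of the thesis: 10%
likelihood_ratios = [2.0, 1.8, 2.5, 1.5, 2.2]  # each piece only mildly favours the thesis

posterior_odds = prior_odds
for lr in likelihood_ratios:
    posterior_odds *= lr

posterior_probability = posterior_odds / (1 + posterior_odds)
print(f"Posterior probability: {posterior_probability:.2f}")  # about 0.77 from a 0.10 prior
```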

The standard means of defeating governance are all well known to oligarchs: regulator capture; “stake-driving” – taking actions outside the oversight of governance that will not be undone without engaging the regulator in controversy; “whipsawing” – promising A that approval will be forthcoming from B while telling B that A has relied upon her anticipated, and surely “uncontroversial”, approval. There are plenty of others. Robert Caro’s biography The Power Broker: Robert Moses and the Fall of New York sets out the locus classicus.

Governance functions need to exploit the borrowing strength of diverse data sources to identify misreporting and misconduct. And continually improve how they do that. The answer is trenchant and candid criticism of historical data. That’s the only data you have. A rigorous system of goal deployment and mature use of process behaviour charts delivers a potent stimulus to reluctant data sharers.

Things and actions are what they are and the consequences of them will be what they will be: why then should we desire to be deceived?

Bishop Joseph Butler


FIFA and the Iron Law of Oligarchy

In 1911, Robert Michels embarked on one of the earliest investigations into organisational culture. Michels was a pioneering sociologist, a student of Max Weber. In his book Political Parties he aggregated evidence about a range of trade unions and political groups, in particular the German Social Democratic Party.

He concluded that, as organisations become larger and more complex, a bureaucracy inevitably forms to take, co-ordinate and optimise decisions. It is the most straightforward way of creating alignment in decision making and unified direction of purpose and policy. Decision taking power ends up in the hands of a few bureaucrats and they increasingly use such power to further their own interests, isolating themselves from the rest of the organisation to protect their privilege. Michels called this the Iron Law of Oligarchy.

These are very difficult matters to capture quantitatively and Michels’ limited evidential sampling frame has more of the feel of anecdote than data. “Iron Law” surely takes the matter too far. However, when we look at the allegations concerning misconduct within FIFA it is tempting to feel that Michels’ theory is validated, or at least has gathered another anecdote to take the evidence base closer to data.

But beyond that, what Michels surely identifies is a danger that a bureaucracy, a management cadre, can successfully isolate itself from superior and inferior strata in an organisation, limiting the mobility of business data and fostering their own ease. The legitimate objectives of the organisation suffer.

Michels failed to identify a realistic solution, being seduced by the easy, but misguided, certainties of fascism. However, I think that a rigorous approach to the use of data can guard against some abuses without compromising human rights.

Oligarchs love traffic lights

I remember hearing the story of a CEO newly installed in a mature organisation. His direct reports had instituted a “traffic light” system to report status to the weekly management meeting. A green light meant all was well. An amber light meant that some intervention was needed. A red light signalled that threats to the company’s goals had emerged. At his first meeting, the CEO found that nearly all “lights” were green, with a few amber. The new CEO perceived an opportunity to assert his authority and show his analytical skills. He insisted that could not be so. There must be more problems and he demanded that the next meeting be an opportunity for honesty and confronting reality.

At the next meeting there was a kaleidoscope of red, amber and green “lights”. Of course, it turned out that the managers had flagged as red the things that were either actually fine or could be remedied quickly. They could then report green at the following meeting. Real career limiting problems were hidden behind green lights. The direct reports certainly didn’t want those exposed.

Openness and accountability

I’ve quoted Nobel laureate economist Kenneth Arrow before.

… a manager is an information channel of decidedly limited capacity.

Essays in the Theory of Risk-Bearing

Perhaps the fundamental problem of organisational design is how to enable communication of information so that:

  • Individual managers are not overloaded.
  • Confidence in the reliable satisfaction of process and organisational goals is shared.
  • Systemic shortfalls in process capability are transparent to the managers responsible, and their managers.
  • Leading indicators yield early warnings of threats to the system.
  • Agile responses to market opportunities are catalysed.
  • Governance functions can exploit the borrowing strength of diverse data sources to identify misreporting and misconduct.

All that requires using analytics to distinguish between signal and noise. Traffic lights offer a lousy system of intra-organisational analytics. Traffic light systems leave it up to the individual manager to decide what is “signal” and what “noise”. Nobel laureate psychologist Daniel Kahneman has studied how easily managers are confused and misled in subjective attempts to separate signal and noise. It is dangerous to think that What you see is all there is. Traffic lights offer a motley cloak to an oligarch wishing to shield his sphere of responsibility from scrutiny.

The answer is trenchant and candid criticism of historical data. That’s the only data you have. A rigorous system of goal deployment and mature use of process behaviour charts delivers a potent stimulus to reluctant data sharers. Process behaviour charts capture the development of process performance over time, for better or for worse. They challenge the current reality of performance through the Voice of the Customer. They capture a shared heuristic for characterising variation as signal or noise.

Individual managers may well prefer to interpret the chart with various competing narratives. The message of the data, the Voice of the Process, will not always be unambiguous. But collaborative sharing of data compels an organisation to address its structural and people issues. Shared data generation and investigation encourage an organisation to find practical ways of fostering team work, enabling problem solving and motivating participation. It is the data that can support the organic emergence of a shared organisational narrative that adds further value to the data and how it is used and developed. None of these organisational and people matters have generalised solutions but a proper focus on data drives an organisation to find practical strategies that work within their own context. And to test the effectiveness of those strategies.

Every week the press discloses allegations of hidden or fabricated assets, repudiated valuations, fraud, misfeasance, regulators blindsided, creative reporting, anti-competitive behaviour, abused human rights and freedoms.

Where a proper system of intra-organisational analytics is absent, you constantly have to ask yourself whether you have another FIFA on your hands. The FIFA allegations may be true or false but that they can be made surely betrays an absence of effective governance.

#oligarchslovetrafficlights

Does noise make you fat?

“A new study has unearthed some eye-opening facts about the effects of noise pollution on obesity,” proclaimed The Huffington Post recently in another piece of poor, uncritical data journalism.

Journalistic standards notwithstanding, in Exposure to traffic noise and markers of obesity (BMJ Occupational and environmental medicine, May 2015) Andrei Pyko and eight (sic) collaborators found “evidence of a link between traffic noise and metabolic outcomes, especially central obesity.” The particular conclusion picked up by the press was that each 5 dB increase in traffic noise could add 2 mm to the waistline.

Not trusting the press I decided I wanted to have a look at this research myself. I was fortunate that the paper was available for free download for a brief period after the press release. It took some finding though. The BMJ insists that you will now have to pay. I do find that objectionable as I see that the research was funded in part by the European Union. Us European citizens have all paid once. Why should we have to pay again?

On reading …

I was shocked, though, on reading Pyko’s paper, as the Huffington Post journalists obviously hadn’t. They state “Lack of sleep causes reduced energy levels, which can then lead to a more sedentary lifestyle and make residents less willing to exercise.” Pyko’s paper says no such thing. The researchers had, in particular, conditioned on level of exercise so that effect had been taken out. It cannot stand as an explanation of the results. Pyko’s narrative concerned noise-induced stress and cortisol production, not lack of exercise.

In any event, the paper is densely written and not at all easy to analyse and understand. I have tried to pick out the points that I found most bothering but first a statistics lesson.

Prediction 101

(Almost) the first thing to learn in statistics is the relationship between population, frame and sample. We are concerned about the population. The frame is the enumerable and accessible set of things that approximate the population. The sample is a subset of the frame, selected in an economic, systematic and well characterised manner.

In Some Theory of Sampling (1950), W Edwards Deming drew a distinction between two broad types of statistical studies, enumerative and analytic.

  • Enumerative: Action will be taken on the frame.
  • Analytic: Action will be on the cause-system that produced the frame.

It is explicit in Pyko’s work that the sampling frame was metropolitan Stockholm, Sweden between the years 2002 and 2006. It was a cross-sectional study. I take it from the institutional funding that the study intended to advise policy makers as to future health interventions. Concern was beyond the population of Stockholm, or even Sweden. This was an analytic study. It aspired to draw generalised lessons about the causal mechanisms whereby traffic noise aggravated obesity so as to support future society-wide health improvement.

How representative was the frame of global urban areas stretching over future decades? I have not the knowledge to make a judgment. The issue is mentioned in the paper but, I think, with insufficient weight.

There are further issues as to the sampling from the frame. Data was taken from participants in a pre-existing study into diabetes that had itself specific criteria for recruitment. These are set out in the paper but intensify the questions of whether the sample is representative of the population of interest.

The study

The researchers chose three measures of obesity: waist circumference, waist-hip ratio and BMI. Each has been put forward, from time to time, as a measure of health risk.

There were 5,075 individual participants in the study, a sample of 5,075 observations. The researchers performed both a linear regression simpliciter and a logistic regression. For want of time and space I am only going to comment on the former. It is the origin of the headline 2 mm per 5 dB claim.

The researchers have quoted p-values but they haven’t committed the worst of sins as they have shown the size of the effects with confidence intervals. It’s not surprising that they found so many soi-disant significant effects given the sample size.

However, there was little assistance in judging how much of the observed variation in obesity was down to traffic noise. I would have liked to see a good old fashioned analysis of variance table. I could then at least have had a go at comparing variation from the measurement process, traffic noise and other effects. I could also have calculated myself an adjusted R².
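For what it is worth, that kind of summary is easy to produce. Here is a sketch on wholly simulated data, standing in for the analysis of variance table and adjusted R² I would have liked to see; none of the coefficients or variable names come from Pyko’s paper.

```python
# Sketch: ANOVA table and adjusted R-squared for a regression of waist
# circumference on noise and covariates. Data are simulated purely to show
# the shape of the analysis; nothing is taken from the published study.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5075
df = pd.DataFrame({
    "noise_db": rng.normal(55, 5, n),
    "age": rng.normal(50, 10, n),
    "exercise": rng.integers(0, 4, n),
})
df["waist_cm"] = 85 + 0.04 * df["noise_db"] + 0.3 * df["age"] - 1.0 * df["exercise"] + rng.normal(0, 10, n)

model = smf.ols("waist_cm ~ noise_db + age + exercise", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))           # variation attributable to each term
print(f"Adjusted R-squared: {model.rsquared_adj:.3f}")
```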

Measurement Systems Analysis

Understanding variation from the measurement process is critical to any analysis. I have looked at the World Health Organisation’s definitive 2011 report on the effects of waist circumference on health. Such Measurement Systems Analysis as there is occurs at p7. They report a “technical error” (me neither) of 1.31 cm from intrameasurer error (I’m guessing repeatability) and 1.56 cm from intermeasurer error (I’m guessing reproducibility). They remark that “Even when the same protocol is used, there may be variability within and between measurers when more than one measurement is made.” They recommend further research but I have found none. There is no way of knowing from what is published by Pyko whether the reported effects are real or flow from confounding between traffic noise and intermeasurer variation.

When it comes to waist-hip ratio I presume that there are similar issues in measuring hip circumference. When the two dimensions are divided then the individual measurement uncertainties aggregate. More problems, not addressed.
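As a rough sketch of how those uncertainties aggregate in a quotient, here are the standard error-propagation sums, using the WHO intermeasurer figure quoted above and assuming (my assumption, for illustration only) a comparable error on the hip measurement; relative errors of a ratio add in quadrature.

```python
# Sketch: propagate measurement uncertainty into a waist-hip ratio.
# The waist and hip values, and applying the WHO waist figure to the hip,
# are assumptions for illustration.
waist_cm, hip_cm = 85.0, 100.0
sigma_waist_cm = 1.56     # WHO intermeasurer error for waist, quoted above
sigma_hip_cm = 1.56       # assumed comparable for hip

ratio = waist_cm / hip_cm
relative_sigma = ((sigma_waist_cm / waist_cm) ** 2 + (sigma_hip_cm / hip_cm) ** 2) ** 0.5
sigma_ratio = ratio * relative_sigma
print(f"Waist-hip ratio {ratio:.2f} with measurement uncertainty about ±{sigma_ratio:.3f}")
```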

Noise data

The key predictor of obesity was supposed to be noise. The noise data used were not in situ measurements in the participants’ respective homes. The road traffic noise data were themselves predicted from a mathematical model using “terrain data, ground surface, building height, traffic data, including 24 h yearly average traffic flow, diurnal distribution and speed limits, as well as information on noise barriers”. The model output provided 5 dB contours. The authors then applied some further ad hoc treatments to the data.

The authors recognise that there is likely to be some error in the actual noise levels, not least from the granularity. However, they then seem to assume that this is simply an errors in variables situation. That would do no more than (conservatively) bias any observed effect towards zero. However, it does seem to me that there is potential for much more structured systematic effects to be introduced here and I think this should have been explored further.
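A quick simulation, with invented numbers, illustrates the classical errors-in-variables point: random error in the predictor attenuates the estimated slope towards zero rather than inflating it.

```python
# Sketch: classical errors-in-variables attenuates a regression slope towards zero.
# All numbers are invented; only the direction of the bias matters here.
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
true_noise_db = rng.normal(55, 5, n)
waist_cm = 85 + 0.4 * true_noise_db + rng.normal(0, 10, n)   # true slope 0.4

measured_noise_db = true_noise_db + rng.normal(0, 5, n)      # noisy but unbiased predictor

slope_true = np.polyfit(true_noise_db, waist_cm, 1)[0]
slope_measured = np.polyfit(measured_noise_db, waist_cm, 1)[0]
print(f"Slope with true predictor: {slope_true:.2f}")               # close to 0.4
print(f"Slope with mis-measured predictor: {slope_measured:.2f}")   # attenuated, about 0.2
```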

Model criticism

The authors state that they carried out a residuals analysis but they give no details and there are no charts, even in the supplementary material. I would like to have had a look myself as the residuals are actually the interesting bit. Residuals analysis is essential in establishing stability.

In fact, in the current study there is so much data that I would have expected the authors to have saved some of the data for cross-validation. That would have provided some powerful material for model criticism and validation.

Given that this is an analytic study these are all very serious failings. With nine researchers on the job I would have expected some effort on these matters and some attention from whoever was the statistical referee.

Results

Separate results are presented for road, rail and air traffic noise. Again, for brevity I am looking at the headline 2 mm / 5 dB quoted for road traffic noise. Now, waist circumference is dependent on gross body size. Men are bigger than women and have larger waists. Similarly, the tall are larger-waisted than the short. Pyko’s regression does not condition on height (as a gross characterisation of body size).

BMI is a factor that attempts to allow for body size. Pyko found no significant influence on BMI from road traffic noise.

Waist-hip ratio is another parameter that attempts to allow for body size. It is often now cited as a better predictor of morbidity than BMI. That of course is irrelevant to the question of whether noise makes you fat. As far as I can tell from Pyko’s published results, a 5 dB increase in road traffic noise accounted for a 0.16 increase in waist-hip ratio. Now, let us look at this broadly. Consider a woman with waist circumference 85 cm and hip 100 cm, hence a waist-hip ratio of 0.85. All pretty typical for the study. Predictively, the study is suggesting that a 5 dB increase in road traffic noise might unremarkably take her waist-hip ratio up over 1.0. That seems barely consistent with the results from waist circumference alone, where there would be only millimetres of growth. It is physically incredible.

I must certainly have misunderstood what the waist-hip result means but I could find no elucidation in Pyko’s paper.

Policy

Research such as this has to be aimed at advising future interventions to control traffic noise in urban environments. Broadly speaking, 5 dB is a level of noise change that is noticeable to human hearing but no more. All the same, achieving such a reduction in an urban environment is something that requires considerable economic resources. Yet, taking the research at its highest, it only delivers 2 mm on the waistline.

I had many criticisms other than those above and I do not, in any event, consider this study adequate for making any prediction about a future intervention. Nothing in it makes me feel the subject deserves further study. Or that I need to avoid noise to stay slim.