Get rich predicting the next recession – just watch the fertility statistics

… we are told. Or perhaps not. This was the research reported last week, with varying degrees of credulity, by the BBC here and The (London) Times here (£paywall). This turned out to be a press release about some academic research by Kasey Buckles of the University of Notre Dame and others. You have to pay USD 5 to get the academic paper. I shall come back to that.

The paper’s abstract claims as follows.

Many papers show that aggregate fertility is pro-cyclical over the business cycle. In this paper we do something else: using data on more than 100 million births and focusing on within-year changes in fertility, we show that for recent recessions in the United States, the growth rate for conceptions begins to fall several quarters prior to economic decline. Our findings suggest that fertility behavior is more forward-looking and sensitive to changes in short-run expectations about the economy than previously thought.

Now, here is a chart shared by the BBC.

[Chart: Pregnancy and recession – BBC]

The first thing to notice here is that we have exactly three observations. Three recession events with which to learn about any relationship between human sexual activity and macroeconomics. If you are the sort of person obsessed with “sample size”, and I know some of you are, ignore the misleading “100 million births” headline. Focus on the fact that n=3.

We are looking for a leading indicator, something capable of predicting a future event or outcome that we are bothered about. We need it to go up or down before the up or down event that we anticipate or fear. Further, it needs to move consistently in the right direction, by the right amount and in sufficient time for us to take action to correct, mitigate or exploit.

There is a similarity here to the hard and sustained thinking we have to do when we are looking for a causal relationship, though there is no claim to cause and effect here (cf. the Bradford Hill guidelines). One of the most important factors in both is temporality. A leading indicator really needs to lead, and to lead in a regular way. Making predictions like, “There will be a recession some time in the next five years,” would be a shameless attempt to re-imagine the unsurprising as a signal novelty.

Having recognised the paucity of the data and the subtlety of identifying a usefully predictive effect, we move on to the chart. The chart above is pretty useless for the job at hand. Run charts with multiple variables are very weak tools for assessing association between factors, except in the most unambiguous cases. The chart broadly suggests some “association” between fertility and economic growth. It is possible to identify “big falls” both in fertility and growth and to persuade ourselves that the collapses in pregnancy statistics prefigure financial contraction. But the chart is not compelling evidence that one variable tracks the other reliably, even with a time lag. There is no evident global relationship between the variation in the two factors. There are big swings in each to which no corresponding event stands out in the other variable.

We have to go back and learn the elementary but universal lessons of simple linear regression. Remember that I told you that simple linear regression is the prototype of all successful statistical modelling and prediction work. We have to know whether we have a system that is sufficiently stable to be predictable. We have to know whether it is worth the effort. We have to understand the uncertainties in any prediction we make.
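To make those three questions concrete, here is a minimal sketch, on entirely invented data, of what they look like for simple linear regression: fit the line, check whether the slope is worth anything, and put a prediction interval around any forecast. The figures and variable names are mine, for illustration only.

    # A minimal sketch, on made-up data, of the three regression questions above:
    # is the relationship stable, is it worth anything, and how uncertain is a prediction?
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    x = np.arange(20, dtype=float)                 # e.g. quarters
    y = 0.5 * x + rng.normal(0, 2.0, size=x.size)  # invented "response" with noise

    fit = stats.linregress(x, y)
    n = x.size
    resid = y - (fit.intercept + fit.slope * x)
    s = np.sqrt(np.sum(resid**2) / (n - 2))        # residual standard error

    # 95% prediction interval for a new observation at x_new
    x_new = 22.0
    y_hat = fit.intercept + fit.slope * x_new
    se_pred = s * np.sqrt(1 + 1/n + (x_new - x.mean())**2 / np.sum((x - x.mean())**2))
    t = stats.t.ppf(0.975, df=n - 2)
    print(f"slope={fit.slope:.2f} (p={fit.pvalue:.3f}), R^2={fit.rvalue**2:.2f}")
    print(f"prediction at x={x_new}: {y_hat:.1f} +/- {t * se_pred:.1f}")

The point of the exercise is the last line: a prediction is worth little without an honest statement of its uncertainty, and with three observations that uncertainty is enormous.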

We do not have to go far to realise that the chart above cannot give a cogent answer to any of those. The exercise would, in any event, be a challenge with three observations. I am slightly resistant to spending GBP 3.63 to see the authors’ analysis. So I will reserve my judgment as to what the authors have actually done. I will stick to commenting on data journalism standards. However, I sense that the authors don’t claim to be able to predict economic growth simpliciter, just some discrete events. Certainly looking at the chart, it is not clear which of the many falls in fertility foreshadow financial and political crisis. With the myriad of factors available to define an “event”, it should not be too difficult, retrospectively, to define some fertility “signal” in the near term of the bull market and fit it astutely to the three data points.

As The Times, but not the BBC, reported:

However … the correlation between conception and recession is far from perfect. The study identified several periods when conceptions fell but the economy did not.

“It might be difficult in practice to determine whether a one-quarter drop in conceptions is really signalling a future downturn. However, this is also an issue with many commonly used economic indicators,” Professor Buckles told the Financial Times.

Think of it this way. There are, at most, three independent data points on your scatter plot. Really. And even then the “correlation … is far from perfect”.

And you have had the opportunity to optimise the time lag to maximise the “correlation”.
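To see why that freedom matters, here is a toy simulation of my own, nothing to do with the authors’ data: two series of pure noise, with the analyst free to pick whichever lag gives the best “correlation”. Even when nothing is related to anything, searching over lags manufactures an apparent relationship.

    # A toy illustration (entirely simulated, not the paper's data) of how the freedom
    # to choose a lag inflates an apparent correlation between two unrelated series.
    import numpy as np

    rng = np.random.default_rng(0)
    fertility = rng.normal(size=40)   # pure noise standing in for a fertility series
    growth = rng.normal(size=40)      # pure noise standing in for GDP growth

    best_lag, best_r = 0, 0.0
    for lag in range(1, 9):           # try "leads" of 1 to 8 quarters
        r = np.corrcoef(fertility[:-lag], growth[lag:])[0, 1]
        if abs(r) > abs(best_r):
            best_lag, best_r = lag, r

    print(f"best lag = {best_lag} quarters, correlation = {best_r:.2f}")
    # Even with no relationship at all, the best of eight candidate lags
    # will often show an apparent correlation of 0.2 or more on series this short.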

This is all probably what we suspected. What we really want is to see the authors put their money where their mouth is by wagering on the next recession, a point well made in Nassim Taleb’s new book Skin in the Game. What distinguishes a useful prediction is that the holder can use it to get the better of the crowd. And thinks the risks worth it.

As for the criticisms of economic forecasting generally, we get it. I would have thought though that the objective was to improve forecasting, not to satirise it.


Adoption statistics for England – signals of improvement?

I am adopted so I follow the politics of adoption fairly carefully. I was therefore interested to see this report on the BBC, claiming a “record” increase in adoptions. The quotation marks are the BBC’s. The usual meaning of such quotes is that the word “record” is not being used with its usual meaning. I note that the story was repeated in several newspapers this morning.

The UK government were claiming a 15% increase in children adopted from local authority care over the last year and the highest total since data was first collected on this basis in 1992.

Most people will, I think, recognise what Don Wheeler calls an executive time series: a comparison of two numbers that ignores any broader historical trend or context. Of course, any two consecutive numbers will be different. One will be greater than the other. Without the context that gives rise to the data, a comparison of two numbers is uninformative.

I decided to look at the data myself by following the BBC link to the GOV.UK website. I found a spreadsheet there but only with data from 2009 to 2013. I dug around a little more and managed to find 2006 to 2008. However, the website told me that to find any earlier data I would have to consult the National Archives. At the same time it told me that the search function at the National Archives did not work. I ended up browsing 30 web pages of Department for Education documents and managed to get figures back to 2004. However, when I tried to browse back beyond documents dated January 2008, I got “Sorry, the page you were looking for can’t be found” and an invitation to use the search facility. Needless to say, I failed to find the missing data back to 1992, there or on the Office for National Statistics website. It could just be my internet search skills that are wanting, but I spent an hour or so on this.

Happily, Justin Ushie and Julie Glenndenning from the Department for Education were able to help me and provided much of the missing data. Many thanks to them both. Unfortunately, even they could not find the data for 1992 and 1993.

Here is the run chart.

[Chart: Adoption1 – run chart of adoptions from local authority care in England, by year]

Some caution is needed in interpreting this chart because there is clearly some substantial serial correlation in the annual data. That said, I am not quite able to persuade myself that the 2013 figure represents a signal. Things look much better than the mid-1990s but 2013 still looks consistent with a system that has been stable since the early years of the century.
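For anyone who wants to check my judgment, this is the sort of calculation I have in mind: an individuals (XmR) chart, with natural process limits computed from the moving ranges of the historical experience base. The figures below are placeholders only, not the real adoption counts, which are in the spreadsheets referred to above.

    # A sketch of how to test for a signal: natural process limits from an
    # individuals (XmR) chart. The numbers here are illustrative placeholders.
    import numpy as np

    adoptions = np.array([2700, 3100, 3400, 3500, 3700, 3800, 3700, 3300,
                          3200, 3300, 3200, 3100, 3450, 3980], dtype=float)

    baseline = adoptions[:-1]                  # experience base before the latest year
    moving_range = np.abs(np.diff(baseline))
    centre = baseline.mean()
    sigma_hat = moving_range.mean() / 1.128    # d2 constant for moving ranges of 2
    upper, lower = centre + 3 * sigma_hat, centre - 3 * sigma_hat

    latest = adoptions[-1]
    print(f"natural process limits: {lower:.0f} to {upper:.0f}; latest value: {latest:.0f}")
    print("signal" if not (lower <= latest <= upper)
          else "no signal - consistent with the stable system")

A point outside the natural process limits, or a sustained run on one side of the centre line, would be the kind of evidence that could persuade me the latest figure is more than routine variation.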

The mid-1990s is a long time ago, so I also wanted to look at adoptions as a percentage of children in care. I don’t think that that is automatically a better measure but I wanted to check that it didn’t yield a different picture.

[Chart: Adoption2 – adoptions as a percentage of children in care in England, by year]

That confirms the improvement since the mid-1990s but the 2013 figures now look even less remarkable against the experience base of the rest of the 21st century.

I would like to see these charts with all the interventions and policy changes of respective governments marked. That would then properly set the data in context and assist interpretation. There would be an opportunity to build a narrative, add natural process limits and come to a firmer view about whether there was a signal. Sadly, I have not found an easy way of building a chronology of intervention from government publications.

Anyone holding themselves out as having made an improvement must bring forward the whole of the relevant context for the data. That means plotting data over time and flagging background events. It is only then that the decision maker, or citizen, can make a proper assessment of whether there has been an improvement. The simple chart of data against time, even without natural process limits, is immensely richer than a comparison of two selected numbers.

Properly capturing context is the essence of data visualization and the beginnings of graphical excellence.

One of my favourite slogans:

In God we trust. All others bring data.

W Edwards Deming

I plan to come back to this data in 2014.

The graph of doom – one year on

I recently came across the chart (sic) below on this website.

[Chart: GraphofDoom – the “graph of doom”, London Borough of Barnet budget projection]

It’s apparently called the “graph of doom”. It first came to public attention in May 2012 in the UK newspaper The Guardian. It purports to show how the London Borough of Barnet’s spending on social services will overtake the Borough’s total budget some time around 2022.

At first sight the chart doesn’t offend too much against the principles of graphical excellence set down by Edward Tufte in his book The Visual Display of Quantitative Information. The bars would probably have been better replaced by lines, which would have saved some expensive, coloured non-data ink. That is a small quibble.

The most puzzling thing about the chart is that it shows very little data. I presume that the figures for 2010/11 are actuals. The 2011/12 figures may be provisional. But the rest of the area of the chart shows predictions. There is a lot of ink on this chart showing predictions and very little showing actual data. Further, the chart does not distinguish, graphically, between actual data and predictions. I worry that that might lend the dramatic picture more authority than it is really entitled to. The visible trend lies wholly in the predictions.

Some past history would have exposed variation in both funding and spending and enabled the viewer to set the predictions in that historical context. A chart showing a converging trend of historical data projected into the future is more impressive than a chart showing historical stability with all the convergence found in the future prediction. This chart does not tell us which is the actual picture.

Further, I suspect that this is not the first time the author has made a prediction of future funds or demand. What would interest me, were I in the position of decision maker, is some history of how those predictions have performed in the past.

We are now more than one year on from the original chart and I trust that the 2012/13 data is now available. Perhaps the authors have produced an updated chart but it has not made its way onto the internet.

The chart shows hardly any historical data. Such data would have been useful to a decision maker. The ink devoted to predictions could have been saved. All that was really needed was to say that spending was projected to exceed total income around 2022. Some attempt at quantifying the uncertainty in that prediction would also have been useful.

Graphical representations of data carry a potent authority. Unfortunately, when on the receiving end of most PowerPoint presentations, we don’t have long to deconstruct them. We invest a lot of trust in the author of a chart, that it can be taken at face value. That ought to be the chart’s function: to communicate the information in the data efficiently and as dramatically as the data and its context justify.

I think that the following principles can usefully apply to the charting of predictions and forecasts.

  • Use ink on data rather than speculation.
  • Ditto for chart space.
  • Chart predictions using a distinctive colour or symbol so as to be less prominent than measured data (see the sketch below).
  • Use historical data to set predictions in context.
  • Update chart as soon as predictions become data.
  • Ensure everybody who got the original chart gets the updated chart.
  • Leave the prediction on the updated chart.

The last point is what really sets predictions in context.
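As a concrete illustration of those principles, something like the following keeps measured data visually dominant and the projection clearly subordinate. The figures are invented, not Barnet’s, and the styling choices are only one way of doing it.

    # A minimal matplotlib sketch of the principles above: actual data plotted
    # solidly, predictions in a fainter, distinct style, and the old prediction
    # retained on the chart as actuals arrive. All figures invented.
    import matplotlib.pyplot as plt

    years_actual = [2009, 2010, 2011, 2012]
    budget_actual = [290, 280, 270, 262]          # invented totals, GBP millions
    years_pred = [2012, 2014, 2016, 2018, 2020, 2022]
    budget_pred = [262, 250, 240, 232, 226, 222]  # invented projection

    fig, ax = plt.subplots()
    ax.plot(years_actual, budget_actual, "o-", color="black", label="Actual budget")
    ax.plot(years_pred, budget_pred, "o--", color="grey", alpha=0.6, label="Projected budget")
    ax.set_xlabel("Year")
    ax.set_ylabel("GBP millions")
    ax.legend()
    plt.show()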

Note: I have tagged this post “Data visualization”, adopting the US spelling which I feel has become standard English.