Managing a railway on historical data is like …

I was recently looking on the web for any news on the Galicia rail crash. I didn’t find anything current, but I came across this old item from The Guardian (London). It mentioned in passing that consortia tendering for a new high-speed railway in Brazil were excluded if they had been involved in operating a high-speed line that had had an accident in the previous five years.

Well, I don’t think there is necessarily anything wrong with that in itself. But it is important to remember that a rail accident is not necessarily a Signal (sic). Rail accidents worldwide are often a manifestation of what W. Edwards Deming called “a stable system of trouble”: a system that features only Noise but which cannot deliver the desired performance. An accident-free record of five years is a fine thing, but there is nothing about a stable system of trouble that says it cannot have long incident-free periods.
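To see how easily a stable system of trouble can produce a clean five-year record, consider a minimal sketch in which accidents arrive as a Poisson process with a constant underlying rate. The rate used here (one accident per ten years) is an invented illustration, not a figure from the post.

```python
import math

# Hypothetical model: a "stable system of trouble" as a Poisson process
# with a constant accident rate (per year). The rate is an assumption
# chosen purely for illustration.
def p_incident_free(rate_per_year: float, years: float) -> float:
    """Probability of zero accidents over the given period."""
    return math.exp(-rate_per_year * years)

# An operator averaging one accident every ten years (rate = 0.1/year)
# still has roughly a 61% chance of a clean five-year record.
print(round(p_incident_free(0.1, 5), 2))  # 0.61
```

So a tender rule based on the last five years alone would wave through more than half of such operators, even though their underlying accident rate is unchanged.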

To turn that incident-free five years into evidence about likely future safety performance, we also need hard evidence, statistical and qualitative, about the stability and predictability of the rail operator’s processes. Procurement managers are often far less diligent in looking for, and at, this sort of data. In highly sophisticated industries such as automotive it is routine to demand capability data and evidence of process surveillance from a potential supplier. Without that, past performance is of no value whatever in predicting future results.
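The kind of capability data automotive purchasers demand can be sketched with the standard Cpk index, which compares a process’s spread against its specification limits. The numbers below are invented for illustration; a Cpk comfortably above 1.33 is the sort of figure such purchasers conventionally look for.

```python
# Hypothetical sketch of a process capability summary (Cpk) of the kind
# a purchaser might demand from a supplier. All figures are invented.
def cpk(mean: float, stdev: float, lsl: float, usl: float) -> float:
    """Cpk: distance from the mean to the nearer spec limit, in units
    of three standard deviations."""
    return min(usl - mean, mean - lsl) / (3 * stdev)

# A well-centred process (mean 10.0, sigma 0.2) against limits 9.0-11.0:
print(round(cpk(10.0, 0.2, 9.0, 11.0), 2))  # 1.67
```

The point is that such an index only means anything once the process has first been shown to be stable; capability data and evidence of process surveillance go together.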

Galicia rail crash – human error I

I am closely following developments arising from the Galicia rail crash in Spain on 24 July 2013. I worked on the risk analysis of some of the early UK Automatic Train Protection (ATP) systems back in the 1980s. I want to see how this all turns out.

I noted a couple of days ago some wise words from the investigating judge.

Human error is seldom the root cause of a failure. As the judge observed, human error is entirely to be expected; it is part of our common knowledge of how people perform. They make mistakes. Because human error is expected, it should be anticipated by designing a system that is robust against known human fallibility. It is the system designers who are responsible for harnessing professional engineering knowledge and expertise to protect against the known.

I await further developments with interest.