If it ain’t broke, go fix it anyway. Experimental fudge

JR Max Wheel & Graham Reid

5 November 2013

“The most dangerous untruths are truths moderately distorted.” – Lichtenberg

It is surely no accident that two respected journals published editorials on flawed methodologies in science last month: The Economist (19–25 October) and New Scientist (19 October 2013). One wonders what took them so long, as the paper to which both refer was originally published in 2005 by the eminent US epidemiologist Prof. John Ioannidis.

They highlight that research is increasingly driven by finding topics on which funding is available, rather than vice versa, a deeply worrying trend.

He took issue specifically with biotech research, and in particular neuroscience. No matter. What is pertinent is that it has laid bare some core issues: that experimental results frequently cannot be replicated, that (some) researchers lack a good knowledge of appropriate statistical techniques, and that results are accepted willy-nilly, despite peer review.

Needless to say, neither journal thought to extend its probe into areas where flawed research and modelling have proved spectacularly wrong: economic quantitative analysis (a subject we will examine in more detail in a separate post) and climate change, where, at best, results are hotly disputed and not, as most mouthpieces would have us believe, a happy consensus of settled science.

That both areas have vital implications for Government policy decisions makes this a matter of urgency.

Scientific methodology goes back at least to the Greeks and, in its refined form, to roughly the 17th century. It should consist of formulation, hypothesis, prediction, testing and analysis; so what has gone wrong?

If we follow the Economist’s analysis, it has become a mix of careerism, a rise in researchers chasing funds, poor techniques and a highly questionable peer-review process more interested in promoting a cause or securing further grants than in examining results.

Yet testing is a fundamental part of the scientific process. If a leading biotech company can replicate only 6 of 53 landmark papers, and a major pharmaceutical company barely a quarter of 67 important papers, something is seriously awry. Scientists will make mistakes, and since mistakes are vital to the process of understanding, quality review is surely essential.

There is a less comfortable viewpoint, which seems especially marked in contentious areas like climate change: deliberate avoidance, misrepresentation or manipulation of “inconvenient” data. Here the subject matter takes on a quasi-religious tone, with a selective deafness towards anyone questioning the mythical consensus. It is also deeply political, and has hence strayed right outside the boundaries of normal scientific discourse; opinion is so divided between “deniers” and “warmists” that meaningful dialogue has gone out of the window.

The recent IPCC report (AR5), whose key summary is aimed at policy makers and so will affect all of us, is now 95% certain of its finding of man-made global warming, up from 90%! Thankfully for the rest of us, Douglas Keenan has examined both the methods and the forecasts with a keen statistician’s eye in his draft analysis.

He has concluded that the modelling of the time-series data is once again deeply flawed. There are known issues with the non-linearity of climate readings over long time scales, and yet the IPCC, whilst recognizing its model’s inconsistencies, has no hesitation in drawing unequivocal conclusions from it.

Firstly, the model is statistically inappropriate; secondly, the report admits that the model’s predictions are not outside the range of natural climate variability; thirdly, the report admits that, in essence, we do not understand the data well enough to choose a model. The UK Met Office hardly comes out of this with any credit either: despite repeated Parliamentary questions about the model’s suitability and the reliability of its results, there is a deafening silence. Many errors persist from the time of the first assessment (AR1).
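Why does fitting a “statistically inappropriate” model matter so much? A minimal illustration (this is our own sketch, not Keenan’s actual analysis, and the series lengths and thresholds are illustrative assumptions): fit a straight-line trend to simulated driftless random walks, which by construction contain no real trend, and count how often the slope looks “significant” under the usual independent-errors assumption.

```python
import numpy as np

# Sketch only: not Keenan's analysis; all parameters are illustrative.
rng = np.random.default_rng(0)
n, trials = 100, 500
t_crit = 1.984  # approx. two-sided 5% critical t-value for df = 98

t = np.arange(n)
false_positives = 0
for _ in range(trials):
    # A driftless random walk: no genuine trend exists in this series.
    y = np.cumsum(rng.standard_normal(n))
    # Ordinary least-squares line and the naive standard error of its slope,
    # computed as if the residuals were independent (they are not).
    slope, intercept = np.polyfit(t, y, 1)
    resid = y - (slope * t + intercept)
    se = np.sqrt(resid.var(ddof=2) / ((t - t.mean()) ** 2).sum())
    if abs(slope / se) > t_crit:
        false_positives += 1

rate = false_positives / trials
print(f"nominally 'significant' trends found: {rate:.0%}")
```

Under these assumptions the nominal 5% error rate is exceeded many times over: when the error model is wrong, a confidently stated trend can be an artefact of the fitting procedure, which is the substance of the charge against the IPCC’s trend model.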

So we have a policy recommendation based on a dodgy model, where the data are poorly understood and the results cannot be held to be statistically significant; in short, we have no idea whether the Earth is warming, cooling or staying the same. As if this were not bad enough, the science community has closed ranks and refused to contemplate or engage with critical thought.

This is just the statistical tip of the iceberg, with no consideration given to alternative explanations such as the effects of changes in the sun’s cycles, or to a host of alternative viewpoints on the causes of climate change.

We are grateful to the Bishop Hill blog (http://bishophill.squarespace.com/) for Keenan’s analysis.

We started this article with Prof. Ioannidis’s concerns to show that existing methodologies were flawed, short-cut or simply wrong, producing false positives or false negatives, particularly in bioscience. He is right to be concerned, because it is highly likely that the same issues occur in other fields of study; in raising them he has done an important service.

The IPCC makes much of its credentials as the expert mouthpiece of the United Nations on climate; we have a right to expect higher-quality analysis, and it has a duty to provide it. Maybe the research gravy train has simply proved irresistible.

This is not science, it is opinion, embedded in virtually every nature programme and promulgated by every environment correspondent in the national media and at the BBC. So prevalent has this become that questioners are treated as hostile; it is now dogma. This is why reform of science is critical.