Negative results are boring

A few weeks ago, The Economist ran an interesting series of articles chronicling “How science goes wrong.” While most of their points were – or should be – painfully obvious to academics, seeing them in writing has its merits, and helping to educate the general public on some of these issues is a laudable goal. You can read through the articles themselves here and here, but to summarize: a dreadful majority of published research findings are false. The few groups that have tried to replicate research findings – including (especially?) those published in major journals – have found that they are unable to do so in many cases.

The causes are manifold, and include dubious statistical knowledge and, in rare cases, fraud. But the core of the issue is that false positives arise as a natural consequence of the perverse incentives for publication and funding: namely, to publish positive results and to never waste time double-checking the results of others. After all, what does any individual researcher gain by showing that a published result can’t be replicated, other than possibly exposing their own ignorance or lack of skill?

The consequences of this may not be dire, because a plausible argument can be made that science is progressing to some degree and that if a finding is false it will be uncovered as such sooner or later. But that thinking ignores the fact that, in the meantime, a significant amount of money (often taxpayer-funded) is being wasted as researchers pursue false leads, which ultimately slows down scientific progress. Thus, there is or should be a strong impetus to try to identify and solve these issues.

It is in this context that I wrote the following (unpublished) letter to the editor, with a few additions for the purpose of clarity in this blog post:

‘’’

You will find little disagreement among the scientific community on the urgent need to publish negative results, as well as results that may simply (in)validate previous research. Unfortunately, I’m also fairly sure that only an infinitesimal few among that very same scientific community will care to read or cite the journal that publishes such findings, nor are they likely to respect their colleagues who make careers doing so. Were such a journal to be profitable and/or prestigious, my unabashed faith in the profit motive, especially in the fiercely competitive publishing market, is such that it would likely exist already.

There are a few reasons why this journal doesn’t exist. Publishing negative results is particularly challenging because most results are negative. There are an infinite number of false hypotheses (think: eating hamburgers on Tuesdays reduces the risk of spinal cord injuries in Spanish males aged 37-41), but very few true hypotheses. Is it more interesting that a novel drug does not slow the rate of Alzheimer’s progression, or that the same drug does slow the rate of tumor growth? For a negative result to be remotely interesting, it needs to be plausible, and plausibility is subject to interpretation. Moreover, for a negative result to be interesting, the underlying experiment also has to be performed well.

Even if a manuscript addresses a plausible hypothesis, an infinite number of minor experimental perturbations will lead to negative results, while very few will lead to positive results. To go back to the previous example, suppose a drug does slow the rate of tumor progression. It is likely to do so only within a narrow concentration range, perhaps in conjunction with a second pharmacological treatment, and/or within a certain time window. Clearly, when you start accounting for all these possibilities, there are an enormous number of hypotheses that can be tested, and most of them will yield a negative result. Yet, are those negative results interesting?

Even if we limit the conversation to negative results that refute previously published results, a plethora of minor differences in protocol might explain why one lab sees a result while another is unable to verify the finding. Even if researchers choose to share these negative results, the “findings” will likely be published in a much lower-quality journal (in the absence of any kind of fraud or outright error on the part of the original authors), and the burden will be on future readers to seek these refutations out. Journals could help to alleviate this issue by publishing these negative results alongside the original article, or at least by linking any disputed article directly to its detractors – but who is to say these detractors aren’t just bad at the experimental protocol?

Solving the problem of spurious and unrepeatable results will require creative solutions that few individuals have any incentive to pursue. This is a classic case of market failure, in which solutions must come from a position of authority such as funding agencies or publishers. Grants could include stipulations that all positive results funded by the NIH, for instance, must be replicated by an independent team of researchers. Why not? Or perhaps there should be specific grants/agencies/labs whose sole function is to fund and publish fact-checking research – otherwise, why would anyone care to do it? Of course, any of these proposed solutions will require a major shift in the way our complex funding bureaucracies work. I, for one, will not be holding my breath.

‘’’

—Adam Hockenberry