Publication bias is the tendency to report positive results differently from negative or inconclusive results, biasing the overall literature (see the Wikipedia article and this tutorial at the Cochrane Collaboration). Aficionados of evidence-based practice and meta-analysers of research worry that such bias makes it hard to interpret the literature accurately (though there is bias and bias). In the clinical field it was hoped that registration of clinical trials would improve the situation, creating more pressure to publish the results of all trials. Nature reported in 2009 that
“Fewer than half of published trials are adequately registered, and, on the other hand, fewer than half of registered trials are ever published in peer-reviewed journals”.
In a letter to Nature last year Bob O’Hara pointed out that a number of journals do publish negative results, listing the following as titles devoted to or including negative results:
- Journal of Negative Results in Biomedicine
- Journal of Negative Results — Ecology and Evolutionary Biology
- Journal of Articles in Support of the Null Hypothesis
- Journal of Universal Computer Science
- PLoS ONE
Incentives for publishing negative or inconclusive results are lacking though.
Now a new publication, the Journal of Errology, aims to help change this. They say:
“apart from sharing successful results and data, it is also important for researchers to share the experiences learned via their trials and tribulations. We can say with certainty that nothing till now has not been discovered or invented without its fair share of failures, mistakes, errors and problems.”
So far the journal seems to be empty. Not just empty of articles but also empty of editors and reviewers. Even the FAQ page is empty. Still, it is an interesting idea and an arresting title – Errology. I wonder how long it will be before we see a Professor of Errology?
If this were the first of April, I'd suspect you were having us on.
Part of publication bias must be attributed to the audience, surely? We all like reading about things that are novel, that worked, and that yielded beautiful results in an elegant way. Journal of Negative Results in Biomedicine just doesn’t seem like a journal that would capture my interest. Yes, I’m part of the problem, I suppose.
I would make a great professor of Errology. I definitely practice what I perch.
Bob – Your designation would be a bit unpronounceable. “O’Hara, Errology”.
That would be so appropriate.
I think it’s a great idea. Since grant money is used for all the experiments, not just the ones that worked, all results that come out of a funded project, whether positive or negative, should be published somewhere. This matters because in some fields many people have the same ideas, and there is no point in many labs wasting their time only to realize that an experiment is not going to give the results they expect. A few years ago we had the idea that every published paper should have an online attachment giving the details of all the false starts in the project and every experiment that did not work. This would be a great place to search when you have a grant idea: you could check whether someone had already tried a similar experiment and find out why it didn’t work in their hands.
Richard – I wish I had thought of that! It would have made a great April 1 double bluff.
As for the audience, I don’t imagine that anyone would eagerly read through each issue looking for non-results in their field, but if you are searching for the effect of X on Y and find a few articles about the absence of an effect of X on Y, then I think you probably would be interested in them.
MGG – I agree it’s a good idea. Perhaps your new “failures section” should become a standard feature of articles, though presumably the failures sometimes happen at such an early stage that no publication results from that strand of work.
I think the real problem is motivation and the time required to write up something that will not contribute much to your reputation. “He’s got a great series of papers showing no results…He really closed down a whole field of study”.
I think in reality it will not happen on a large scale. If there were to be a gradual shift to Open Notebook Science, where experiments become open by default at some stage, that would achieve something like the same result with less effort. But I think that would take a decade or two.
Frank — Thank you for the article on our new Journal. We launched just about two weeks ago and are still in the beta stage, hence no articles or editors yet. Meanwhile we are looking for editors, reviewers and even suggestions.
MGG — Thank you for your supportive comments. Helps a lot. I would like to hear more from you. Please mail me at mahboob [at] bioflukes.com .
Frank if you have any more questions, please feel free to contact me. I would be more than happy to answer any….
Mahboob – thanks for visiting and commenting. I wish your journal well; I appreciate it takes time to get something so novel established. I am all for experimentation in publishing and diversity of outlets.
An alternative is to publish your data in FigShare (there might be alternatives I am not aware of). It is not a journal, but it means your data is available for re-use, and backed by NPG so it won’t go ‘poof’ after a couple of years. And who knows, maybe by combining negative data from several studies you could actually discover something novel!
Thanks for that Nico. I had seen that Figshare had a bit of a relaunch recently, but I confess I still had it pigeonholed in my mind as a place just for sharing images or figures, rather than data more broadly.
From the FAQ I note
I must look more closely at Figshare.
I also understand that the authors of some data can then add to that data while keeping the same handle, which is quite nifty. Data can also be grouped, so you could have a table with the raw numbers and a figure showing, say, a graph based on that data, and have them both linked. If I were still researching I would be using this!
Just spotted Ivan Oransky has a post up on Retraction Watch about the Journal of Errology, with a few more searching questions.
On Twitter he also mentioned the Journal of Unsolved Questions (JUnQ), which aims to “gather ‘null’-result research and open problems”.