Over at the Scholarly Kitchen, everyone’s favourite source material for winding up OA advocates, Phil Davis asked about something only tangentially related to open access: Do Uninteresting Papers Really Need Peer Review?
In it he lays out a view that is perhaps selfish, but understandably so. He outlines why he only agrees to review a few papers, and what sort he will review:
For me to accept an invitation to review, a paper has to report novel and interesting results. If it has been circulated as a preprint on arXiv, then I don’t benefit from seeing it a second time as a reviewer. Similarly, the paper must also pique my interest in some way. Reviewing a paper that is reporting well-known facts (like documenting the growth of open access journals, for instance) is just plain boring. Test a new hypothesis, apply a new analytical technique to old data, or connect two disparate fields, and you’ll get my attention and my time.
The only other category of manuscripts that I’ll accept for review are those that are so biased or fatally flawed that it would be a disservice to the journal or to the community to allow them to be published. These papers must really have the potential to do harm (by distorting the literature or making a mockery of the journal) for me to review them.
Which, I guess, means he’s only interested in reviewing a small percentage of papers that come his way. As a journal editor, I find this attitude worrying, but as a potential reviewer, I understand it perfectly: there is only so much time in the day, so I don’t really want to spend it reading boring manuscripts (a plea: if you find yourself not wanting to review a manuscript, please suggest another victim, e.g. a post-doc or senior grad student. That really helps me, and whoever takes on the review gets experience).
Davis’ solution to the problem is to suggest that a boring paper doesn’t need to be reviewed fully:
Perhaps all that is needed is to send null and confirmational results through perfunctory editorial review. These articles may only require passing a checklist of required elements before being published in a timely fashion. The result may be a cheaper and faster route to publication, and for some kinds of publications, this is exactly the desired outcome.
As an editor, my reaction to this is “eek”. In the comments, Kent Anderson lays out an important complication:
The purpose of peer review is both to improve the paper and to help it find the right outlet.
…
But let’s not kid ourselves — not every paper needs to be peer reviewed as rigorously as some, and there is no single thing called “peer review.”
Finding the right outlet is something that editors can certainly help with through a perfunctory review. At Methods in Ecology & Evolution, we receive quite a few papers that are not suited to us, and I try to suggest alternative outlets. But improving a paper is something that often needs more time: someone who knows the area has to read the paper carefully and look for ways it can be improved. As an editor, I often don’t have that depth of knowledge myself, which is why we ask reviewers.
But I am also sympathetic towards reviewers who agree to review a paper that turns out to be boring. The sorts of papers Davis is discussing are boring because they replicate other work without adding much that is novel. So they are a priori identifiable as worthy but not terribly exciting, and this should influence the choice of which journal the paper is sent to. In particular, somewhere like the Journal of Negative Results – Ecology & Evolutionary Biology would be a natural choice. But given that we already know such a paper is going to be difficult to find reviewers for, and is also not going to have a huge impact, can we not try to make these papers more palatable for readers, without compromising on the reporting of the work? My feeling is so 2008: Yes, we can.
Most papers we receive at JNR are written like standard papers: an introduction that lays out the problem and the literature, the methods describing a sanitised version of what was done, the results, and finally a discussion explaining why this work is so important that Nature should have accepted it at once. My feeling is that we could improve the readability of papers if we cut down the introduction and discussion massively. This is especially so when the paper is a replication: the intro could be pretty much “We repeat the seminal work of Gee & Grant, but in the Greater Rumple Horned Snorkack. Read their paper for why this is so interesting”. And the discussion could be similarly brief: “We found similar results, but with a stronger effect of Ewok urine. This might be because of the smaller size of the Greater Rumple Horned Snorkack.” There is probably no need to explain why this is important for global warming: if it was that important, either someone else will have already said it, or you would be publishing the work in a journal with a higher impact factor, like PLOS One.
Would this help? I guess if we told potential reviewers the word count, it might. It also might help to get more negative results published, by lowering the barrier to writing them up. Would losing the text harm science? The only harm I can see is that students writing up negative results might not learn to put their work in context, but one would hope that not all of a thesis is like this, and if that were seen as a problem, extra discussion could be added to the thesis rather than the paper.
Is this something we should try at JNR-EEB? And for those of you not in ecology or evolutionary biology, does this sound familiar, or is the tedium of the discussion section something only we face?