Over at the Scholarly Kitchen, everyone’s favourite source material for winding up OA advocates, Phil Davis asked about something only tangentially related: Do Uninteresting Papers Really Need Peer Review?
In it he lays out a view that is perhaps selfish, but understandably so. He outlines why he agrees to review only a few papers, and what sort he will review:
For me to accept an invitation to review, a paper has to report novel and interesting results. If it has been circulated as a preprint on arXiv, then I don’t benefit from seeing it a second time as a reviewer. Similarly, the paper must also pique my interest in some way. Reviewing a paper that is reporting well-known facts (like documenting the growth of open access journals, for instance) is just plain boring. Test a new hypothesis, apply a new analytical technique to old data, or connect two disparate fields, and you’ll get my attention and my time.
The only other category of manuscripts that I’ll accept for review are those that are so biased or fatally flawed that it would be a disservice to the journal or to the community to allow them to be published. These papers must really have the potential to do harm (by distorting the literature or making a mockery of the journal) for me to review them.
Which, I guess, means he’s only interested in reviewing a small percentage of papers that come his way. As a journal editor, I find this attitude worrying, but as a potential reviewer, I understand it perfectly: there is only so much time in the day, so I don’t really want to spend it reading boring manuscripts (a plea: if you find yourself wanting to not review a manuscript, please suggest another victim, e.g. a post-doc or senior grad student. That really helps me, and whoever gets to review gains experience).
Davis’ solution to the problem is to suggest that a boring paper doesn’t need to be reviewed fully:
Perhaps all that is needed is to send null and confirmational results through perfunctory editorial review. These articles may only require passing a checklist of required elements before being published in a timely fashion. The result may be a cheaper and faster route to publication, and for some kinds of publications, this is exactly the desired outcome.
As an editor, my reaction to this is “eek”. In the comments, Kent Anderson lays out an important complication:
The purpose of peer review is both to improve the paper and to help it find the right outlet.
…
But let’s not kid ourselves — not every paper needs to be peer reviewed as rigorously as some, and there is no single thing called “peer review.”
Finding the right outlet is something that editors can certainly help with through a perfunctory review. At Methods in Ecology & Evolution, we receive quite a few papers that are not suited to us, and I try to suggest alternative outlets. But improving a paper is something that often needs more time: someone who knows the areas has to read a paper more carefully, and look for areas where it can be improved. As an editor, I often don’t have enough knowledge about these areas, which is why we ask reviewers.
But I am also sympathetic towards reviewers who agree to review a paper that turns out to be boring. The sorts of papers Davis is discussing are boring because they replicate other work without adding much that is novel. So they are identifiable a priori as worthy but not terribly exciting, and this should influence the choice of which journal the paper is sent to. In particular, somewhere like the Journal of Negative Results – Ecology & Evolutionary Biology might be a typical choice. But given that we already know such a paper is going to be difficult to find reviewers for, and is also not going to have a huge impact, can we not try to make these papers more palatable for readers, without compromising on the reporting of the work? My feeling is soo 2008: Yes, we can.
Most papers we receive at JNR are written like standard papers: an introduction that lays out the problem and the literature, the methods describing a sanitised version of what was done, the results, and finally a discussion explaining why this work is so important that Nature should have accepted it at once. My feeling is that we could improve the readability of papers if we cut down the introduction and discussion massively. This is especially so when the paper is a replication: the intro could be pretty much “We repeat the semolina work of Gee & Grant, but in the Greater Rumple Horned Snorkack. Read their paper for why this is so interesting”. And the discussion could be similarly brief: “We found similar results, but with a stronger effect of Ewok urine. This might be because of the smaller size of the Greater Rumple Horned Snorkack.” There is probably no need to explain why this is important for global warming: if it was that important, either someone else will have already said it, or you would be publishing the work in a journal with a higher impact factor, like PLOS One.
Would this help? I guess if we told potential reviewers the word count, it might. It also might help to get more negative results published, by lowering the barrier to writing them up. Would losing the text harm science? The only harm I can see is that students writing up negative results might not learn to put their work in context, but one would hope that not all of the thesis is like this, and if that was seen as a problem, extra discussion could be added to the thesis rather than the paper.
Is this something we should try at JNR-EEB? And for those of you not in ecology or evolutionary biology, does this sound familiar, or is the tedium of the discussion section one only we face?
Yes please. Introductions are so rarely useful. A link to a Wikipedia page would generally do. Maybe combine the discussion and introduction at the end: this is what we did, and this is where it fits in the previous literature.
As an editor it seems you could lead the way in this.
I’ll have to discuss this. I should say that introductions can sometimes be useful for me, if they lay out the problem clearly and concisely. I suspect for most papers 3 or 4 paragraphs are enough to do this.
Hm, I wonder if anyone has looked to see how often something said in the introduction or discussion is used later.
Evidently O’Hara’s manuscript was not copyedited, let alone peer reviewed: ours was a ‘seminal’, not ‘seminar’ paper.
Or perhaps ‘Seminole’.
Ah, thanks. Corrected. Well, just about.
More tapioca, I’d have thought.
Well, this is why we Editors at Your Favourite Weekly Professional Science Magazine Beginning With N exist.
The party line is that we add value – helping good papers become better, steering them through peer review, so that when the papers get published they have more explicit and hopefully reproducible methods, better controls and so on and so forth, which they didn’t always have when they came in.
But perhaps one of our most important jobs – certainly one that takes up most of the time – is that we reject approximately four out of every five papers without sending them to peer review, so that the papers we do send to review are likely to pique a referee’s interest. Very few of the papers we reject outright are bad, as such – but most of them corroborate something already known, or present parochial results which, while interesting, cannot be seen to contribute anything much that’s of fundamental importance. These are the papers which belong, quite rightly, in the specialist literature, or in fora such as PLOS or our own Scientific Reports, where the people who really want the latest about the Crumple Horned Snorkack can find them.
How do you do that fairly but efficiently? At MEE I reject about 50% without review, and I feel obliged to read all of the manuscripts (albeit not always as carefully as if I was reviewing).
I guess the other problem you have is telling people what is and isn’t worth submitting. I guess that’s even worse than for us at MEE, because (almost) everyone wants to get something published in Nature or Science.
All I can say is that we do our best. If we aren’t sure, we confer, we argue, we even – gasp – change our minds. We read the manuscripts. We look up the relevant literature. Yes, it takes time. Which is why we need quite a lot of full time editors to cope with the 11,000 or so submissions we receive each year.
Can’t see at all how “If it has been circulated as a preprint on arXiv, then I don’t benefit from seeing it a second time as a reviewer.” is a problem – as if anyone had the time to read every paper on the arXiv. And even if I had read it already, I’d actually be more willing to do the review, because it would cost me less time.
I agree that introductions could often be shorter (in other fields they are), but I’m not sure whether I would be more willing to review papers without an introduction: a (short) introduction helps to understand the motivation for the study. Typically, as a reviewer, I’m not spending a lot of time on this part of the paper anyway. I think it would save time, though, to concentrate the review process more on the scientific content and not so much on the presentation.
Ceterum censeo: it would be a big advance to make the reviews and responses public!
Also, if it’s been circulated on arXiv, hopefully the community there will have recommended improvements, so the journal submission may be different (and better).
I found that arXiv comment odd too. It assumes that there is sufficient peer review there already; my guess is that it is patchy, at best.
Wouldn’t be the first time a KT contributor confused the humorous and the humerus.
Ewoks don’t produce urine. They do produce an oily, sweet smelling scat though.
Less importantly,
(1) I don’t want to have to interrupt my reading flow to find how the authors actually mangled testing a hypothesis through their experimental design, by having to go somewhere else to read the Methods.
(2) Intros could easily be shorter, but this hinders entry into a field by a new reader. The right balance is hard to find.
(3) I still think we should be making it harder to publish, not easier. There’s already too much literature out there to keep up with it.
We can discuss Ewok excretory mechanisms later, but…
(1) I largely agree, although there is scope for briefly summarising the methods, and giving full details elsewhere. But YMMV. If someone is just replicating what everyone else did, you probably want to read Else, E. anyway.
(2) Should a new reader really rely on one intro? Would it be better to simply cite a review that will act as entry-level material?
(3) Ah well, with boring stuff you don’t have to read it, just wait for the meta-analysis. 🙂
Yeah, I could be swayed on the Intro issue. I suppose the real need for Introductory scene setting comes when setting up paradigm shifting results, which don’t come along very often. I’m currently reviewing an article that points out early that the subject has recently been reviewed to death, but then goes on to summarise the history of the field, ‘briefly’, over 2 pages anyway. That just seems rude 😉
Now, about those scats, man…
Quite off-topic, but having helped to compile data for a meta-analysis during the last 1.5 years, I feel that more people really should actually *read* more papers. I mean, really read them. We found a whole bunch of papers citing other papers which turned out to say something different from what the citing paper claimed.
And, while I’m at it with the complaining, we found another big pile of papers which didn’t even give basic statistics. Sometimes even the N was missing, which pissed me off so much I nearly wanted to put up a tumblr with quotes from papers that fail to report, well, basic statistics. But then you don’t do something like that, because it pisses off people who have already published more than you have, right?
When I’ve been involved in meta-analyses, someone else has collected the data, but I still had to deal with the missing data in the analyses. I agree, very annoying.
Can’t you set up tumblr pseudonymously? 🙂
Yes we can. But then I’d be even more worried about pissing people off. And it would be easy to link it back to the meta-analysis, I guess: all the papers would have something to do with its subject, and a lot will still get cited there. It’s not even my work, so I would be doing harm to the authors. I’m just part of the academic plankton helping to process the data.
(1) I don’t want to have to interrupt my reading flow to find how the authors actually mangled testing a hypothesis through their experimental design, by having to go somewhere else to read the Methods.
This.