A couple of years ago I blogged about a new journal, Ideas in Ecology and Evolution, and its experiments in the reviewing process. I was sceptical then, but happy to be shown wrong: I think we need these experiments to find out what works. This year, Ideas in Ecology and Evolution (IEE) is starting a new experiment, and this time they seem to have jumped the shark.
The experiment is announced in a new IEE editorial. Briefly, they propose the following process:
- The author solicits his/her own referees, and can pay them for their trouble.
- If revisions are suggested, the author does them and asks the reviewers to re-evaluate the manuscript.
- Once the author has two favourable reviews, they send the manuscript, with the reviews, to the journal.
- The journal checks with the reviewers that they have reviewed the submitted manuscript, and can solicit more reviews before reaching a decision on acceptance.
- If the journal accepts the paper, the referees’ names are disclosed in the published version.
- If the journal rejects the paper, it can be submitted to another journal, with the same reviews.
There are two innovations here: the author (not the editor) solicits reviews, and reviews can be re-used. The latter, recycling reviews, looks generally like a good idea – it’s not perfect but I’m sure we’d live with the problems. But it’s only going to work if journals will accept reviews from their competitors, and this is where the new IEE policy breaks down. Firstly because they’re the only journal doing this, and secondly because it’s unlikely any other journal would accept these reviews.
The problem is simply the idea that authors can invite whomever they want to review, and choose which reviews to send to the journal. And they can pay for the review! Has whoever thought up that idea ever heard the phrase “conflict of interest”?
The first problem is that, despite the protestations of the IEE editors, it’s difficult to see how letting authors solicit their reviews will help. The editors think that it will lead to a greater incentive for reviewers:
Although the [IEE’s] model can work in theory without any requirement that referees be paid, paying referees – combined with published referee acknowledgement, plus opportunity for referees to post/publish their commentary on the reviewed paper (in the event that it is published) – provides the important principal advantage of referee incentive.
and thus to a greater overall quality:
Paid service combined with published referee acknowledgement, and opportunity for referees to post/publish their commentaries on reviewed papers, would not only minimize referee bias and promote greater referee accountability, but would also engage referees more directly in the mission for discovery that the manuscript represents. Any worry that the [IEE’s] model might be inferior to the conventional process for manuscript merit judgment is unfounded when recognizing the currently limited record of success for reviewing panels of alleged experts (Wardle 2010).
I’m not sure that money and having one’s name listed on the paper are a huge incentive. Several journals list the editor who took charge of a paper, but can you remember who the editor was of the last paper you looked at? If the paper is a real turkey, people might look at the referees’ identities, but then such a paper should be spotted by the referees and editors anyway.
Having your comments published seems like an incentive, but it obviously doesn’t count as a peer-reviewed publication, so I wonder how much importance would be attached to it.
BTW, the only evidence the editors give is the Wardle paper, also published in IEE. It’s not about pre-publication peer review, and only attempts to critique how the F1000 panel judged the importance of ecology papers. The relevance is dubious at best (and the paper itself isn’t terribly good either).
What of the conflicts of interest? Well, the reviews are solicited by the author, and possibly paid for by them too. The contract (whether formal or implicit) is between these two parties, so the reviewer’s responsibility is to the author, who wants a good review (i.e. one which will lead to acceptance of the paper; this may not mean uncritical, but that any criticism can be handled in revision), and not to the journal, which wants an honest (and reasonably objective) opinion. It’s difficult to see how this improves matters. Authors will never ask Dr. McGit for a review, because they know they’ll get panned. And Dr. McGit can ask lowly students or post-docs to review, because (a) they need the money to feed their macaroon habit, and (b) they’re too scared of what he’ll say to his colleagues when they ask for a job.
OK, I’m exaggerating, but I think the point still holds: a referee will feel beholden to the author, and will want to please them, not a nebulous journal that may print their name in a year or two. Even if this bias isn’t conscious, I think it would still be there (anyone know of any studies, e.g. in psychology, that test this?).
What makes it worse is that the author can select their reviewers, so if they don’t like a review, they can just ask someone else for another one, without telling the journal. I can’t see how a journal can protect against that.
If the journal is to retain any integrity, it has to label manuscripts for which authors solicited reviews, and if the reviewers were paid by the authors, this should be declared as a conflict of interest. And would you trust a review process which would allow this?
I agree it may not work.
However, I don’t think there are insurmountable in-principle reasons why it is a bad idea.
Authors choosing reviewers is common, if informal (ie authors submit papers to journals with reviewer suggestions). I think this is formalised (public) in some journals, eg Biology Direct.
I think the author needs to declare to the journal that her/his proposed reviewers are independent, ie have not collaborated or are not colleagues of the author, and have no personal connection.
Again, no reason in principle not to pay reviewers, though if the author has paid them I agree this should be declared in the published version that names the reviewers.
If the ms is rejected after all that (how likely is that if the author has chosen agreeable reviewers?) then I think other journals could find refs’ comments useful – particularly if signed. Or at least, the second journal has a choice as to whether to use them or not, so it can’t do any harm.
I think there are lots of practical reasons why this process may not work, eg time and commitment by reviewers, set-up and/or admin costs to the journal staff, etc. It will be interesting to see what happens. It’s a good thing, I think, that various models are being tried, at least.
Heh – I’d noticed the Wardle paper in IEE as well, and had even drafted a short post about it (in prep). I was wondering if F1000 had picked up on it…
Without commenting specifically on the new IEE review model, I will point out that, excepting the potential for financial exchange, it doesn’t really differ from one of the submission options PNAS used to offer. I’m not sure if they still allow authors/members to submit papers along with self-solicited reviews, but I’m pretty confident they used to. So, the IEE model is not that novel an idea in ecology or evolution.
Sounds wrong on several counts. As an author, I have no problem with suggesting reviewers to the journal, but am going to be uncomfortable actually soliciting a review from someone I don’t know, without the legitimacy lent by a journal’s official correspondence. So I’ll just ask my mates (probably accentuating any bias that exists in selection of reviewers by age, gender, institution or nationality, which some of the IEE gang have previously railed against). And which pot of money do I use to pay them?
As a reviewer – well, like most scientists, I review a load of stuff anyway, with no compulsion to be publicly acknowledged. I keep a personal record of the numbers of papers I review for different journals, which I can summarise on my CV; otherwise I’m happy just to play my small part in the big machine. And realistically, how much cash would it take for me to agree to a review that I otherwise would not have agreed to do? £50? £100? Certainly not less than that.
As an editor, I can’t see what I gain from this, over and above authors suggesting potential reviewers (which is obligatory for plenty of journals now). Typically (and I think I’m typical in this) I’ll try to get reviews from both suggested refs (including ‘non-preferred’ ones, if no good reason has been given), and refs I have independently identified, to try to overcome the issue of people simply suggesting their mates (although I’m not sure authors often do this, actually). If an ms arrives with reviewers’ comments, there are two ways it could go. If the comments are pertinent and important, why are they not already incorporated in the ms? If they are not, it’s wasting my time to read them.
Maxine, a while back I think you were bemoaning the number of times that people start from the assumption that the peer review system is broken and needs a radical overhaul (this seems to be one of the recurring ‘ideas’ appearing in IEE, for example), whereas the evidence (our stuff on gender bias, for example, or the letter in Nature last month about the ample supply of peer reviewers) suggests otherwise. More and more, reading about some of the odd alternatives people come up with, I’m coming round to the view that the current system is really rather good.
Thanks for your comments, folks. I can’t really say much other than "I agree". Maxine – does NPG re-use peer reviews? I would have thought it would be ideal for manuscripts that get rejected from Nature for being too specialised, but which would fit into <i>Nature Belly Fluffomics</i>.
Mike – F1000 is aware of that paper. I mentioned it to Richard Grant, and he pointed out that they’re using data from F1000’s first year (before everything was running properly). If you’re writing a response, blog it first so we can all join in the fun.
I think the current system works fine, at least for journals that have a decent editorial process. But a bit of experimentation is fine, too….(Note that I wrote that authors/refs would need to declare that they are not the author’s mates, etc;-) ).
Bob – NPG leaves it up to the author. If you are rejected from a Nature or NPG journal and want to use our manuscript transfer service to resubmit to another, you can resubmit your paper and referees’ reports to a new journal with one click. The receiving journal editor will know who the referees are for transfers between Nature journals, but not for transfers that involve an NPG journal (one that is not called "Nature x").
If you as the author want to resubmit to another NPG journal without including refs’ reports then you can of course do that too, you just have to use the usual submission system for the journal of choice, and not the transfer service. The editors of the journal you submit to won’t have any knowledge of previous submissions unless you tell them.
A separate issue with payments to referees is that many of them would not be able to accept payment anyway, eg US govt employees (NIH etc).
One more thought – if all submissions come with glowing referees’ reports (of course they will, you would not submit anything with negative comments), then the ref reports essentially become equivalent to letters of recommendation. The editor then makes a decision based on – what, exactly? Personal preference / prejudice? The only publication model I could see this working for would be a PLoS ONE style, ‘accept everything that’s sound’ one, where the purpose of the ref reports is just to say ‘I confirm that I think this is OK’. For a selective journal, it would just be sidestepping the rigour of proper peer review.
But if a manuscript comes in with glowing reports, an editor would probably be suspicious, and would want to solicit more reviews. So nothing is gained. One therefore wants good but not glowing reviews: there have to be things that are criticised, but which are not terminal.
Either way, the editor won’t receive reviews which say "this paper should be rejected", but that’s the option that’s needed if the review process is to be meaningful.
The neat thing about the blogosphere is that anyone can present whatever opinion they like without backing it up, so I’m not particularly bothered about Bob O’Hara’s opinion about my article on F1000 being ‘not terribly good’. However, what is of greater concern is his remark in a subsequent comment, which is incorrect, the source of which is apparently Richard Grant (employed as the ‘Information Architect’ for F1000, and therefore hardly an obvious choice for impartial comment on my article). My article evaluated F1000 ratings of papers published in 2005 (many of the ratings appearing in 2006 or later); F1000 ratings started appearing in 2000 and the first ecological ones in 2004, so there was plenty of time to sort out teething problems. So, it did not ‘use data from F1000’s first year (before everything was running properly)’ as O’Hara states. Finally, this raises a very interesting issue. F1000 was actively advertising the wonders of their product and selling it for profit well before 2005. If their representative now claims that their product at that time had such big problems as to render my analysis irrelevant, then does that mean that at that time they were marketing a product that they knew to be dodgy?
Actually, I think Bob may have misunderstood what I said to him. I said the paper was looking at Ecology’s first year of evaluations, and we know it takes about 18 months for a Faculty to get up to speed. Sorry for that confusion.
Ecology is, of course, just one of about 40 Faculties: some older, some newer than that. If one were to deliberately set out to discredit F1000 then choosing ecological papers published in 2005 would be a good way of doing it.
Liz Allen published (doi:10.1371/journal.pone.0005910) a much more thorough and thoughtful analysis, and one that was ‘properly’ peer-reviewed, to boot.
I must really respond to this last point in case the reader is misled. My article was sent to two referees, selected by the Chief Editor and without my input, and the reviews were just as rigorous and critical as one would expect for any manuscript. By the way, if anyone is interested in repeating my analysis for a more recent year then, using the search engines in WoS and F1000, it would be pretty easy to do and could be done in under three hours. I predict they would find a similar result.
As an addendum. Given the above criticisms of my article, I just quickly looked up the F1000 ratings of the 10 most cited papers published in 2008 in Ecology Letters (widely seen as the most selective journal in ecology). Of these, 8 were not identified or rated by anyone in F1000, and none of the top 5 were. So, I think it is likely that an analysis performed on a more recent year would generate the same result.