Opening peer review for inspection and improvement

ASAPbio Peer Review Meeting

For me the most memorable event at last week’s ASAPbio-HHMI-Wellcome meeting on Peer Review, which took place at HHMI’s beautifully appointed headquarters on the outskirts of Washington DC, was losing a $100 bet to Mike Eisen. Who would have guessed he’d know more than I did about the intergalactic space lord and UK parliamentary candidate, Lord Buckethead? Not me, it turned out.

Mike was gracious enough not to take my cash, though now I owe him multiple beers over the coming years. I doubt that the research community will have fixed peer review within that timeframe – there are no one-size-fits-all solutions to the multiple problems that were discussed at the meeting – but I did at least come away with a sense that some real improvements can be made in the near future. The gathering was a heady mixture of expertise, passion, ambition, blue-skies thinking and grey-skies reality checks.

I won’t attempt to summarise all of our deliberations. My brain no longer works that way and in any case the super-capable Hilda Bastian did it as we went along (and there is also a brief report in Science). Hilda was one of several people I knew from the internet whom I met in the flesh for the first time in Washington. For all our technology, nothing can yet touch the levels of sympathetic engagement that come from encountering your peers in the real world. It is still the best place for exchanging ideas — and for (ahem) exposing them to sobering correction.

After all was said and done, there were three ideas that I hope will endure and soon become more widely adopted. Each carries a modest cost but the benefits seem to me to be incontrovertible.

1. Open Peer Review: There are many definitions of ‘open peer review’ (at least 22 according to Tony Ross-Hellauer), but the one I have in mind here is the practice of publishing the reviewers’ anonymous comments, alongside the authors’ rebuttal, once a paper is finally accepted for publication. To my mind this degree of openness improves peer review because it incentivises professional conduct on the part of reviewers, thereby reducing the scope for sloppy work or personal attacks. It also makes the conduct of science significantly more transparent. While there was some concern that special interest groups operating in contentious areas of research like vaccines and climate science might derive ammunition by cherry-picking critical reviews, I am firmly of the view that the research community has to be ready to defend itself in the open. Closing the door on our proceedings and expecting the public to trust us will only fuel those who are already too quick to malign the scientific establishment as a conspiracy.

There are yet more benefits. Opening peer review helps to reveal the human and disputatious nature of progress in research. That will dispel the gleam of objective purity that sometimes clings, unhealthily, to the scientific enterprise. Being more open about how science works will build rather than undermine public trust. It will also help to burnish the reputation of journals that insist on rigorous peer review, and expose the so-called predatory journals that levy hefty article processing charges while providing no meaningful review.

Open peer review also paves the way for reviewers to claim credit for their work. This can be done anonymously via systems like Publons, which liaises with journals to validate the work of reviewers. Greater credit can of course be claimed if reviewers agree to sign their reviews, since this allows the reviews to be cited and accessed more effectively (especially if they are assigned a DOI – digital object identifier). However, views at the meeting on whether reviewers’ names should be made public were mixed. There are concerns that early (and not-so-early) career researchers might pull their punches if reviewing the work of more senior people. And there are risks, as yet untested as far as I know, of possible legal action against reviewers who make mistakes or who criticise the work of litigious researchers or corporations.

The overheads associated with publishing reviews are not negligible. Some effort will be required to collate and edit reviews (e.g. to remove personal or legally questionable statements, though this burden should diminish as reviewers become accustomed to openness); and there are technical hurdles to incorporating reviews into journal publishing workflows. However, none of these is insurmountable, since several journals (e.g. EMBO Journal, PeerJ, Nature Communications) are already offering open peer review.

As Ron Vale wrapped up proceedings on the main day of the conference, I could sense him urging the room with every fibre of his being to recognise open peer review as an idea whose time has come. I think he’s right.

2. Proper acknowledgement of peer reviews by early career researchers (ECRs): Although the vast majority of peer review requests are issued to established researchers, in many cases the work is farmed out to PhD students and postdocs. When done properly, this can provide valuable opportunities for ECRs to learn one of the important skills of the trade, but too often their contributions are unacknowledged. Either the principal investigator does not bother to explain to the journal that the review is entirely or partially the work of junior members of their lab, or even if they do, the journal has no mechanism for logging or crediting that input.

It became clear at the meeting that this is an issue for ECRs and a sore one at that. Ideally of course, the fix would come from PIs being more transparent about how they handle their reviewing caseload, but a more effective solution would seem to be for journals to issue clearer guidelines – both to enable PIs to recruit ECRs to the task and to ensure that the journal formally recognises their contribution. Services such as Publons that offer to validate peer review contributions could also help out here.

3. Add subjective comments to ‘objective’ peer review: This is a counter-intuitive one. I am a fan of the ‘objective’ peer review established at open access mega-journals such as PLoS ONE, where the task of the reviewer is to determine that the work reported in the submitted manuscript has been performed competently and is reported unambiguously, but without any regard for its scientific significance or impact. But, as was pointed out in one of the workshops at the meeting, reviews adhering to these criteria may well be devoid of the full richness that the expert reviewer has brought to their close reading of the manuscript. It was therefore proposed (though Chatham House rules prevent me from crediting the proposer) that this expert opinion should be added to the written review. It should not form any part of the decision to publish but inclusion of this additional information – on the significance of the study and the audiences that would be most interested in it, for example – would provide a valuable additional layer of curation and filtering for the reader.

This proposal may be tricky to implement because the editors of mega-journals already have enough trouble getting some reviewers to stick to the editorial standard of ‘objective’ peer review. But it does not seem impossible to add a separate comment box to the review form to ask for the reviewer’s opinion – as a service to the reader – while making it clear that this will have no impact on the publishing decision. To me this is only a small additional ask of the reviewer, but one whose value is obvious. Such a move would also be a further barrier to the rise of the so-called OA predatory journals.

And finally… it would be remiss of me not to mention DORA, the San Francisco Declaration on Research Assessment, which is endeavouring to wean the research community from the nefarious effects of journal impact factors. While the meeting was focused on peer review of research papers, peer review is also an important component of decisions about hiring, promotion and grant funding. The new steering group of DORA, of which I am now chair, were grateful for the opportunity to announce the reinvigoration of the initiative and to discuss how DORA might help the community to develop more robust methods for research and researcher assessment.

And that’s it. Of course, you should feel free to offer peer review in the comments…
