(This is a guest blog I wrote for the Research Information Network.)
I’m a fan of peer review.
There, I’ve said it. And I’m not saying it in the way that Sir Winston Churchill famously spoke of democracy; ‘the worst form of government except for all those others that have been tried.’ I’m also not talking as one with no experience of the peer review process, nor one with uniformly good (or bad) experience–I’ve had papers accepted without hesitation, I’ve had others improved substantially by the review process, and I’ve had a manuscript bounced around for two years until we found a sympathetic editor and reviewers who understood just what the bloody hell we were talking about.
I suspect that my experience matches that of the vast majority of jobbing scientists. We don’t have particular axes to grind, we just want to get our stuff published in as ‘good’ a journal as possible (and there’s a whole other can of fish to worry about) and we want to move on to the next experiment. Truth be known, we’d also like to see what our colleagues and, well, peers make of our work, and maybe even make constructive comments before the entire world gets to see it.
I’m also not a football fan of peer review. I won’t support this system unthinkingly, against all comers, waving my blue and white scarf above my head and throwing rolls of toilet paper at the opposition (actually, that attitude seems to characterize most opponents of peer review, but more on them in a bit). No; I recognize there are problems, and I’m certain it could be improved. Just don’t ask me how: I don’t know, for example, whether single-blind or double-blind review, or complete openness, would improve matters or make them worse, all things considered.
The thing is, peer review has been getting a bit of stick recently. Medical journals, especially, seem to get very worked up about it. The matter of Medical Hypotheses is another case in point. The Editor, Bruce G. Charlton, is embroiled in a fight with Elsevier, the publisher. Elsevier wants to make it peer reviewed, and Charlton thinks that will destroy the spirit of Medical Hypotheses. As if one editor is going to be in any way ‘better’ at assessing a manuscript’s ‘worth’ or ‘rightness’ (not newsworthiness–that’s different entirely and it’s something the crew at, say, Nature do very well) than any three peers–and let’s be explicit here, in this sense ‘peers’ means ‘experts in their field’, right? Anyway, I’m not overly concerned about the pros and cons of that case (except to point out that every professional in the field knows exactly how much worth to place on anything published there. It’s only the press–and the naif–that get confused about such things): rather I’d like to look (briefly, because it’s far too painful to spend much time on) at some of the comments on the Nature news article.
First, the repeated assertion that Nature, by virtue of having full-time editors, does not do peer review is patently ludicrous. How anyone who practises science could hold that opinion, moreover repeat it in a very public forum, is beyond me. At least the commenter had the grace to admit he was wrong, but this demonstrates the sheer level of misinformation and ignorance surrounding the entire issue. How can we have a reasonable debate when even those involved don’t know what they are talking about?
Second, we have the frothing-at-the-mouth prophets:
Peer Review should be universally rejected by all scientists and researchers as the thought control experiment that it is. Science is not Consensus! Truth cannot be discovered by vote no mater how smart the electorate thinks it is.
Right. There is so much wrong in those three little sentences that I really don’t know where to begin. Let’s just say that it’s another exemplar of people Just Not Getting It.
It gets better. It always does:
Peer Review as a concept, is no more than an easy way to influence, suppress, and control the direction and funding of scientific research. It is no less than tyranny and must be rejected as such.
Uh, OK, keep taking the tablets–actually no, get some different ones because those aren’t working.
Please, just one more?
Both Socrates and Galileo were “Peer Reviewed”. We all know how well that worked out.
Ah here we are. We’re dealing with someone who has had an obviously quite brilliant Idea that nobody will publish, let alone listen to, and the problem is not the Idea but peer review! If I had a quid for every loon who emailed me his theory on how everybody is wrong and how he (invariably a ‘he’; ‘she’s are far too sensible) has found the secret to life the universe and 42, only he can’t publish it because of the tyranny of peer review, and look how nobody believed Socrates or Galileo or Einstein, oh God especially poor old Einstein; well, I’d have fifteen pounds and sixty pence (that last one was a particularly sad and tragic affair).
Look, people: we’ve moved on from arresting or murdering people for having wacky scientific ideas–at least in the West. There is no conspiracy, no shadowy cabal that stops you publishing anything (and indeed, the internet makes publishing ridiculously easy). It’s far more likely, all things considered, that your Idea is so much dingo juice and you’re suffering a severe case of sour grapes. If your Idea is, actually, good; then Time will prove you right. It always does. But my money is on you being a wacko.
Even the company I work for is not trying to replace the initial round of peer review. Yes, our Chairman jumpstarted the Open Access revolution (and you’ll see why I was keen to establish my own bona fides, above), and yes, we’re really interested in what I’m calling post-publication peer review–assessing the likely impact and importance of the scientific output soon after the point of publication–but even we recognize the value of peer review as it stands today (notwithstanding arguments about anonymity and the like).
Ah! A voice of sanity and reason:
But people whose ideas, popular or not, that are backed up by sloppy research, or no research at all, should not be published until they can come up with the proper evidence supporting their claims. Coming up with adequate supporting evidence is another driving force behind science and ensures its credibility.
The point of peer review, actually, is neither to suppress nor promote good, bad, wacky, conventional, nuclear or world-changing Ideas. The major question that peer review is designed to answer, and is best at answering, is, “Is it done right?” It’s not some vast conspiracy to keep ideas down, nor to deny lunatics a forum or grant money. It’s there to help workaday scientists (some of whom will have brilliant, paradigm-shifting Ideas) do their research, without having to wade through a Stygian morass of ill thought-out crap.
We have to know our limitations, of course. Peer review suffers terribly from poison-pen reviewers. A field I worked in was almost destroyed because one PI kept trashing the community’s papers in review stage (she turned out to be mentally ill; that, also, is a story for another day). But that’s no reason to throw the F1 out with the autoclave bags. We need to identify and fix the problems with peer review, not destroy it entirely.
I need to address one last misconception. Peer review, done properly, might guarantee that work is done correctly and to the best of our ability and best intentions, but it will not tell you if a particular finding is right–that’s the job of other experimenters everywhere: to repeat the experiments and to build on them. Indeed, a friend of mine has been known to say, in public even, that most things in Nature are wrong. (And he should know–he’s a Nature editor.) He’s right, of course–everything published will be superseded. But the point–and the point we’re in danger of losing sight of, to our great detriment as jobbing scientists–is that peer review done even half-arsedly cuts out a whole pile of junk and lets us get on with the real business of Science: that of finding shit out.
Bravo sir. An excellently argued and well constructed piece. I get so frustrated with the anti-peer review wackaloons that I avoided the comments section of that News article.
I have noticed that the minute you start frothing at the mouth, it’s because you’re either having a fit, or because you are a nutcase with an agenda. Be that religion, politics, or peer review. You can’t argue with nutters. Or you can, but it’s pointless. What you can do, though, as you do very well here, is relentlessly present the evidence and ask them to judge. When they can’t or won’t, they are exposed as the cocknozzles they are and are thus trumped.
Bravo.
and this:
…without having to wade through a Stygian morass of ill thought-out crap.
Genius. I am nicking it for the next “grant” proposal I have to read.
Thank you, Ian. I’m nicking ‘cocknozzle’ in return.
“You can’t argue with nutters. Or you can, but it’s pointless.”
But Ian, you’re talking about a large chunk of my life…!
I try not to argue with the anti-vaccine or HIV denial types for precisely the same reasons, but every once in a while I slip and get involved. I should know better by now, really. An anonymous quote I found somewhere online, and which makes the same point Ian does, is:
“Don’t wrestle with a pig. You both get dirty and the pig enjoys it.”
Austin, I know what you mean. I also don’t know how you keep up the relentless enthusiasm for at least trying to enlighten the unenlightened. I find it so dispiriting, and more to the point, I have a very short fuse and I start to lose my temper too quickly. That can never end well because it either becomes a screaming match (pointless, unless you’re in the North End stands at Vicarage Road), or you immediately give away any moral high ground.
It’s worse than that, of course.
“Don’t wrestle with a pig. You both get dirty and the pig enjoys it.”
But sometimes you just gotta stand up and hit somebody.
I have to say I whole-heartedly agree with peer review. I agree, I’m not a “football fan” but I see the state of papers that do get published and sometimes they scare me.
I’d personally like to see Peer Review be used on a wider scale, and possibly extended to suppliers of products (meaning they get a read before it’s published, but the author should have complete veto power over their comments). It is not uncommon for me to find a paper that completely and utterly mis-used a product (as opposed to finding a new application for a product, which is great), such that any poor person trying to replicate the data will be doomed.
Either that or some way of proper citation of products would be helpful. Having had to “replicate” studies myself in order to forge into a new area of research, I can’t explain how frustrating it is to realize that I have no clue WHICH catalog number they purchased from Nunc, out of 25 options.
Of course, in the end, I guess it doesn’t matter because inevitably some products get discontinued.
Bravo, Dr. Grant! Well written. However, I would like to differ just a wee bit about the purpose of peer review. You said:
The major question that peer review is designed to answer, and is best at answering, is, “Is it done right?”
I submit that it is not the purpose of peer review to answer anything. Peer review exists (or should exist) to raise the question, “Is it done right?” It is up to the authors/investigators to successfully defend their hypothesis/project/study and answer that question. And it is implied that they should be able to do that again, and again, any time a new question arises. If the science is sound, this should pose no problem.
Interesting take, Kausik. I can’t say I agree. I think our jobbing scientist wants to know in advance of reading the hypothesis/project/study that technically, everything is done right and that the appropriate controls have been done. That’s what I meant.
Kyrsten, you’re spot on about the frustrations with discovering exactly what has been used. Not sure if what you propose is workable (although some pioneering manuscript services apparently help the author get that kind of thing right, I think it’s more a matter of accurate note-taking).
If you can only repeat an experiment using one particular version of a reagent that is available in twenty different formulations in the Sigma catalogue, then it can’t have been a very robust finding. Sometimes ambiguity is a good thing.
Well, when it turns out that different tissue culture plastic leaches different amounts of plasticizers into the medium, sometimes it can be important. At least you’re able to tell if the non-workingness of your experiment is down to user error rather than non-robustness.
The thing I don’t get with Medical Hypotheses (and the wingnut commenters Richard discusses) is WHY the cranks are so obsessed with it and its importance.
[Well, I do, of course, but it is the classic hole in their argue-rants]
If peer review, and scientific journals, are “suppression, censorship and thought control”, as they tend to insist, why not just self-publish on the Web?
After all, 35-odd years ago when Horrobin set up Medical Hypotheses, there was no internet. Now there is.
So who needs journals, owned by corrupt New World Order MegaCorps etc etc.?
*Unless,* of course, the cranks crave the appearance of having had the peer review they so loudly dismiss as corrupt.
– And hence the veneer of respectability.
– And hence the ability, as Richard alludes to, to use this appearance and veneer to gull the unwary.
It seems to me (and I’m quite prepared to be wrong) that the main problem with peer review (other than the extreme cases and nutters) is that some reviewers are not very good: either they don’t really read the paper properly because they are too busy, they are almost absurdly critical, or they write the review in unnecessarily aggressive language. Of course many reviewers are terrific and all work extremely hard – but I’m not talking about the good reviews.
There doesn’t seem to be any real attempt to train post-docs in peer review – one day you make it to PI status and start reviewing other people’s work – and I wonder whether some training would be a good idea. If your ideas about how to approach peer review come only from your experience of having your own work reviewed, we are probably perpetuating bad practices.
I think that if there was decent training in reviewing, Sam, we might solve a lot of the problems (just like, inter alia, we’ve discussed training in writing and presenting).
That’s a lovely logical flow, Austin. You might be right.
I notice nobody has yet picked up on my sour grapes argument.
Samantha
I agree many reviewers are well off the boil. But not every lab is like yours.
I’m a postdoc who was taught to review when I was a PhD student in a completely different lab. I spent most of yesterday morning helping a PhD student review their first paper at the behest of my boss.
It’s not a formal process but in my n=2 experience there is a culture in some cases of a concerted effort to train the PhD students in how to review papers. That way the postdocs already know.
Austin- That makes a lot of sense.
Richard
You bastard. I just went and looked up dingo juice in the hope it was an amusing piece of colloquial Australiana I was unaware of.
Dingo Juice – As seen on TV
’nuff said
Nat, that’s fantastic, but I reckon it should still be formalized.
Um, the tutoring thing that is, not the dingo juice.
I think journals have a great responsibility to explain clearly to reviewers what is expected of them – beyond simply presenting boxes to be ticked and multiple choice score sheets to be filled.
Not all of them shoulder this burden, but Mark McPeek (EiC of the American Naturalist) has gone to the trouble of writing a guide for reviewers on his personal blog: So you want to be a great reviewer. Well worth a read for people from all fields.
Richard – you are so right in stating that peer review is not a vote, nor able to filter out everything that is wrong. However, it is not solely about answering the question “is it done right?” either.
As we review, we also check whether credit is awarded to others properly and justly, whether findings, methods, results and discussions are contextualised properly, and how the main argument of the paper fits within the available corpus of current literature. In doing so, we compare it with the status quo – which is, of course, a good thing, since the status quo is the status quo for many reasons.
The big paradigm-shifting ideas and the incremental steps forward have a very different relationship to the status quo. Peer review is perfectly able to deal with the incremental steps, but evaluating something that is truly new is very difficult when your task is to compare it to the current state of affairs and judge whether it is a worthwhile contribution.
Thanks for that, Mike.
Bart, ‘doing it write’ surely includes those things, once you decide science is more than the sum of its experiments?
I disagree that it’s comparing with the status quo, except inasmuch as the status quo is how we do things, in a Popperian paradigm.
Meh. Anyone who disagrees with the Popperian approach is a total Kuhn.
Nathaniel, I wasn’t trying to suggest that my current lab does nothing to train people, but there is little systematic attempt to train post-docs in reviewing. Some PIs do – mine is pretty good, and it sounds like your experience has been great – but I know many people who have just been thrown in at the deep end, and it seems to be left very much down to individual PIs whether they train people or not.
Mike, I agree about journals making it clear what is expected of reviewers, and there is also a role for editors in rejecting the worst reviews and perhaps deciding not to use particular reviewers again – probably many already do this. I’d be interested to hear what some of the editors here think: do they reject many reviews?
I love the standard of puns you get on NN.
Sam, you’re right that there’s no formal training. I think that would be far more useful than (say) learning LaTeX :rolls eyes:
That’s a great question, and I’d love to hear Maxine, Henry et al say something to it too.
For what it’s worth, most of us at Nature and the Nature RJs try to do our bit to train the next generation in peer review as well.
We regularly ask senior PIs who are too busy to review a given paper for us to recommend recently minted PhDs they’d trust to do the job in their stead. And I often find that green referees do a more thorough job than many of our old hands.
This is probably partly because they have more time. And for many it represents a sort of rite of passage to be asked to review for the GOJ (Grand Old Journal). But mostly, I think the young are simply more skeptical of the claims of their peers than the old.
The best part is that when they are unsure of what we expect from them, they’ll usually ask us to clarify (which we’re more than happy to do). And they’re less likely to be offended by our advice on the rare occasions that they miss the mark (usually by paying too much attention to detail).
Of course, this isn’t a particularly efficient or high-throughput means of training scientists in the gentle art of peer-review… but it’s something.
Oh, and as for the gender of crazy people, we at Nature Physics get our fair share of she-cranks too. Obsessive lunacy isn’t an invariably male trait… just mostly male.
That’s great, Ed; and an interesting bit of psychology, too.
Yes, we at Nature receive nutty communications from both genders, also.
Ha! Nice one, Maxine.
I think Samantha makes an interesting point. How many PIs give manuscripts to review to their postdocs and grad students? Lots, I think. And I believe that experience in having had papers reviewed (and reading and addressing the comments) and in having reviewed papers oneself, is key in doing a good job. It doesn’t guarantee it of course, but it helps.
That said, reviewing papers ain’t rocket science, folks, but Samantha’s right in that your “average” postdoc or student hasn’t the experience to do it well. Not hard to learn, but it does need to be learned. And, as she notes, like so many other things (*cough* presentation skills *cough*), there’s no “real” training for it.
Also – agree with Ian – well done RPG for using the term “Stygian morass”.
Hi Richard, thanks for writing this – I was hoping one of the NN establishment might wade in with such a great rebuttal of the comments.
I came across the news story quite late and left a naive comment of my own (before I read your blog post). I’m not trained in peer review processes so I understand a tiny part of the frustration some people feel with it – it is rather mysterious to the uninitiated. I agree with everything you say in its defence, but maybe someone experienced can help me out with a quick education – are peer review comments always confidential? If so, what are the main arguments for not allowing them to be public? With all the supp info available online now, could comments associated with returned/published drafts of a paper be added? Maybe if people could see that they don’t run along the lines of ‘yeah, this guy’s great, I know him from college – publish’ they wouldn’t be so quick to cry conspiracy.
On NN puns, it is often a humbling experience to read these comments. Looked up Popperian and Kuhn (I’m really revealing my ignorance now) – thank you. Also looked up dingo juice – some things I’m not ashamed I didn’t know about!
You’re welcome, Leah. I was surprised that nobody before you had put the muppets right on that thread–I guess they were all in shock?
Peer review comments are not always confidential; indeed, people like EMBO (um, help me out–EMBO J? EMBO Reports?) are publishing (some of the) reviewer comments online once an MS is accepted.
And don’t worry about the dingo juice. We’re all friends here. Well, except for Austin; he’s just a curmudgeon.
EMBO Journal has started publishing peer reviews as an experiment – see here. So far, the scheme is voluntary.
Biology Direct has been doing this for longer, sometimes with quite interesting results, which I’ve written about before.
Nice one, SCurry–thanks.
Richard
It is formalised in medicine and epidemiology. In addition to the more informal tutoring sessions I was talking about above, I’ve both sat and taught papers in how to review papers.
These are usually requirements for public health degrees in general. Typically they are labelled Research Methods papers but they are really about critical reading skills and reviewing.
Awesome, Nat. That reminds me that a colleague in Sydney taught critical thinking in one of his courses (and a great example I should blog about), but I don’t know if he covered reviewing specifically.
Thanks for the clarifications about confidentiality. For anyone else wondering like I was, Stephen’s link has loads more. Cheers guys.
Richard
Mike (Apr 16): The status quo I was attempting to refer to can be interpreted as the “normal science” Kuhn refers to… a set of protocols, theories and facts we all agree to work with and to use as standards.
I thought we didn’t deal with facts–just things that haven’t been disproved yet.
Anyway, people publish new theories and protocols all the time. So that doesn’t work.
“Fact” would be the temporary status of a claim until disproved… The new theories and protocols, as well as new facts, would still be part of “normal science” if they (snugly) fit within the worldview/paradigm/existing corpus of theories, protocols and facts. The less snugly, the less “normal science”.
Anyway… while a discussion on Popper, Kuhn or even Latour’s take on science and how scientists come to agree with one another about stuff is perhaps best reserved for elsewhere…
Bart, I think you’re off base there.
According to the American Physical Society, at least, science “by its very nature […] questions prevailing ideas.”
The crux is in what is being questioned. If the questioning (or the proposed novelty) is not so very big, it may fit within the understanding of the world as we know it. It does most of the time. Sometimes this is not the case, because a fundamental understanding of the world as we know it is being questioned or a novelty is proposed which does not (easily) fit within our understanding. In that case (in Kuhnian terms), an anomaly has been produced.
So the APS is right, of course. If it isn’t about questioning (or novelty), it wouldn’t be science. But how new or different the questions or propositions are is the issue.