Doug Kell, chief executive of the BBSRC, published an enormous review article in 2009 on iron chelation and disease. The review had 2,469 references. (D. B. Kell BMC Med. Genom. 2, 2; 2009). I’m not sure what the record for a single article is, but that is certainly a large number of references to have read and digested for a review.
It is not surprising, therefore, to find Doug Kell speaking up in favour of review articles in a brief letter in this week’s Nature. He highlights a comment in a recent news piece in Nature which was critical of a new bibliometric tool because it included review articles:
Review articles, which may not add much to the research, count the same as original research papers, which contribute a great deal.
Well, you can understand what they are saying. A typical review article may be a useful round-up, but it does not usually report new knowledge. Kell’s mega-review is in a different league, and it is not surprising that he should defend the role of the review article in research. He points out that reviews turn facts into understanding:
A research paper usually provides just one or two new facts, whereas reviews synthesize our understanding more broadly and make it more concrete… some reviews summarize thousands of papers.
Open Access policies (at least those of the MRC and Wellcome) also seem to regard review articles as less valuable than original research articles. While MRC- and Wellcome-funded authors must deposit all primary research articles in PubMed Central within six months of publication, they are not obliged to do the same for review articles. I think such a requirement might cause some problems, as reviews are often specifically commissioned. But I wonder whether we will at some point want to extend the OA umbrella to review articles.
I completely agree with Kell’s response: the best review articles can add much value to the literature, just as the worst research articles can add none.
However, I wonder if the funding councils have another, sensible motive behind only considering primary research articles. They want to compare apples with apples. Review articles gather disproportionately high numbers of citations, and given the limitations of our ability to compare researchers, citation numbers are (not always explicitly) used to compare researcher outputs.
Well, yeah, using citations to compare researcher outputs is bent, we know that.
I’m with Kell. A good review is worth a hundred original papers, at least from the point of view of the *consumer*. How do you get up to speed on a particular field? Not by reading the primary research literature; that’s a fact. Reviews are important yet undervalued, I feel.
Here’s an interesting blog post by Byron Jennings at TRIUMF, posted the day after your comment:
http://www.quantumdiaries.org/2012/10/05/grants-and-the-scientific-method/
I particularly liked his quote from Tukey: “Far better an approximate answer to the right question, which is often vague, than an exact answer to the wrong question, which can always be made precise.”
Mike – I don’t think that the Research Councils’ OA policies are driven by their research evaluation programs. I think they just prioritise primary over secondary literature. Even those labels seem like a value judgement.
Richard – you are right, of course. When helping students who are looking for “articles on XYZ”, I would try to get a feel for how much they already knew about XYZ, and suggest review articles or textbooks if they were just starting out in the field.
But, being cynical, I wonder about the motivations of authors and publishers in producing review articles. Is it always to synthesise knowledge in interesting new ways? Sometimes I suspect it is just to keep up an output of papers when no results are flowing, or to keep your name in the literature and garner some useful citations for your h-index. For publishers, reviews can be a cash cow, or a way to bump up a journal’s impact factor.