Nature has an interesting news feature this week on impact factors. Eugenie Samuel Reich’s article — part of a special supplement covering various aspects of the rather ill-defined notion of impact — explores whether publication in journals such as Nature or Science is a game-changer for scientific careers.
The widely-held assumption is that it is. And the stories from the young scientists interviewed by Reich*, almost all of whom had published in Nature or Science or Cell back in 2010, would appear to confirm that. Their papers in prestige journals won jobs or grants, or opened doors to clinical trials that had previously been shut. Or at least that’s what they assume or believe or feel; no-one can quite be sure because the rules of the game are unspoken and unwritten.
But the trouble is that these unofficial rules appear nigh on universal. I was certainly mindful of them when I embarked on my research career more than twenty years ago (as I mentioned in my contribution to this week’s Nature Podcast). Regular visitors to this blog will be aware that since then I have modified my views and now see the excessive influence of impact factors as a kind of addiction for which the scientific community needs to find a cure.
It can be a hard argument to pitch because the culture of dependence is so embedded. The lure of high-impact journals is strong and underpinned by some rational motivations. As noted by Finnish scientist Annele Virtanen, one of Reich’s interviewees, the competition for publication in Nature or Science acts as a spur for scientists to be ambitious in their research. No bad thing, of course, but the trouble sets in when rewards are tied too closely to the particular achievement of a Nature or Science paper.
While it is certainly true that many of the papers published in these journals are of very high quality — and that on average they garner more citations than papers in journals focused on particular disciplines — it is too often forgotten that the impact factor not only disguises the very real variation in the performance of papers within any one title but also flatters them: because it is an arithmetic mean of a highly skewed citation distribution, it is hauled upwards by a handful of heavily cited papers and so overstates the performance of the typical paper.
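To make the arithmetic concrete, here is a toy sketch in Python using entirely invented citation counts (none of these numbers comes from Nature or any real journal); it simply shows how the impact-factor-style mean of a skewed distribution gets pulled up by a couple of big hitters while the median, which better reflects the typical paper, barely moves.

```python
import statistics

# Entirely made-up citation counts for a hypothetical journal's papers
# over a two-year window; the shape (many low, a few very high) is the point.
citations = [0, 1, 1, 2, 2, 3, 3, 4, 5, 6, 8, 10, 15, 40, 120, 260]

mean = statistics.mean(citations)      # the impact-factor-style average
median = statistics.median(citations)  # what a typical paper achieves

print(f"IF-style mean: {mean:.1f}")    # 30.0, hauled up by two outliers
print(f"median:        {median}")      # 4.5

# Most papers sit well below the headline number.
below = sum(c < mean for c in citations)
print(f"{below} of {len(citations)} papers are cited less than the mean")
```

Real distributions are larger and messier, of course, but the asymmetry is the same: a small minority of heavily cited papers does most of the work of propping up the headline figure.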
The scientific community too often overlooks the granularity of the data and thinks only in terms of impact factors. Thus everyone who gains entry to Nature or Science wins an impact factor prize — irrespective of the actual significance of their work; and of course, those who narrowly fail to make the grade (in an assessment process that is highly stochastic) lose out. Virtanen, a recent beneficiary of this system, says “I can’t see so many bad sides” — a common enough view. But would she prefer to trust her future prospects to the uncertainty of getting her next paper through the narrow doors of the very top journals, or would it be better to rely on a system that does a more rigorous and fairer job of assessing what she has actually done?
There are positive moves in this direction. Sandra Schmid, head of cell biology at the University of Texas Southwestern Medical Center, has recently taken steps to move away from over-reliance on impact factors in her hiring procedures. I am pushing for similar revisions at my own institution. These moves chime with the recent San Francisco Declaration on Research Assessment, which hopes to encourage all stakeholders in the scientific process — universities, funders, publishers and learned societies — to revise and improve their methods of assessment, specifically to eliminate the unhealthy lure of the impact factor. In the UK, the Wellcome Trust has long had a stated policy of not considering where applicants’ work is published when judging grant proposals, a policy now adopted by the UK Research Councils. But it is one thing to formulate a policy; quite another to make it work.
In the editorial accompanying this week’s supplement on impact, Nature restates its long-standing opposition to the misuse of impact factors in judging individuals or individual papers (a position that has been usefully repeated in other Nature-branded titles such as Nature Chemistry and Nature Materials). This is laudable but insufficient. It sits uneasily with the full-page adverts that appear annually with the announcement of yet another incremental rise in impact factor. This recent one, trumpeting Nature’s 2011 IF of 36, is accompanied by the strap-line ‘Energize your scientific career…’. What are researchers to make of that if not a repetition of the unspoken mantra that publication in top-tier journals is vital to real success in science?
The editorial also repeats the line that the obsession with impact factors is a problem for the scientific community to address. And that is true — it is a largely self-inflicted problem and it is primarily our responsibility to sort it out. But I find it odd that Nature appears to see itself as apart from that community, especially when its editors and reviewers are drawn from within it. I don’t think the dividing line is so easy to discern — especially given the supportive commentary of some Nature journal editors.
Where I do agree strongly with the editorial line is in the declared need “for research evaluators to be explicit about the methods they use to measure impact.” In this, Nature, and indeed all scholarly journals, can help — and at negligible cost. I call on them to publish all the data on which their impact factor calculations are based. Every year, when the new impact factor is released and advertised, please also publish the citation numbers and distributions on which it is based. This transparency will help to demystify the magical lure of that one number by revealing a truer picture of the performance of all the papers that contribute to it. It will reveal the variation in granular detail — the big hitters and the damp squibs. Comparison between journals would be enriched because the real overlap in their citation distributions — too often forgotten in the obsession with just one number — would be made evident.
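By way of illustration only, here is a minimal sketch of the kind of summary that could sit alongside each year’s announcement. The per-paper citation counts and the field names are invented for the example, not drawn from any journal’s real data.

```python
import statistics

# Invented per-paper citation counts for the impact-factor window;
# the format of the report is the point, not the numbers themselves.
citations = sorted([0, 0, 1, 1, 2, 2, 2, 3, 3, 4,
                    5, 5, 6, 7, 9, 12, 18, 30, 75, 210])

mean = statistics.mean(citations)                  # the single advertised number
q1, median, q3 = statistics.quantiles(citations, n=4)

print(f"papers counted:        {len(citations)}")
print(f"IF-style mean:         {mean:.2f}")
print(f"median:                {median}")
print(f"interquartile range:   {q1} to {q3}")
print(f"least / most cited:    {citations[0]} / {citations[-1]}")
print(f"papers below the mean: {sum(c < mean for c in citations)}")
```

Laid out like this, the spread within a journal and the overlap between journals become impossible to ignore.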
PLOS is already leading the way in making this type of information available. Nature could do a tangible and valuable service to the scientific community by a simple act of transparency. It could blow away some of the clouds that are presently obscuring our judgement.
Now that I come to think of this proposal, I can’t see any bad sides.
Update (20-10-2013, 15:50) — If I’d had the time I would have read all of the articles in Nature’s supplement before publishing this post and made a point of including a link to the piece by David Shotton on efforts to make citation data open.
*Update (22-10-2013, 15:33) — The original version of this post referred to the Nature author as Eugenie Samuel Reich, which is her full name; but she kindly informs me that her surname is simply Reich. The text has been modified to reflect this. Apologies for any confusion.