It’s that time of year when all clear-thinking people die a little inside: the latest set of journal impact factors has just been released.
Although there was an initial flurry of activity on Twitter last week when the 2015 Journal Citation Reports* were published by Thomson Reuters, it had died down by the weekend. You might be forgiven for thinking that the short-lived burst of interest means that the obsession with this damaging metric is on the wane. But this is just the calm before the storm. Soon enough there will be wave upon wave of adverts and emails from journals trumpeting their brand new impact factors all the way to the ridiculous third decimal place. So now is the time to act – and there is something very simple that we can all do.
For journals, promotion of the impact factor makes a kind of sense since the number – a statistically dubious calculation of the mean number of citations that their papers have accumulated in the previous two years – provides an indicator of the average performance of the journal. It’s just good business: higher impact factors attract authors and readers.
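The calculation itself is straightforward. As a toy sketch (the numbers below are made up for illustration, not real journal data), a journal's impact factor for a given year is the citations received that year to items from the previous two years, divided by the number of citable items from those two years:

```python
# Toy sketch of the impact factor formula (hypothetical counts only).
# IF for year Y = citations received in Y to items published in Y-1 and Y-2,
# divided by the number of citable items published in Y-1 and Y-2.

citations_in_2014_to_2012_2013_papers = 4200  # hypothetical
citable_items_2012_2013 = 420                 # hypothetical

impact_factor = citations_in_2014_to_2012_2013_papers / citable_items_2012_2013
print(f"{impact_factor:.3f}")  # reported to that ridiculous third decimal place
```

Note that this is a mean – and, as we shall see, a mean of a very skewed distribution.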
But the invidious effects of the impact factor on the business of science are well-known and widely acknowledged. Its problems have been recounted in detail on this blog and elsewhere. I can particularly recommend Steve Royle’s recent dissection of the statistical deficiencies of this mis-measure of research.
There is no shortage of critiques but the impact factor has burrowed deep into the soul of science and is proving hard to shift. That was a recurrent theme of the recent Royal Society meeting on the Future of Scholarly Scientific Communication which, over four days, repeatedly circled back to the mis-application of impact factors as the perverse incentive that is at the root of problems with the evaluation of science and scientists, with reproducibility, with scientific fraud, and with the speed and cost of publishing research results. I touched on some of these issues in a recent blogpost about the meeting (you can listen to recordings of the sessions or read a summary).
The Royal Society meeting might have considered the impact factor problem from all angles but discovered once again – unfortunately – that there are no revolutionary solutions to be had.
The San Francisco Declaration on Research Assessment (DORA) and the Leiden Manifesto are commendable steps in the right direction. Both are critical of the mis-use of impact factors and foster the adoption of alternative processes for assessment. But they are just steps.
That being said, steps are important. Especially so if the journey seems arduous.
Another important step was made shortly after the Royal Society meeting by the EMBO Journal and is one that gives us all an opportunity to act. Bernd Pulverer, chief editor of EMBO J., announced that the journal will from now on publish its annual citation distributions, which comprise the data on which the impact factor is based. This may appear to be merely a technical development but it marks an important move towards transparency that should help to dethrone the impact factor.
The citation distribution for EMBO J. is highly skewed. It is dominated by a small number of papers that attract lots of citations and a large number that garner very few. The journal publishes many papers that attract only 0, 1 or 2 citations in a year and a few that have more than 40. This is not unusual – almost all journals will have similarly skewed distributions – but what it makes clear is the huge variation in citations that the papers in any given journal attract. And yet all will be ‘credited’ with the impact factor of the journal – around 10 in the case of EMBO J.
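The mismatch between the mean and the typical paper is easy to see with numbers. Here is a toy illustration (synthetic citation counts, not EMBO J.'s actual distribution): a handful of highly cited papers drags the mean – which is what the impact factor reports – well above what a typical paper achieves.

```python
# Synthetic, skewed citation distribution for 100 hypothetical papers:
# most attract 0-2 citations, a few attract many.
citations = [0] * 30 + [1] * 25 + [2] * 20 + [5] * 15 + [20] * 7 + [60] * 3

mean = sum(citations) / len(citations)          # what an IF-style metric reports
median = sorted(citations)[len(citations) // 2]  # what a typical paper gets

print(f"mean (IF-style): {mean:.1f}")  # pulled up by the long tail
print(f"median paper:    {median}")
```

Here the mean comes out at 4.6 while the median paper has just 1 citation – yet every paper in this hypothetical journal would be ‘credited’ with the 4.6.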
By publishing these distributions, the EMBO Journal is being commendably transparent about citations to its papers. It is a useful reminder that behind the simplicity of reducing journal performance to a single number lies an enormous spread in the citations attracted by individual pieces of work. As Steve Royle’s excellent analysis reveals, the IF is a poor discriminator between journals and a dreadful one for papers. Publishing citation distributions therefore directs the attention of anyone who cares about doing evaluation properly back where it belongs: to the work itself. The practice ties in nicely with articles 4 and 5 of the Leiden Manifesto.
So what can you do? Simple: if in the next few weeks and months you come across an advert or email bragging about this or that journal’s impact factor, please contact them to ask why they are not showing the data on which the impact factor is based. Ask them why they are not following the example set by the EMBO Journal. Ask them why they think it is appropriate to reduce their journal to a single number, when they could be transparent about the full range of citations that their papers attract. Ask them why they are not showing the data that they rightly insist authors provide to back up the scientific claims in the papers they publish. Ask them why they won’t show the broader picture of journal performance. Ask them to help address the problem of perverse incentives in scientific publishing.
*The title is somewhat confusing since the 2015 JCR contains the impact factors calculated for 2014.