Pride and Prejudice and journal citation distributions

It is a truth universally acknowledged that a researcher in possession of interesting experimental results, must be in want of a journal with a high impact factor.

It is also true – and widely understood – that journal impact factors (JIFs) are unreliable indicators of the quality of individual research papers. And yet they are still routinely used for that purpose, despite years of critique, despite the San Francisco Declaration on Research Assessment (DORA), despite the Leiden Manifesto, and despite The Metric Tide report.

But today sees the arrival of a new initiative to challenge the misuse of JIFs in research assessment. I have joined forces with bibliometrician Vincent Larivière, and co-authors from PLOS, eLife, the Royal Society, EMBO Journal, Science, and Springer Nature, and together we have published a new paper, A simple proposal for the publication of journal citation distributions, on the bioRxiv preprint server.

The JIF, calculated each year as the mean number of citations received that year by papers a journal published in the previous two years, is the metric that will not go away. Its longevity has at least two sources. First, it is beloved of journal publishers (despite the criticisms often voiced by journal editors), who see it as a valuable tool in brand management. The good opinion of authors and readers is, quite reasonably, good for business. Second, the JIF is easily conflated with prestige in the minds of researchers and their institutional managers. Pride in our reputations matters to us, and for good reason – science is quintessentially a human endeavour. But that conflation confers on the JIF a seductive legitimacy in research assessment, giving rise to the well-known prejudices with regard to its influence on career progression.

Our proposal aims to bring some cool reason to this troubled situation. We are asking journals to publish the citation distributions that underlie the JIF (using the simple protocols detailed in the paper). The move is avowedly pragmatic: we recognise the reality of impact factors but, by facilitating the generation and publication of journal citation distributions, we aim to raise awareness of the broader picture that JIFs conceal. In doing so, we want to focus the attention of assessors on the merits of individual research papers.
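To see why a distribution tells you more than the mean, here is a minimal sketch in Python using invented citation counts (not data from the paper, and not the protocol it describes): the JIF-style mean is dragged upwards by a handful of highly cited papers, while the distribution itself shows that most papers receive far fewer citations.

```python
# Minimal sketch (hypothetical data): how a mean-based, JIF-like metric can
# conceal a highly skewed citation distribution.
from statistics import mean, median
from collections import Counter

# Invented citation counts for papers published in a journal's two-year
# JIF window -- for illustration only.
citations = [0, 0, 0, 1, 1, 1, 2, 2, 3, 3, 4, 5, 6, 8, 12, 15, 40, 95]

print(f"Mean (JIF-like): {mean(citations):.1f}")   # pulled up by two outliers
print(f"Median:          {median(citations)}")      # what a 'typical' paper gets

# The kind of citation distribution the proposal asks journals to publish:
# a simple frequency table of citations per paper.
distribution = Counter(citations)
for cites in sorted(distribution):
    print(f"{cites:3d} citations: {'#' * distribution[cites]}")
```

With these made-up numbers the mean is roughly ten times the median, which is exactly the sort of gap a published distribution makes visible at a glance.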

I have already laid out the reasons for publishing citation distributions in three previous posts, so I won't repeat the details here. In any case, the argument is summarised in our brief preprint, which I would very much like you to read.

There is nothing especially new or original in our approach – except, and this is something that gives me particular pleasure and stirs my expectations, that it is the product of a constructive collaboration with several well-known publishers. I hope their example will soon be followed by others.

We harbour no illusions about this paper quickly neutralising the distorting pull of JIFs on research assessment. Nevertheless, our proposal is simple, transparent and reasonable. It is a feasible step in the right direction, one that – with luck – will soon be universally acknowledged as such.


P.S. Our paper is a preprint and we would very much welcome critical comments and suggestions as to how it might be improved. Please comment at bioRxiv.

