Regulars at this blog will be familiar with the dim view that I have of impact factors, in particular their mis-appropriation for the evaluation of individual researchers and their work. I have argued for their elimination, in part because they act as a brake on the roll-out of open access publishing but mostly because of the corrosive effect they have on science and scientists.
I came across a particularly dispiriting example of this recently when I was asked by a well-known university in North America to help assess the promotion application of one of their junior faculty. This was someone whose work I knew — and thought well of — so I was happy to agree. However, when the paperwork arrived I was disappointed to read the following statement in the description of their evaluation procedures:
“Some faculty prefer to publish less frequently and publish in higher impact journals. For this reason, the Adjudicating Committee will consider the quality of the journals in which the Candidate has published and give greater weight to papers published in first rate journals.”
Which means, of course, that they put significant weight on impact factors when assessing their staff. Given the position I had developed in public (and at some length), I felt that this would make it difficult for me to participate. I wrote to the institution to express my reservations:
“…I think basing a judgement on the name or impact factor of the journal rather than the work that the scientist in question has reported is profoundly misguided. I am therefore not willing to participate in an assessment mechanism that perpetuates the corrosive effects of assessing individuals by considering what journals they have published in. I would like to be able to provide support for Dr X’s application but feel I can only do so if I can have the assurance of your head of department that the Committee will work under amended criteria and seek to evaluate the applicant’s science, rather than placing undue weight on where he has published.”
The reply was curt — they respected my decision to decline. And that was it.
I feel bad that I was unable to participate. I certainly wouldn’t want my actions to harm the career opportunities of another, but I could no longer bring myself to play the game. Others may feel differently. It was frustrating that the university in question did not want to talk about it.
But perhaps things are about to take a turn for the better? Today sees the publication of the San Francisco Declaration on Research Assessment, a document initiated by the American Society for Cell Biology (ASCB) and pulled together with a group of editors and publishers.
The declaration, which has already been signed by over 75 institutions and 150 senior figures in science and scientific publishing, specifically addresses the problem of evaluating the output of scientific research, highlights the mis-use of impact factors as the central problem in this process, and explicitly disavows their use. I can hardly believe it. This is the research community, in its broadest sense, taking proper responsibility for how we conduct our affairs. I sincerely hope the declaration becomes a landmark document.
All signatories, whether they be funding agencies, institutions, publishers, organisations that supply metrics or individual researchers, commit themselves to avoiding the use of impact factors as a measure of the quality of published work and to finding alternative and transparent means of assessment that are fit for purpose.
The declaration has 18 recommendations — targeted at the different constituencies. The first one establishes its over-riding objective:
“Do not use journal-based metrics, such as Journal Impact Factors, as a surrogate measure of the quality of individual research articles, to assess an individual scientist’s contributions, or in hiring, promotion, or funding decisions.”
The remainder go into more detail about what each of the different players in the business of science might do to escape the deadening traction of impact factors and develop fairer and more accurate processes of assessment. By no means does this spell the end of ardent competition between scientists for resources and glory. But it might just be a step towards means of evaluation that are not — how shall I put it? — statistically illiterate.
I urge you to download this document (available as a PDF), read it and circulate it to your colleagues, your peers, your superiors and those junior to you. Tell everyone.
And of course, you should sign it.
Update 17th May, 18:28 — I have been discussing my decision — mentioned above — not to participate in the review of a promotion candidate over at Drug Monkey’s blog. He is very critical of my stance and I think may have a point (see his comment thread for details). As a result, while I have not changed my view of the reliance that the selection procedure at the institution involved places on the journal names, I emailed them this morning to offer my services as a reviewer (their deadline has not yet passed). I also pointed out this blogpost and Drug Monkey’s reply by way of explanation but also with a view to pursuing a discussion about their selection process. If they take me up on my offer, I think I can provide a review and incorporate into it my concerns about the implicit reliance on journal impact factors.