On impact factors

They’re crap, aren’t they? Seriously.

Jenny writes that scientists need metrics that reward effort as well as luck. While that’s true, we also need metrics that aren’t capricious or as susceptible to gaming. At the day job, Bob Grant (no relation) tells us that a single paper in that well-known publication Acta Crystallographica – Section A gave it the second highest impact factor in the “science” category.

I signed up at ResearcherID yesterday, for gits and shiggles. I was stunned to find that five of my papers have an average of nearly 100 citations, mostly due to some antibody mapping work I did in my thesis (and my name is incorrect on the author list, natch). The others (and in my opinion some are much ‘better’ work) are barely cited at all.

And PLoS ONE, dear old PLoS ONE, is excited because it has an Impact Factor of more than four. Ridiculous.

In what sane universe does any of that make sense? Oh, the Thomson Reuters one, that’s right. Even Eugene Garfield warns against misusing the impact factor, but nobody appears to be paying attention.

“You should never use the journal impact factor to evaluate research performance for an article or for an individual — that is a mortal sin.”


But, as I was forcibly reminded at a conference in Charleston last year, people have been whinging about the Impact Factor for thirty years. And still we’re stuck with it. Still people are using it. Whaddya gonna do about it?

Back at the day job I’m writing a paper on alternatives (well, one in particular). And I know, I know that isn’t the answer. I’m not sure what the answer is. I just know that the current system, as we’re all saying, is unfair; and it’ll take a concerted effort to change things.
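One of the alternatives people keep bringing up is the Hirsch h-index. It’s simple enough to sketch in a few lines of Python — the citation counts below are purely illustrative (loosely echoing my own lopsided profile), not real data:

```python
def h_index(citations):
    """The largest h such that h papers each have at least h citations."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # this paper still clears the bar
        else:
            break  # sorted descending, so no later paper can
    return h

# Hypothetical profile: two heavily cited papers, three barely cited ones
print(h_index([100, 98, 3, 1, 0]))  # prints 3
```

Which neatly illustrates the problem it shares with the impact factor: a couple of runaway papers and a tail of ignored ones give you the same number as a steadily solid record.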

If you want to change things, of course. Maybe you’re happy with the status quo. Maybe all this talk is so much hot air and we should simply deal with it. What do you think?

About rpg

Scientist, poet, gadfly
This entry was posted in Literature, Rants.

23 Responses to On impact factors

  1. Stephen Curry says:

    Your Bob Grant doesn’t work at MIT by any chance…?

  2. Richard P. Grant says:

    Naw, he’s at the Philly office of The Scientist.

  3. Steve Roughley says:

Hope you are going to include the Hirsch number….
    (And obviously, of course, you will cite said paper)

  4. Bob O'Hara says:

I laid out some ideas for alternatives a couple of years ago.
I’ll read the Nature stuff later, when I have time.

  5. Richard P. Grant says:

    Thanks Steve, Bob. #crowd-sourcingmyliteraturesearch

  6. Austin Elliott says:

    Didn’t I see Bob O’H tweeting his h-index the other day?
    ‘Fess up, Bob!

  7. Richard P. Grant says:

Yeah, that’s what inspired me to check out the ResearcherID place.

  8. Nathaniel Marshall says:

    NHMRC in Australia has within the last year put out a position statement to the effect that they will no longer accept IF as a measure of research output quality.

  9. Darren Saunders says:

    The NHMRC statement is really only paying lip service to the idea… After all, they just released 80 pages of bibliometric analysis of the performance of Australian health research which relies heavily on journal impact factors. They still obviously think IF is a valid measure… bangs head on keyboard…

  10. Richard P. Grant says:

I had seen that, Nat, thanks for the reminder. I wonder, are they saying they won’t look at IF when awarding grants, but they will still use it to assess how well the money was spent?
    So that might be inconsistent, but this is the country that gives you 5 litre pickup trucks and the slowest motorways on the planet.

  11. Tom Webb says:

Nice post. Not quite sure what’s happening under the new government to the REF (Research Excellence Framework), due to assess research in UK Universities in 2013, but we were certainly given to believe very recently that, depressingly, the quality of the papers we submit as individuals will be judged largely, if not exclusively, on the IF of the journal in which they appear. That makes the editors of journals like Nature and Science among the most important figures in UK science.
    And I’m now off for my annual review, where I expect to be berated for publishing papers in low IF journals, which I have done for a variety of (I think) very valid reasons (largely based around the readership I’m trying to reach).

  12. Richard P. Grant says:

    Thanks Tom.
    I know there was a great deal of fuss about the REF, consultative exercises and whatnot, but essentially it was all fluff and they changed nothing.
    Good luck with the review. That’s a fantastic point about reaching readership.

  13. Åsa Karlström says:

    Interesting post Richard. It’d be interesting to see if impact factors could be changed, their impact that is.
    The whole thing with readership is one of those things I realised looking over the IF and “more niched” [can’t spell, meaning ‘focused?’] journals since the latter usually have slightly lower IF than the big journals with a broader audience…
I was surprised to see my ResearcherID citation profile too (more modest than 100 citations of course, but fun nevertheless. someone has read my papers! 🙂 that’s always a good feeling )

  14. Nathaniel Marshall says:

    You may be glad to know that the journal quality bands have been seen in action in the latest round (A* A B and C). The report you cite may simply be something that was commissioned before the new policy?
    I wonder what the more senior people on the committees that really make the decisions will talk about though? Still impact factors, I suspect. But I think a change is in the air.

  15. Richard P. Grant says:

    It’d be nice to have some of those senior people here on NN talking about that. But that’s a topic for another day, perhaps.

  16. Richard P. Grant says:

    Sorry, I missed Åsa’s comment. How did that happen?
Anyway, do you think anything based on citations is useful for ranking? h-index or whatever? Especially since my ResearcherID seems to be missing a whole bunch of citations.

  17. Brian Derby says:

Richard – you can edit your ResearcherID entry to include missing papers. I think it does not have memory of past institutions.
More amusingly, there is no check on the editing process to ensure that you are actually on the author list of the papers you claim! I discovered this by accident when trying to find out why my ResearcherID h-index was higher than the one on Web of Science (odd, because they use the same database): it turned out I was apparently an author on a Nature paper. I have removed this misattribution and reduced my citation score by 200 p.a. However, if people start to use ResearcherID to assess people (and you can access anyone’s citation data if they have registered), a bit of accidental data manipulation could do wonders for the score.

  18. Richard P. Grant says:

    I didn’t think it was institution-based in any meaningful way.
    Thomson Reuters are, as you know, part of the ORCID initiative. But the way that’s headed, I’m worried it won’t fix the problem you mention, because it’s opt-in.
    Every researcher should be ID-stamped at birth. Fact.

  19. Nicolas Fanget says:

    Microchips for all post-grads! This way the PI can also control for lab attendance.

  20. Richard P. Grant says:

    Shh, Nicolas. I’m already planning that for PIs, so that rats can know where they are when planning to skive off seminars, &c.

  21. Åsa Karlström says:

Richard> It’s funny you should ask (I’ve pondered it). It’s something about the metrics, I guess? That your importance/impact on the field can be measured. And we all want something to measure our efforts, since papers are one of those important things?
Of course, the whole thing counts on every field being interesting to lots of research groups, and on you being in the most important research group with the hottest research that everyone wants to build on. And that isn’t reality, as much… (veterinarian stuff? not so hot, but boy is it nice to have some research on it?!)
I guess it could show some kind of “acceptance by your peers” (maybe even of who can promote their research as hot and great), since I think it shows more of “who’s hot in the field” and “who is out there for people to know” right now. It’s not so much about the quality of the research per se, unless people equate “appeal” with quality, which I don’t really do, since I think you can do great-quality research in a subfield/subgenre/whathaveyou where people need the research as “more info” even if the grant givers don’t see a potential money maker.

  22. Richard P. Grant says:

    Well for sure, but the assumption is that citing a paper gives that paper some kind of validity, or quality.
    Whereas we could all be citing a paper to say DANGER WILL ROBINSON. THIS IS CRAP.

  23. Åsa Karlström says:

    Richard, yes… there is that 😉

Comments are closed.