They’re crap, aren’t they? Seriously.
Jenny writes that scientists need metrics that reward effort as well as luck. While that’s true, we also need metrics that aren’t as capricious and susceptible to gaming. At the day job, Bob Grant (no relation) tells us that a single paper in that well-known publication Acta Crystallographica – Section A gave it the second highest impact factor in the “science” category.
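For the record, the two-year impact factor is just an arithmetic mean: citations received this year to a journal’s items from the previous two years, divided by the number of citable items published in those two years. A mean has no defence against outliers, which is exactly how one runaway paper can carry a whole journal. A toy sketch (with entirely made-up numbers, not real Acta Cryst data):

```python
def impact_factor(citations_per_item):
    """Mean citations per citable item -- i.e. the two-year IF."""
    return sum(citations_per_item) / len(citations_per_item)

# A journal whose 199 papers each pick up a single citation...
typical = [1] * 199
print(impact_factor(typical))            # 1.0

# ...plus one mega-cited hit, and the 'average' paper is suddenly
# nothing like any actual paper in the journal.
with_outlier = typical + [10_000]
print(impact_factor(with_outlier))       # 50.995
```

Which is why a median, or per-article metrics, would tell you rather more than the journal-level mean ever can.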
I signed up at ResearcherID yesterday, for gits and shiggles. I was stunned to find that five of my papers have an average of nearly 100 citations, mostly due to some antibody mapping work I did in my thesis (and my name is incorrect on the author list, natch). The others (and in my opinion some are much ‘better’ work) are barely cited at all.
And PLoS ONE, dear old PLoS ONE, excited because it has an Impact Factor of more than four. Ridiculous.
In what sane universe does any of that make sense? Oh, the Thomson Reuters one, that’s right. Even Eugene Garfield warns against mis-using the impact factor, but nobody appears to be paying attention.
“You should never use the journal impact factor to evaluate research performance for an article or for an individual — that is a mortal sin.”
But, as I was forcibly reminded at a conference in Charleston last year, people have been whinging about the Impact Factor for thirty years. And still we’re stuck with it. Still people are using it. Whaddya gonna do about it?
Back at the day job I’m writing a paper on alternatives – well, on one in particular. And I know, I know, that isn’t the answer. I’m not sure what the answer is. I just know that the current system, as we’re all saying, is unfair; and it’ll take a concerted effort to change things.
If you want to change things, of course. Maybe you’re happy with the status quo. Maybe all this talk is so much hot air and we should simply deal with it. What do you think?