In which numbers lie – except when they flatter us

Bibliometrics have been making me cross recently.

In the past month, I’ve stumbled across two instances where journal impact factors were being used in a grossly inappropriate way to assess the worth and quality of scientist colleagues. This exposure in turn has really hammered home the inanity of our profession’s obsession with measuring the immeasurable.

I don’t want to compromise anyone’s privacy, so let’s call the two people involved Timothy and Anna (not their real names). Timothy is an early-career researcher at another London university who went to speak to the person in charge of marshalling the troops for the upcoming Research Excellence Framework (REF), a national exercise used to assess the quality of a university’s research output. The better the research, the more money the university gets allocated in the future. So it’s an incredibly big deal that has grown into a monstrously complex bureaucratic nightmare spanning years and consuming many man-hours of effort. Timothy had a few great papers under his belt and was eager to make his contribution. The person he went to see, however, took a quick look at his CV and said that it didn’t appear that any of his papers were of a high enough quality. Timothy was surprised to hear this, and asked how she could tell just by scanning her eyes down a CV for approximately sixty seconds.

“The journals you’ve published these in,” she explained, “aren’t four-star. So I can tell you right now that we won’t be using your papers in our final REF return.”

Timothy was so shocked that he couldn’t think of anything to say to this; he just took his CV back and retreated, flushed with humiliation. Later though, when I ran into him on the street, he was starting to get angry. I offered to buy him a drink, over which he told me the whole story. What, he wanted to know, did “four-star” mean, anyway? Two of his papers were in the most prestigious and well-regarded specialist journal in his field. Was “four-star” really just code for Science, Nature, Cell or one of their high-impact sister journals? Could it really be that only papers published in these journals were worthy of note? I told him that similar assessments had been made about people at my university, and I’d seen on Twitter that such practices were widespread across the UK.

The next day, Timothy fired off an email to someone higher up at the university who was coordinating the REF, asking whether there was an official list of journals ranked by “star” and, if so, whether he could have a copy for reference. That person wrote back immediately, saying that there was no ranked list of journals, as HEFCE are “adamant” that journal impact factors or similar rankings will not be used by the assessment panels in the REF.

Curious, I looked at the REF website and easily found the actual clause that makes this clear:

No sub-panel will make any use of journal impact factors, rankings, lists or the perceived standing of publishers in assessing the quality of research outputs. An underpinning principle of the REF is that all types of research and all forms of research outputs across all disciplines shall be assessed on a fair and equal basis.

Clearly an injustice had been done. Timothy didn’t know what to do, so I urged him to speak to an eminent professor in his field and get a second opinion about the quality of his papers. I’m happy to report that this divine intervention did the trick, and some of Timothy’s papers were judged to be excellent and worthy of inclusion in the REF return after all. But I wonder how many others are being unfairly judged, and what will happen to them if they don’t have the courage to complain? Although not being included in the REF looks bad for the individual, rocking the boat is not always something that people feel safe doing – especially early-career researchers whose own positions are not yet secure.

Anna is another friend of mine, an early-career researcher at a different Russell Group university further north in Britain. She’s on an independent fellowship at the moment, but this expires in one year and she’s been on the lookout for a permanent position within the same department. When such a job was advertised, she jumped at the chance to try to win the post in a robust competitive process. She ticked every single box that the job advert wanted: great papers, both in quality and quantity; experience with intellectual property; evidence of bringing in lots of external funding; and great synergy with the work the rest of the groups were doing. The advert also made a prominent statement that women were particularly encouraged to apply because they were currently under-represented in the department. But she was rather shocked not even to be short-listed. When she made some inquiries, it transpired that the committee had triaged applicants solely on whether they had “big” papers. She was afraid to ask what “big” meant, but assumed that it referred to our old friends, the Cell/Science/Nature trinity. Her application had been packed with amazing achievements, but apparently nobody had even bothered to get beyond the publication list.

Much has been said about how impact factors are a poor judge of an individual paper, let alone of the author who wrote it. Recently my fellow Occam’s Typewriter blogger Stephen Curry posted an excellent piece, from which I’ll extract one salient nugget that nicely summarizes the problem (but do read the whole piece, as it’s wonderful):

Analysis by Per Seglen in 1992 showed that typically only 15% of the papers in a journal account for half the total citations. Therefore only this minority of the articles has more than the average number of citations denoted by the journal impact factor. Take a moment to think about what that means: the vast majority of the journal’s papers — fully 85% — have fewer citations than the average. The impact factor is a statistically indefensible indicator of journal performance; it flatters to deceive, distributing credit that has been earned by only a small fraction of its published papers.

As Stephen’s post goes on to point out, the case for impact factors being useful for judging individuals is even more ludicrous.
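
To make the skew behind those numbers concrete, here is a minimal simulation sketch. It is entirely illustrative: I’ve assumed citation counts follow a log-normal distribution (a common rough stand-in for heavily skewed count data), and the parameters are invented for the example, not fitted to any real journal.

```python
import numpy as np

# Toy model, not real data: draw "citation counts" from a log-normal
# distribution, whose long right tail mimics how a few papers hoover up
# most of the citations. Parameters are purely illustrative.
rng = np.random.default_rng(seed=1)
citations = rng.lognormal(mean=1.0, sigma=1.5, size=10_000)

# A journal impact factor is essentially a mean citation rate, so the
# sample mean plays the role of this toy journal's "impact factor".
impact_factor = citations.mean()

# How many papers fall below their own journal's average?
share_below_mean = (citations < impact_factor).mean()

# What share of all citations do the top 15% most-cited papers collect?
top_papers = np.sort(citations)[::-1][: len(citations) * 15 // 100]
share_top_15 = top_papers.sum() / citations.sum()

print(f"Toy 'impact factor' (mean citations per paper): {impact_factor:.1f}")
print(f"Papers cited less often than the journal mean: {share_below_mean:.0%}")
print(f"Citations collected by the top 15% of papers:  {share_top_15:.0%}")
```

Run it and the toy journal behaves just as Seglen described: roughly three-quarters of its papers sit below its own average, while a small minority of heavily cited papers earns most of the credit that the impact factor then spreads over everyone.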

None of this stuff is new, but this is perhaps the first time that I’ve seen at close hand the human cost of inane bibliometrics. I don’t think my friends are rare exceptions – this cancer is well and truly entrenched.

But as the nights are drawing in and a damp chill settles over dark London streets, I don’t want to end this post on a negative note. With all this impact-factor business in mind, you can imagine my amusement when I received an email from Amazon this morning, telling me that they had come up with a new metric to rank their authors by subgenre. So I was tickled to discover the following:

[Screenshot: Amazon Author Rank in the “Romantic Suspense” category]

Hot damn, I’m officially a Romantic Suspense novelist! I don’t know about you, but considering how many books are for sale overall on this planet, that number looks pretty damned good to me. I may not have a Nature paper yet, but this will do nicely in the meantime.

Post-script: If you’re a scientist getting ground down by the constant harmful vibes of being measured by inappropriate numbers, why not consider escaping into a little bit of “romantic suspense” yourself? The Honest Look and Experimental Heart, my two novels about scientists, are available on Amazon. The first is more literary, the second more hard-core geeky, but both promise chills, thrills, steaming test tubes… and satisfying lashings of lab skulduggery.


20 Responses to In which numbers lie – except when they flatter us

  1. Cromercrox says:

    Did Timothy go back to the person who dissed his research and challenge them?
    Oh, I’m #2629 in ‘Mystery’, by the way.

  2. Timothy’s eminent professor had a quiet word with the disser and said disser, to her credit, did apologize to Timothy. Which I guess is something.

    oooo….I’m sure there are far more Mystery books than Romantic Suspense books…respek!

  3. rpg says:

    I’m going to start writing in very small genres…

  4. Diervilla says:

    In my large northern university similar things are happening. Luckily (?) we are paying for eminent external people to rate our potential submissions, which helps alleviate some of the more inane metrics-based decisions.

  5. Hey, I might be number 1 in LabLit! Now there’s a thought. Just need to hack into Amazon and make it a category.

  6. Diervilla, from what I’ve heard, it’s happening in every university. People seem to have lost the ability to analyze papers any other way. The problem is that it takes effort, and not every paper is in your field, and people really seem just to want to do the quick and dirty triage. I think the whole “star” thing is just a way for people to claim that they’re not using “impact factors”. But it’s just the same thing in different clothing.

  7. cromercrox says:

    Yay! Yesterday I was #923 in Science Fiction! Break out that Ol’ Janx Spirit!!

  8. I’d like to say not everyone operates this way, and my own department’s REF panel certainly isn’t using impact factor as a criterion (I chair it, so I can say that with confidence). It is pretty daft to use it (or any other crude metric) slavishly and it is, as you say, really just laziness. As a member of the last RAE physics panel I can also say that some of us got very fed up with reading really boring papers in the ‘top’ journals. People just assumed if it was in a 4* journal it must be a 4* paper – and they were wrong. But that doesn’t mean that the sort of behaviour you’ve come up against isn’t quite pervasive, because it clearly is… Just not absolutely everywhere!

  9. Glad to hear it, Athene. I think the REF panels themselves would find it difficult to use IFs exclusively, since they’ve been so explicitly banned, but what worries me is lower down: the leads responsible for nominating papers in their departments/divisions. It’s these folks who may be using the convenient shortcuts, so that good papers never make it onto the final committee’s table.

    OK, so now I’m curious. There MUST be an underground “star” list of ranked journals somewhere, since you say “People just assumed if it was in a 4* journal it must be a 4* paper”. I would dearly love to get my hands on that list. Does anyone know where it is, or how ordinary people can have a look?

    And if it doesn’t exist, how do people know the journals are 4* – or not?

  10. I think this sort of thing is much easier in biomedical sciences, where there’s an established culture of star journals and impact factors. For example, people won’t be submitted for the REF unless they’ve published in BMJ/Lancet/etc. It’s far more difficult to pull that sort of nonsense in Computer Science, where outlet quality is well known and not really measured by Impact Factor.

  11. Richard Van Noorden says:

    Hi Jennifer,

    Disturbing story. As you hint in the comments, is this really a story about impact factors and bibliometrics? Isn’t it really a story about how evaluators have mental notes of the ‘key’ journals? This is the tricky point about HEFCE’s insistence that impact factors won’t be used in the REF. Sure, maybe not numeric impact factors. But how can you stop someone looking at a list of papers and saying – ‘that’s a good journal, that’s a journal I’ve never heard of’. Impact factors needn’t come into it: we’re talking about brands and prestige here. Impact factors could disappear tomorrow and this culture would still exist.

  12. Stephen Moss says:

    Like Athene, we also have a REF panel taking a sensible view of publications. In ophthalmology, which is a specialised field, the leading journal is Investigative Ophthalmology & Visual Science (IOVS). Our REF coordinators are quite OK with academics submitting IOVS papers, despite its IF of ~4. As many have noted, the real problem seems to arise at the level of senior management, where in some universities and departments IF is mistakenly used as an indicator of quality.

  13. zinemin says:

    The Science/Nature thing is just so strange. In my field, most senior people would say that the majority of Science/Nature papers about their topic are either wrong or insignificant, and frown upon people who try to publish there. This discourages most young people from even trying.
    But universities of course love Science/Nature and award professorships based on such publications. So the few who continue to try to get something out in Nature have a good chance of being rewarded in the long run, but of course they are not well liked in the field.
    It is pretty absurd.

  14. Jim Wild says:

    Nice post – pretty much sums up some of the weaknesses in the system. If you sit down and work out how many outputs (papers) each REF panel member is going to have to read and assess, it does raise the question of how they will manage if they don’t use journal impact factor, citation metrics or plain old reputation.

    A mischievous commentator might also note that we should all guard against unconscious bias and preconceptions. What did we each take away from the fact that Anna was at a “Russell Group University”? Does that really relate to her personal level of research excellence? 😉

  15. How it relates is that I was trying to convey that her particular university was a highly competitive environment, without giving away where she actually is. The fact that she got a fellowship there is apposite to her excellence as a candidate for the job.

  16. Jenny
    In my field there is no explicit 4* list, but people talk as if Physical Review Letters (PRL) epitomised what is excellent. I am proud to say I have never published in it, because – for all it’s a top journal in the larger field I work in – it isn’t really very relevant to my own sub-field. There is also a very specific style of paper that they require, and my work doesn’t fit that either. So, in my younger years it bothered me not to publish there; now I recognize better what a mirage it is and I don’t care. I have heard people say outrageous things about it – if so-and-so hasn’t published in it they couldn’t possibly be appointed to a post, fellowship, whatever – and I take delight in squashing the idea. It is an uphill struggle, but people do respond when challenged. Maybe the biomedical sciences are even more hung up on some of this stuff. (No one has mentioned the dreaded h-index, which is equally pernicious because it is very sub-field dependent.)

  17. rpg says:

    Y’know Athene, when first I became aware of the whole impact factor phenomenon, and related issues such as open access, I was a little bit sceptical because it seemed to me that those making the most fuss were those scientists who, frankly, weren’t that much cop. The people I trusted and admired never seemed to pay it any thought.

    So I’m pleased to see people such as yourself say things like that.

  18. Jonathan says:

    The notion of impact factors has never made any sense to me, although the brand of a journal is useful as a shortcut for knowing where to look for interesting research papers. Journal quality ought to relate in part to the strictness of the refereeing process, which would weed out all but the most interesting/novel papers. I’ve seldom noticed much difference in refereeing quality between journals in my field (astrophysics); it comes down more to the referees you happen to get than to the journal itself.

    In my own refereeing the only journal I treat very differently is Physical Review Letters (PRL), which Athene mentioned. That’s partly because they’ve sent out emails explicitly trying to raise the level of reviewing and partly because of the format. In 4 pages you really should be saying something genuinely original, because you’re unlikely to be saying something substantial. Personally I hate the 4 page format, but it works for some short ideas.

  19. Steve Caplan says:

    Congratulations on the high Amazon ranking! I’m at #4775 on the literary fiction ranking, but given that it is supposed to change hourly (and I apparently sold a few books yesterday), I may hit rock bottom tomorrow…
