Lies, Damned Lies – and Rank-ordered Lists

The end of the year is traditionally the time for lists remembering the year just gone: lists of those who died (and, for so-called stars, their marriages and divorces too), events that happened, films that bombed or triumphed, public gaffes or more celebrated quotes. There will also be lists of people’s predictions for the years ahead – whether these be serious or astrological. Lists are a way of organising our lives and our thoughts, and for many of us a way of staying on top of the chaos around us. A quick search through the newspapers turned up a whole string of entertaining titles.

Furthermore, today’s latest web list, the Top 13 Women in Technology, features fellow Occam’s Typewriter blogger Jenny Rohn.

I suppose I approach these lists with a rather more personal view this year. I was involved with the production of the Times’ Eureka list of the 100 Most Influential People in British Science earlier this autumn – a rank-ordered list. Deeply dubious about the validity of the whole exercise, I wrote up my thoughts on the process previously here. (Fellow judge Alice Bell’s thoughts on the meaning of influence can be found here, and the actual list – away from the Times’ paywall – can be found here.) Subsequently, and even more absurdly, I found myself appearing on the Telegraph’s list of the 100 Most Powerful Women in Britain. This list (which at least had the virtue of only categorising us, not attempting to draw up a ranking) had, where it dealt with scientists, little in common with the women who appeared in the Eureka list – which in itself shows just how absurd the process is (unless you think the lists ought to be subtly different, distinguishing influence from power). Mary Beard and I exchanged views as we wriggled uncomfortably about our appearance in a newspaper neither of us would read under normal circumstances, trying to decide whether we were more flattered or annoyed (we concluded we were both). So, from both sides of the judging divide, I think I can confirm the suspicions people are likely to have of these lists – they are frequently little more than froth, designed to waste readers’ time (and extract their money for the media moguls’ pockets) when they should be doing something more useful.

Unfortunately, our world is full of lists that really do matter. Not the ones that appear in the media during the fallow season at the end of a year, but the ones that decree where funding goes, for instance. These include the lists the research councils produce to decide on grants or fellowships, and also the recent lists drawn up within Government to decide which quangos go and which departments get what swingeing level of cuts. In every case it is a matter of deciding the distribution of pain. So, if the ones in the newspapers are quasi-random, with little hard evidence informing the inclusion of one person and the exclusion of another, are these painful lists any better? Having sat on enough grant- and fellowship-awarding panels, I am absolutely clear that people try to base decisions on evidence, but it is still hard to come away without suspecting that it is impossible to form a truly objective view, one which gives appropriate weight to the appropriate information. There are many reasons for this.

Leaving aside any personally vindictive refereeing (which I would actually suspect is quite rare), there are bound to be referees whose strong views clash with the proposal they are sent to referee. I recall a particular case from years ago when Dr X submitted a series of grants that were always torn to shreds by Dr Y, and always for the same reason. The research council – or indeed Dr X – should have realised this was a hopeless situation much sooner than they did. I think Dr X could have requested (they must have known the identity of Dr Y) that this particular referee not be used – something research councils allow – or the research council could simply have chosen not to use Dr Y. As I recall, the ultimate outcome was a suggestion from the panel that Dr X and Dr Y get together to put in a joint proposal to resolve the argument, which was indeed a fundamental and important one that would otherwise have been tossed around indefinitely. This was only possible because it was a research council with a standing committee (as opposed to the EPSRC’s current system of ad hoc committees, under which the longevity of the argument would have gone unnoticed).

A second problem is what happens to grant or fellowship applications which are not mainstream, or which are inter- or multi-disciplinary. In the latter case, research councils will tell you they share the names of referees and make sure that, between them, the referees cover the full spectrum of the proposal, and I don’t doubt this is what they intend to achieve. Unfortunately it leaves the grant at the mercy of referees who may say (of a biological physics grant, for instance) that the physics is mundane and they can’t comment on the biology (this from the physicist approached), coupled with a biologist remarking that the biology is not cutting edge and they don’t understand the physics; neither may be capable of judging the originality of a proposal which is novel and adventurous precisely because it joins the two strands together. Both comments will in essence damn the proposal. How can a panel member judge such a grant fairly, despite everyone’s best intentions? Even in a less extreme example, if one proposal sits squarely at the centre of a field – so that each panel member feels they have some idea of what it’s about and why it matters – and another, with equally good referee reports, sits in a less familiar corner of the discipline, how can the panel not collectively feel more comfortable with the former and rank it higher? With the best of intentions, I have seen panels agonise over this and still vote for the familiar over the strange. They may be right to do so, but it is nevertheless a decision based not solely on merit.

The third problem I would identify is the timing of incoming referees’ reports. Anyone who has ever submitted a grant must be familiar with the experience of receiving two or three favourable reviews, and feeling justifiably smug, only for a late and damning report to arrive. The problem is that if the last report is the negative one, there is a danger that the panel will only receive the applicant’s response to it after the bulk of individual preparation for the meeting has been done, or, worse, as a tabled paper. Even if the referee is out of their depth and talking nonsense (yes, sometimes referees do talk nonsense – which is of course a separate problem, usually because they have refereed at speed and carelessly), the chances are the panel won’t have time to do their homework, read the response thoroughly or check a reference from the rebuttal. They will, however, be left with a lingering feeling of doubt and nervousness. The grant is now damned by timing as much as by any ineptitude or lack of originality in the original submission.

At the end of the day, even if peer review is the least bad option, I fear that the rank ordering panels are required to carry out ends up delivering rough justice because of all these factors. Typically it is clear which applications should sit right at the top, or right at the bottom; for the rest, many factors come into play. Panels try their hardest. I have never seen a grant-giving panel take its responsibilities with anything less than seriousness, and never as chair have I had to take a group to task for not concentrating hard on the business in hand, but that doesn’t mean we get things right. Panels end up being conservative. Despite admonitions that risk is to be encouraged, this only holds up to some notional and invisible point, and feasibility studies are always looked on kindly. Members may be unconsciously swayed by factors they should be ignoring: ‘I met that referee once and (s)he’s a Good Guy – or a Heavyweight – so (s)he must be right’, despite that view contradicting the other referees’; ‘I don’t think much of that department’, so a new investigator there gets marked down; graphene is the up-and-coming material, so that grant must be better than one studying gallium arsenide… one could draw up a whole list of the unconscious biases in refereeing, which turn up in many guises. And there is always the danger, when the chips are down and difficult decisions have to be made, that a throwaway and not-well-thought-through remark from one panel member can seem like a lifeline to the others, absolving them from facing up to a difficult choice.

So, just as I said about the Eureka list, there is no single right answer. The difference is that people’s futures hang on the outcomes of grant panels in a way they never will on any media list I am likely to encounter. As the funding gets tighter, and then tighter again, the element of lottery in these lists becomes more damaging. The stated intention of some research councils to ‘concentrate’ their grant money means that earlier decisions will be reinforced in the future, so that, having lost out once, researchers may be at greater risk of losing out again. I don’t know what the answer is, but I fear these lists are going to be hanging over us for a long time to come.

8 Responses to Lies, Damned Lies – and Rank-ordered Lists

  1. rpg says:

    Very, very nice post, Athene.

    (Your biology/physics example is pertinent to me. Took us about two years to get one particular paper published for those very reasons.)

  2. K says:

    I had four (4!) proposals rejected by EPSRC this year, all just below, or two places below, the funding cutoff. One had five referee reports: four really thorough and positive, while one was damning. The last proposal had three referees giving it a mark of 6, or of 5 (but almost a 6 judging by the comments), while a fourth referee gave it a 1. I hate lists!!

  3. The second problem you identified is spot on. I work in speech technology and computational linguistics, two fields that by definition span boundaries. Since EPSRC’s specialised panels were dissolved in favour of more frequent sessions, it has been incredibly hard to get funding for the kind of work we do, because we’re not classical computer scientists. These days, the EPSRC ICT panels are incredibly diverse, and the panelists don’t necessarily get to report on grants from a field they know well.

    Often, it’s the creative melding of two different research traditions that creates innovation, but the substantial legwork involved in bringing the two together looks tedious and uninnovative to the fields involved. I’ve seen this first-hand when trying to get funding for my work on assessing the effect of cognitive function on how people interact with voice interfaces to computer systems. This is a huge unexplored area, especially if you want to get both the cognitive science and the voice interface design right in your study. The groundwork didn’t involve any sexy new paradigms in either field – it was about looking at which aspects of cognitive function affect how easily users can achieve their goals with a given interface. But without this stocktaking, we wouldn’t have known where to look further.

  4. steve caplan says:

    Athene,

    That’s a very interesting commentary. Parallel to the situation you describe for the UK, the “lists” for funding are becoming impossible to manage in the US. The first response here was to change the grant system, making the key NIH grant for most researchers (known as the R01) 13 pages instead of 25. A key reason given was that the system (mistakenly) believed it was hard to obtain good reviewers because of the ‘reading burden’. Of course the real reason is that it is so depressing to review 10 grants knowing that only one (or none) of them is likely to be funded. The arbitrariness of funding a 9th-percentile grant but not an 11th-percentile one is a terrible thing, and exactly what you describe.

    With regard to interdisciplinary studies, my limited experience thus far is that there is a little more understanding in the US. There is a real campaign to align multidisciplinary groups and, at least in my own case, there was sympathy for a proposal that combined NMR-based structural studies with more classical cell biology. It seemed clear from the reviews that the respective experts appreciated the overall aim. While this is a specific personal example, based on conversations with other US scientists I think that, at least on this issue, the US system is better equipped to cope.

    • Steve, from what I’ve heard the US does indeed seem to be doing a better job of interdisciplinary science at present. I believe the NSF runs a specific programme (if that is the correct word in this context) for Physics of Living Systems – something the UK still sadly lacks, yet highly relevant to my own research field – as well as, much more generally, having an Office of Multidisciplinary Activities. Something I didn’t mention in my post was the danger of grants falling between research councils over here, which seems to happen at a range of interfaces; the NSF can probably cope with that better too (though there may still be an interface problem between NSF and NIH).

      However, little more than a decade ago this certainly wasn’t the case, with the US seeming very compartmentalised. I recall a conversation with Steven Chu, I think only a few months before he won the Nobel Prize, in which he was bemoaning the difficulty he had had obtaining funding for his then-current work on DNA motion. Apparently the NIH wouldn’t take this research seriously, beautiful though it was, and he had to do the work under NSF funding nominally for cold atoms, because that was where his track record lay. At that point he looked enviously at the way the UK was more broad-minded; this was at a time when I could get funding for food physics from the BBSRC (Biotechnology and Biological Sciences Research Council), despite being a physicist. (And, I should add, in principle I still can; it has just got much harder, for all the reasons I wrote about.)

      • steve caplan says:

        Athene,

        Despite the apparently better treatment of multidisciplinary research in the US (at least at present), there is unfortunately an overall feeling among basic researchers – be they physicists or cell biologists – that there is a significant loss of interest in funding “basic”, non-disease-oriented research. But I think I will save that topic for an upcoming blog, as it deserves a space of its own, unfortunately…

  5. Great admiration and much head-nodding here.

    “…neither may be capable of judging the originality of the proposal simply because it joins the two strands together to produce something novel and adventurous. Both comments will in essence damn the proposal.”

    I wish you were not so right. I appreciate your giving some marks for good intentions while still awarding a scathing overall grade.
