The end of the year is traditionally the time for lists remembering the year just gone: lists of those who died (and, for so-called stars, their marriages and divorces too), events that happened, films that bombed or triumphed, public gaffes or more celebrated quotes. There will also be lists of people’s predictions for the years ahead – whether these be serious or astrological. Lists are a way of organising our lives, our thoughts, and for many of us a way of staying on top of the chaos around us. A quick search through the newspapers came up with these entertaining titles:
- The IoS ‘Smug List of 2010’ (starting off with bankers)
- The Guardian’s ‘20 things we learned in 2010’ (only 20?)
- The Times’ ‘Images from the Front Line of Life’ (well worth a look if you can get behind the paywall)
and the inevitable
- The best dressed women of the year, from the Telegraph.
I suppose I approach these lists with a rather more personal view this year. Earlier this autumn I was involved in producing the Times’ Eureka list of the 100 Most Influential People in British Science – a rank-ordered list. Deeply dubious about the validity of the whole process, I wrote up my thoughts previously here. (Fellow judge Alice Bell’s thoughts on the meaning of influence can be found here, and the actual list – away from the Times’ paywall – can be found here.) Subsequently, and even more absurdly, I found myself appearing on the Telegraph’s list of the 100 Most Powerful Women in Britain. This list (which at least had the virtue of merely categorising us rather than attempting a ranking), where it dealt with scientists, had little in common with the women who appeared in the Eureka list, which in itself shows just how absurd the process is (unless you think the lists ought to be subtly different, distinguishing influence from power). Mary Beard and I exchanged views as we wriggled uncomfortably about our appearance in a newspaper neither of us would read under normal circumstances, trying to decide whether we were more flattered or annoyed (we concluded we were both). So, from both sides of the judging divide, I think I can confirm the suspicions people are likely to have of these lists – they are frequently little more than froth, designed to waste readers’ time (and extract their money into the media moguls’ pockets) when they should be doing something more useful.
Unfortunately, our world is full of lists that really do matter. Not the ones that appear in the media during the fallow season at the end of the year, but the ones that decree where funding goes, for instance. These include the lists the research councils produce to decide on grants or fellowships, and the recent lists produced within Government to decide which quangos go and which departments suffer what swingeing level of cuts. In every case the task is to decide the distribution of pain. So, if the lists in the newspapers are quasi-random, with little hard evidence informing the inclusion of one person and the exclusion of another, are these painful lists any better? Having sat on enough grant- and fellowship-giving panels, it is absolutely clear to me that people try to base decisions on evidence, but it is still hard to come away without a suspicion that a truly objective view – one that gives appropriate weight to the appropriate information – is impossible to achieve. There are many reasons for this.
Leaving aside any personally vindictive refereeing (which I suspect is actually quite rare), there are bound to be referees whose strong views clash with the proposal they are sent to referee. I recall a particular case from years ago when Dr X submitted a series of grants that were always torn to shreds by Dr Y, and always for the same reason. The research council – or indeed Dr X – should have realised this was a hopeless situation much sooner than they did. Dr X, who must have known the identity of Dr Y, could have requested that this particular referee not be used – something research councils allow – or the research council could simply have chosen not to use Dr Y. As I recall, the ultimate outcome was a suggestion from the panel that Dr X and Dr Y get together to put in a joint proposal to resolve the argument, which was indeed a fundamental and important one that would otherwise have been endlessly tossed around. This was only possible because it was a research council with a standing committee (as opposed to the EPSRC’s current system of ad hoc committees, where the longevity of the argument would have been missed).
A second problem is what happens to grant or fellowship applications which are not mainstream, or which are inter- or multi-disciplinary. In the latter case, research councils will tell you they share the names of referees and make sure that, between them, the referees cover the full spectrum of the proposal, and I don’t doubt this is what they intend to achieve. Unfortunately this leaves the grant at the mercy of referees who may say of, for instance, a biological physics grant that the physics is mundane and they can’t comment on the biology (this from the physicist approached), while the biologist remarks that the biology is not cutting edge and they don’t understand the physics; neither may be capable of judging the originality of a proposal that is novel and adventurous precisely because it joins the two strands together. Both comments will in essence damn the proposal. How can a panel member judge this grant fairly, despite everyone’s best intentions? Even in a less extreme example: if one proposal sits squarely at the centre of a field – so that each panel member feels they have some idea of what it’s about and why it matters – and another, with equally good referee reports, is in a less familiar corner of the discipline, how can the panel not collectively feel more comfortable with the former and rank it higher? With the best of intentions, I have seen panels agonise over this and still vote for the familiar over the strange. They may be right to do so, but it is nevertheless a decision based not solely on merit.
The third problem I would identify is the timing of incoming referees’ reports. Anyone who has ever submitted a grant must be familiar with the experience of receiving two or three favourable reviews, and feeling justifiably smug, only for a late and damning report to arrive. The problem is that if the last report is the negative one, there is a danger that the panel will only receive the applicant’s response to it after the bulk of individual preparation for the panel has been done, or worse, as a tabled paper. Even if the referee is out of their depth and talking nonsense (yes, referees sometimes do talk nonsense – a separate problem, usually because they have refereed at speed and carelessly), the chances are the panel won’t have time to do their homework, read the response thoroughly or check a reference from the rebuttal. They will, however, be left with a lingering feeling of doubt and nervousness. The grant is now damned by timing as much as by any ineptitude or lack of originality in the original submission.
At the end of the day, even if peer review is the least bad option, I fear the rank ordering that panels are required to carry out ends up delivering rough justice because of all these factors. Typically it is clear which applications should sit right at the top or right at the bottom, but for the rest many factors come into play. Panels try their hardest. I have never seen a grant-giving panel take its responsibilities with anything less than seriousness, and never as chair have I had to take a group to task for not concentrating on the business in hand, but that doesn’t mean we get things right. Panels end up being conservative. Despite admonitions that risk is to be encouraged, this only holds up to some notional and invisible point, and feasibility studies are always encouraged. Members may be unconsciously swayed by factors they should be ignoring (‘I met that referee once and (s)he’s a Good Guy – or a Heavyweight – so (s)he must be right’, despite the fact that their view contradicts other referees’; ‘I don’t think much of that department’, so they mark down a new investigator there; graphene is the up-and-coming material, so that grant must be better than one studying gallium arsenide… one could draw up a whole list of the unconscious biases in refereeing, which turn up in many guises). There is always the danger, when the chips are down and difficult decisions have to be made, that a throwaway and not-well-thought-through remark from one panel member can seem like a lifeline to the others, absolving them from facing up to a difficult choice.
So, just as I said about the Eureka list, there is no single right answer. The difference is that people’s futures hang on the outcomes of grant panels in a way they never will on any media list I am likely to encounter. As the funding gets tighter, and yet tighter again, the element of lottery in these lists becomes more damaging. The stated intention of some research councils to ‘concentrate’ their grant money means that earlier decisions will be reinforced in the future, so that, having lost out once, researchers may be at greater risk of losing out again. I don’t know what the answer is, but I fear these lists are going to be hanging over us for a long time to come.