‘Unconscious bias’ has become very much part of the conscious process that many organisations try to bring to bear on their decision-making, be it with regard to promotions or appointments. However, what do they mean by it and how do they go about it? Often the advisory notes (or equivalent) don’t go beyond reminding everyone to think about the issues and to be aware that biases may affect outcomes unless each and every member of the panel stays alert to them. The trouble is, unconscious bias turns up not just in the individuals involved in the decisions, but also in many of the component parts of the paperwork, which need to be deconstructed if a panel is to recognise how hidden influences may play a part. I was reminded of this as I was putting together my submission to the recent call for evidence from the Commons Science and Technology Select Committee on Diversity in STEM, while simultaneously observing various committees engaged in decision-making.
One of the more comprehensive videos I’ve seen, in that it highlights a number of different but specific traps we may all fall into, is this one the ERC uses to make its panel members aware (I believe it is still current for them; it is certainly still on their Working Group for Gender Issues webpages). When I first watched it, I realised that ‘affinity bias’ was not a form of unconscious bias I had myself previously taken on board. Finding something in common with an applicant for a job had previously only meant to me that it was easier to get someone talking, not that sharing an interest or background experience might influence my decision. I should have realised; we hear enough about Old Etonians ending up with Government jobs, but I had not extended the idea to someone sharing a common taste in music or something more obscure. The link I give above to affinity bias refers to five sorts of bias. Googling the topic, however, points to sites describing variously three to sixteen different types of bias. There are plenty of pitfalls in recruitment.
However, I want to remind people of the biases that other people will be introducing into the paperwork, which don’t get discussed enough in these situations. These may turn up in different parts of an application, ranging from the publication list to letters of reference – as well as in other figures of merit which a panel may choose to seek out. The danger of letters of reference being gendered is becoming much more widely appreciated, with sites devoted to helping you identify where words may subtly be conveying things other than what you intended. I’ve discussed this topic previously on this blog. This website from Arizona State University highlights the importance of emphasising accomplishments rather than effort. ‘Hard-working’ is not exactly praise when applying for an academic position, even though it could hardly be described as inherently negative. Tom Forth, a data scientist, has produced a site where you can even insert your draft letter and see just what kinds of words you’ve introduced – to check whether the implications are what you meant.
This topic, as I say, is by now quite well known (although that doesn’t mean it’s always dealt with appropriately). I am increasingly worried by the less obvious concerns. Let me start with double standards, neatly summed up in the Goethe quote I wrote about before:
‘Girls we love for what they are; young men for what they promise to be.’
It isn’t always obvious unless you’re concentrating really hard. I watched a committee debate whether a senior woman with many last-author positions in author lists was little more than the holder of the purse strings, the implication being that she should have been first-named author. This seemed bizarre, since the first-author position is usually reserved for the PhD student or postdoc who has done the actual experiments, rather than for the person who initiated the work. I didn’t hear the same comment raised about any of the men under consideration. Was that bias? I fear it was, but cannot prove it.
What about the fact that women’s papers are less cited? Many committee members will delve into Web of Science or Google Scholar to analyse citations (‘this person has even fewer than me’ was an unhelpful comment I heard recently). Leaving aside the fact that sub-disciplines vary hugely in their practices of both publication and citation, there are well-recognised gender differences in citation. A recent study of citations of papers published in elite medical journals found a very significant difference. As the Nature commentary on the paper said:
‘papers with women as primary authors had one-third fewer median citations than did those with men as primary authors, and that papers with women as senior authors received about one-quarter fewer citations than did those with men as senior authors. And papers whose primary and senior authors were both female received just half as many citations as did papers whose primary and senior authors were both male.’
This indicates that checking on citations is not the objective metric evaluators may imagine. (Women are also less likely than men to cite their own papers.)
Then there is the problem women face in even getting their papers published in the first place. Melinda Duer and I highlighted this as a potential issue a few years back, based on the experience of women we knew and on work analysing practice in disciplines outside science (notably economics). The work of the Royal Society of Chemistry analysing its own publications, in part stimulated by our article, demonstrated that at every stage from submission onwards, women were indeed disadvantaged. As the report put it:
‘Biases exist at each step of the publishing profile. Many of these biases appear minor in isolation, yet their combined effect puts women at a significant disadvantage.’
Another nail in the coffin of using publication metrics as a valid quantitative criterion in making decisions. But how many panels actually bear this in mind as they compare Dr Joe Smith and Dr Jo Smith?
I worry that these additional hurdles, introduced as bias creeps into the development of a career, are not yet feeding into decision-making committees. And this is just about gender. Studies on how CVs are read have demonstrated just how much the name at the top affects how the rest is judged, and this has been shown to be true for ethnic minority ‘names’ in the same way as for female ‘names’, although there are variations according to the precise apparent heritage (see this paper for instance, or here). How the other sorts of disadvantage I describe here play out across ethnicity will need further study. However, in the UK there is no doubt that ethnic minorities do worse in grant allocation, compounded by intersectionality, as UKRI statistics make plain.
There is still a long way to go in ensuring grant-awarding panels and appointment and promotion committees really do make the best decisions, and that all members are sensitive to a multitude of possible sources of bias. Let us hope these wider aspects filter through as fast as possible if equity is to be achieved.