You’ll have heard the story about women whingeing about how their proudly submitted papers got rejected by a premier journal without being sent out to referees. Or that the comments they received from referees were unduly harsh, while a male colleague’s paper got through on the nod without multiple resubmissions. Just a bunch of females having a moan, wasn’t it, because they can’t hack it? Well, no. It turns out they (we) were right. Gender bias exists in the editorial and refereeing process, according to the evidence the Royal Society of Chemistry (RSC) has just published regarding its own journals.
Melinda Duer and I drew attention to the problems in publishing when it comes to women in an article published in Times Higher Education this spring, an article quoted in the RSC report and possibly one of its spurs. We noted that, whereas there was some data regarding a ‘higher bar’ for publishing for women in economics, journals had so far not looked in depth (as far as we could tell) at the issues in the various science disciplines. I am delighted that the RSC has now produced evidence from their own suite of journals highlighting (amongst other hard-hitting points) that, as we suspected:
- Biases exist at each step of the publishing profile. Many of these biases appear minor in isolation, yet their combined effect puts women at a significant disadvantage.
- Women are less likely than men to submit to journals with higher impact factors, and they are also more likely to have an article rejected without review.
- Biases operate at editorial level too. The choice of reviewer and editorial agreement with a review are influenced by gender.
Or rather, I’m not delighted these biases exist, but I am pleased that hard data about the whole process is finding its way into the public domain. The RSC is calling – as we did previously – for publishers to look harder at their own data to identify if similar problems are found in their journals too.
Does this matter? Of course it does. When people wring their hands and enquire why there are fewer women reaching the top of the research ladder little attention has been paid to the role of journals. Publication metrics matter. They matter a great deal when it comes to appointment and promotion, for instance. The infographic below – one of a range of related figures that can be found accompanying the report – shows the steady decline in numbers of women along the publication pipeline.
If women are losing out at each stage of the process, an additional hurdle is introduced into their careers which will make their lives significantly harder, on average, than those of a corresponding man. This isn’t exactly a case of ‘male by default’ at appointment, but it is certainly a case of an uneven playing field which will hinder women moving up the ladder. There comes a point, following setback after setback in getting hard-won results published, at which a woman may simply feel the game is not worth the candle and quit academia regardless of the quality of her work. This is not good for research. It ceases to be the case that excellence inevitably wins out. We are all the losers, the public as well as the research community.
I was not able to attend the launch of the report, but I understand that various men spoke up about the fact the evidence shows how the current system fails many. I believe Tom Welton went so far as to say something about how the bullies win out, or words to that effect. Too true. Our current system contains perverse incentives and ‘publish or perish’ seems to be one of them. We need to do better. The University of Cambridge’s 2014 book The Meaning of Success was meant to kickstart a discussion about what we should be valuing in our universities. It failed. For a little while people talked enthusiastically about how we should broaden criteria to take into account other factors, such as mentoring, investing time in training PhD students properly, leading on Athena Swan applications or dutifully fulfilling departmental housekeeping roles. And then the world slipped back into the bigger (or more) is better mode of operation.
The biases introduced by our current systems are also highlighted in the recent and not-entirely-satisfactorily-detailed response from UKRI to the request from the Science and Technology Select Committee’s inquiry into the impact of science funding policy on equality, diversity, inclusion and accessibility. The evidence UKRI submitted shows, for instance, that success rates for women are about 2% lower than for men across the years reported, and that women typically apply for – and are awarded – smaller grants than men. When it comes to ethnicity, there was a shocking 10% difference in success rates between white PIs and ethnic-minority PIs in 2018/19. It isn’t surprising, given this evidence, that the number of black professors in the UK is so dismally low. It appears to have taken the hard work of Rachel Oliver (Professor of Materials Science here in Cambridge) and her colleagues – who pitched successfully for the Select Committee to explore this topic – to prompt the request from Norman Lamb to UKRI for even this fairly limited data to be released.
It is easy to spout that academia is all about excellence, but in the face of the mounting evidence that bias lurks in many disparate places it is less easy to believe in the truth of the statement. It is more than time that all parts of the academic landscape enumerate the many different places that bias might creep in, collect evidence as to whether it does or does not, and then do something about it. Sometimes it will be within the host department or institution: who supports ECRs writing their first grants? What happens when it comes to internal sifts? Is mentoring always on offer? It might be at a research council panel where bias can (and based on the UKRI data presumably does) creep into evaluation. It could be in the publication part of the ecosystem, as the RSC report highlights. Or it could be in the letters of reference, where gendered language is by now well recognized.
Congratulations to the RSC for their report. Now over to other publishers to scrutinize more than simply how many women they have in their pool of referees and other basic but limited metrics.