DORA, the Leiden Manifesto & a university’s right to choose: a comment

The post below was written as a comment on Lizzie Gadd’s recent post explaining in some detail Loughborough University’s decision to base their approach to research assessment more on the Leiden Manifesto than DORA, the Declaration on Research Assessment. So you should read that first! (The comment is currently ‘in moderation’ because, like myself, Lizzie is on holiday. I suspect she is more disciplined than I am at not looking at her email whilst on holiday. I’ll update this post once the comment is approved.)

Update (18-07-18, 15:30) – the comment has now been approved. I suggest any further discussion takes place beneath Lizzie’s original post.

Even though I am currently chair of the DORA steering committee, I don’t want to get into ‘theological’ arguments about the differences between DORA and the Leiden Manifesto because they are both forces for good! Moreover, I am sure that Lizzie agrees with me that ultimately it is the development of good research assessment practices that matter and I again applaud the work that she has been doing on that front at Loughborough.

Nevertheless, I want to argue for a more expansive interpretation of what DORA means (as a declaration and an organisation) than is presented here.

It is perfectly true that DORA was born in 2012 but it would not be correct to suppose that the declaration is any more fixed in time than the Leiden Manifesto. Although the first and most prominent recommendation of the declaration is stated negatively (“Do not use journal-based metrics, such as Journal Impact Factors, as a surrogate measure of the quality of individual research articles, to assess an individual scientist’s contributions, or in hiring, promotion, or funding decisions.”), the remaining 17 recommendations are almost invariably positive, encouraging adherents to think about and create good practice. It does not limit how they should do that. The declaration is not very long so I would encourage everyone to read it in full.

Nor should it be supposed that DORA’s relevance is confined to the sciences; it has always aimed to be “a worldwide initiative covering all scholarly disciplines”. Admittedly, the work to extend that coverage has been lacking, but as the recently published roadmap makes clear, we have now placed a particular emphasis on extending DORA’s disciplinary and geographical reach. We are in the process of assembling an international advisory board from all corners of the globe. We would be glad to hear from arts, humanities and social science scholars about how DORA can help them to promote responsible research assessment in their fields.

Lizzie draws a careful distinction between not using journal metrics to assess the quality of research outputs and using them to assess ‘visibility’. She is right to do so because there is a risk that such guidance might send a subliminal message to researchers that journal metrics still count. It would be interesting to hear from Loughborough’s researchers how they interpret the guidance on these points.

This distinction is the basis of Lizzie’s argument that, because Loughborough wishes to incentivise its researchers to make their outputs more visible, they could not in good conscience sign DORA. I can see how that is an honest interpretation of the constraints of the declaration, but my own view is that it is too narrow. The preamble to the declaration lays out the argument for the need to improve the assessment of research ‘on its own merits’. This and the thrust imparted by the particular recommendations of the declaration show that it is the misuse of journal metrics in assessing research content – not its visibility – that is the heart of the matter. It seems to me that Loughborough’s responsible metrics policy is therefore not in contravention of either the letter or the spirit of DORA.

In the end, as Lizzie rightly states, it is Loughborough’s call and, again, I am sure that Lizzie and I have in common a strong desire to promote good research assessment practices. I stand by what I wrote back in 2016, in a piece bemoaning the fact that so few UK universities had signed DORA:

“I would be happy for universities not to sign, as long as they are prepared to state their reasons publicly. They could explain, for instance, how their protocols for research assessment and for valuing their staff are superior to the minimal requirements of DORA. It’s the least we should expect of institutions that are ambitious to demonstrate real leadership.”



Ready-made citation distributions are a boost for responsible research assessment

Though a long-time critic of journal impact factors (JIFs), I was delighted when the latest batch was released by Clarivate last week.

It’s not the JIFs themselves that I was glad to see (still alas quoted to a ridiculous level of precision). Rather it was the fact that Clarivate is now also making available the journal citation distributions on which they are based. This is a huge boost to the proposal made a couple of years ago by myself, Vincent Larivière and several prominent editors and publishers that citation distributions should be published by journals that advertise their JIFs.

Our proposal aimed to reduce the persistent and highly toxic influence of the unidimensional JIF on research assessment by giving authors and readers a much richer picture of the real variation of citation performance within any given journal. It depended for its impact on journals following the recipe we provided in our preprint for generating the distributions from proprietary citation data in Web of Science or Scopus. Although a number of enlightened editors were quick to adopt the practice (e.g. at PNAS, Acta Cryst. A), it did not spread as far or as rapidly as we would have wished. However, now that the distributions are available ready-made from Clarivate*, there is no reason for any journal not to follow suit.
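For readers curious what the recipe amounts to in practice: a journal’s citation distribution is simply a tally of how many of its items received 0, 1, 2, … citations in the two-year JIF window, with the JIF itself being (roughly) the mean of that skewed distribution. A minimal sketch in Python, using made-up citation counts rather than anything from our preprint:

```python
from collections import Counter

# Hypothetical citation counts for one journal's items in the two-year JIF window
citations = [0, 0, 1, 1, 1, 2, 2, 3, 5, 8, 21]

# The distribution: number of papers at each citation count
distribution = Counter(citations)
total = len(citations)

for count in sorted(distribution):
    share = distribution[count] / total
    print(f"{count:>3} citations: {distribution[count]} papers ({share:.0%})")

# The mean of this long-tailed distribution is what the JIF reports;
# here a single highly cited paper pulls it well above the median
mean = sum(citations) / total
```

Even in this toy example the point is visible: most items sit at or near zero citations while one outlier drags the average up, which is exactly why the distribution is so much more informative than the single JIF number.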

Citation Distribution

I hope that many will now opt to do so. Bianca Kramer (aka @MsPhelps) was quick off the mark, publishing a couple of examples of citation distributions from well-known journals (see above). The enormous range of citation performance is immediately apparent.

Commendably, Clarivate have gone further and disaggregated research papers from reviews and other article types in the distributions. This is something the San Francisco Declaration on Research Assessment has long called for (see point 14 in the text of the declaration); it helps to further untangle the complexity underlying the JIF. Helpfully, Clarivate also now separately report the median citation rates of primary research articles and reviews. This reveals how highly-cited reviews inflate the indicator.

So I congratulate Clarivate on a positive move that provides the information needed for more responsible use of quantitative publication indicators. Of course, the goal of establishing robust research assessment processes that are free of the JIF has yet to be achieved. We need all journals that make any mention of their JIFs to also show Clarivate’s citation distributions. And we need researchers and research managers to start internalizing what they mean.

Even then there will be more to do. For one, Elsevier has recently made a big move into journal metrics with the publication of CiteScore, its alternative to the JIF; I hope that they too will start making the underlying citation distributions available.

I am not my h-index (or my JIFs)

Then there is the residual problem of that other unidimensional indicator, the h-index. In my view, any researcher who quotes their h-index should be expected to also show the underlying citation distribution. I have done this myself recently (using data gathered manually from Google Scholar – see above) as a way of showing that there are interesting and impactful stories to tell about my research papers irrespective of where they appear on the citation distribution.
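For anyone wanting to repeat the exercise with their own publication record, the h-index is easy to compute once you have a list of citation counts: it is the largest h such that h of your papers have at least h citations each. A short sketch of my own (in Python, with invented citation counts – this is not Steve Royle’s script):

```python
def h_index(citations):
    """Largest h such that h papers each have at least h citations."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(counts, start=1):
        if count >= rank:
            h = rank  # the paper at this rank still has enough citations
        else:
            break  # counts are sorted, so no later paper can qualify
    return h

# Hypothetical citation counts for one researcher's papers
papers = [120, 45, 33, 12, 9, 7, 3, 1, 0]
print(h_index(papers))  # prints 6
```

Sorting the counts in descending order and comparing each to its rank makes the definition concrete: here the sixth-ranked paper has 7 citations (at least 6), but the seventh has only 3, so the h-index is 6. It also shows what the single number hides, namely the whole shape of the distribution on either side of that cut-off.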

Numbers are too powerful sometimes. In assessing any complex human activity like research, it is the stories, the narratives that provide the context essential to making an informed judgement.


*Important note about the use of Clarivate’s citation distributions: The initial press release didn’t mention what the re-use rights are for the plots that subscribers can now find on Web of Science, but Clarivate’s Marie McVeigh has now clarified this officially in the comment below. She confirms that journals can indeed publish these distributions (with attribution).

**For those familiar with R, Steve Royle has published a clever script that makes it easy to grab and plot h-index citation data from Google Scholar.


Opening peer review for inspection and improvement

ASAPbio Peer Review Meeting

For me the most memorable event at last week’s ASAPbio-HHMI-Wellcome meeting on Peer Review, which took place at HHMI’s beautifully appointed headquarters on the outskirts of Washington DC, was losing a $100 bet to Mike Eisen. Who would have guessed he’d know more than I did about the intergalactic space lord and UK parliamentary candidate, Lord Buckethead? Not me, it turned out.

Mike was gracious enough not to take my cash, though now I owe him multiple beers over the coming years. I doubt that the research community will have fixed peer review within that timeframe – there are no one-size-fits-all solutions to the multiple problems that were discussed at the meeting – but I did at least come away with a sense that some real improvements can be made in the near future. The gathering was a heady mixture of expertise, passion, ambition, blue-skies thinking and grey-skies reality checks.

I won’t attempt to summarise all of our deliberations. My brain no longer works that way and in any case the super-capable Hilda Bastian did it as we went along (and there is also a brief report in Science). Hilda was one of several people I knew from the internet whom I met in the flesh for the first time in Washington. For all our technology, nothing can yet touch the levels of sympathetic engagement that come from encountering your peers in the real world. It is still the best place for exchanging ideas — and for (ahem) exposing them to sobering correction.

After all was said and done, there were three ideas that I hope will endure and soon become more widely adopted. Each carries a modest cost but the benefits seem to me to be incontrovertible.

1. Open Peer Review: There are many definitions of ‘open peer review’ (at least 22 according to Tony Ross-Hellauer), but the one I have in mind here is the practice of publishing the reviewers’ anonymous comments, alongside the authors’ rebuttal, once a paper is finally accepted for publication. To my mind this degree of openness improves peer review because it incentivises professional conduct on the part of reviewers, thereby reducing the scope for sloppy work or personal attacks. It also makes the conduct of science significantly more transparent. While there was some concern that special interest groups operating in contentious areas of research like vaccines and climate science might derive ammunition by cherry-picking critical reviews, I am firmly of the view that the research community has to be ready to defend itself in the open. Closing the door on our proceedings and expecting the public to trust us will only fuel those who are already too quick to malign the scientific establishment as a conspiracy.

There are yet more benefits. Opening peer review helps to reveal the human and disputatious nature of progress in research. That will dispel the gleam of objective purity that sometimes clings, unhealthily, to the scientific enterprise. Being more open about how science works will build rather than undermine public trust. It will also help to burnish the reputation of journals that insist on rigorous peer review, and expose the so-called predatory journals that levy hefty article processing charges while providing no meaningful review.

Open peer review also paves the way for reviewers to claim credit for their work. This can be done anonymously via systems like Publons, which liaises with journals to validate the work of reviewers. Greater credit can of course be claimed if the reviewer agrees to sign their review, since this allows their reviews to be cited and accessed more effectively (especially if they are assigned a DOI – digital object identifier). However, views at the meeting on whether reviewers’ names should be made public were mixed. There are concerns that early (and not-so-early) career researchers might pull their punches if reviewing the work of more senior people. And there are risks, as yet untested as far as I know, of possible legal action against reviewers who make mistakes or who criticise the work of litigious researchers or corporations.

The overheads associated with publishing reviews are not negligible. Some effort will be required to collate and edit reviews (e.g. to remove personal or legally questionable statements, though this should reduce as reviewers become accustomed to openness); and there are technical hurdles to incorporating the publication of reviews into journal workflows. However, none of these is insurmountable, since several journals (e.g. EMBO Journal, PeerJ, Nature Communications) are already offering open peer review.

As Ron Vale wrapped up proceedings on the main day of the conference, I could sense him urging the room with every fibre of his being to recognise open peer review as an idea whose time has come. I think he’s right.

2. Proper acknowledgement of peer reviews by early career researchers (ECRs): Although the vast majority of peer review requests are issued to established researchers, in many cases the work is farmed out to PhD students and postdocs. When done properly, this can provide valuable opportunities for ECRs to learn one of the important skills of the trade, but too often their contributions are unacknowledged. Either the principal investigator does not bother to explain to the journal that the review is entirely or partially the work of junior members of their lab, or even if they do, the journal has no mechanism for logging or crediting that input.

It became clear at the meeting that this is an issue for ECRs and a sore one at that. Ideally of course, the fix would come from PIs being more transparent about how they handle their reviewing caseload, but a more effective solution would seem to be for journals to issue clearer guidelines – both to enable PIs to recruit ECRs to the task and to ensure that the journal formally recognises their contribution. Services such as Publons that offer to validate peer review contributions could also help out here.

3. Add subjective comments to ‘objective’ peer review: This is a counter-intuitive one. I am a fan of the ‘objective’ peer review established at open access mega-journals such as PLoS ONE, where the task of the reviewer is to determine that the work reported in the submitted manuscript has been performed competently and is reported unambiguously, but without any regard for its scientific significance or impact. But, as was pointed out in one of the workshops at the meeting, reviews adhering to these criteria may well be devoid of the full richness that the expert reviewer has brought to their close reading of the manuscript. It was therefore proposed (though Chatham House rules prevent me from crediting the proposer) that this expert opinion should be added to the written review. It should not form any part of the decision to publish but inclusion of this additional information – on the significance of the study and the audiences that would be most interested in it, for example – would provide a valuable additional layer of curation and filtering for the reader.

This proposal may be tricky to implement because the editors of mega-journals already have enough trouble getting some reviewers to stick to the editorial standard of ‘objective’ peer review. But it does not seem impossible to add a separate comment box to the review form to ask for the reviewer’s opinion – as a service to the reader – while making it clear that this will have no impact on the publishing decision. To me this is only a small additional ask of the reviewer, but one whose value is obvious. Such a move would also be a further barrier to the rise of the so-called OA predatory journals.

And finally… it would be remiss of me not to mention DORA, the San Francisco Declaration on Research Assessment, which is endeavouring to wean the research community from the nefarious effects of journal impact factors. While the meeting was focused on peer review of research papers, peer review is also an important component of decisions about hiring, promotion and grant funding. The new steering group of DORA, of which I am now chair, were grateful for the opportunity to announce the reinvigoration of the initiative and to discuss how DORA might help the community to develop more robust methods for research and researcher assessment.

And that’s it. Of course, you should feel free to offer peer review in the comments…


Why I don’t share Elsevier’s vision of the transition to open access

Screenshot: Working towards open access

Last week Elsevier’s VP for Policy and Communications, Gemma Hersh, published a think-piece on the company’s vision of the transition to open access (OA). She makes some valid points but glosses over others that I would like to pick up on. Some of Elsevier’s vision is self-serving, but that should come as no surprise since the company has skin in the game and naturally wants to defend its considerable commercial interests. And some of it is, frankly, bizarre. You can read the whole article for yourself – it’s not long. I would recommend also having a look at the reaction from the OECD’s Toby Green. Below, I have highlighted and commented on (in blue) the portions that struck me, and tried to fill in some of the missing pieces of what is a very tricky puzzle.

The article opens constructively:

“Elsevier […] is thinking about how […] alternative access models tailored to geographical needs and expectations can help us further advance open access.”

  • ‘Tailored’ sounds like a euphemism. In part, it reflects consideration of differences in the research intensity of different nations (even in the developed world), which means that there would be winners and losers in a switch to a gold OA model funded by article processing charges (APCs); but there is no recognition of the constraints due to ongoing global inequalities. OA ameliorates that immediately as far as accessing the literature goes, though we need to think hard about how to create OA business models that address the challenges to authors from the global south.

“Elsevier and other STM publishers generally agree with many of the authors’ observations and recommendations, notably that there may be enough money in the system overall to transition globally to gold open access.”

  • How much money is ‘enough’? Readers should be aware that Elsevier makes adjusted operating profit margins of around 37%. In 2016, according to the latest annual report of the parent company, RELX, this amounted to £800m profit on revenues of £2,320m for their science, technical and medical division. It’s no surprise that the company wants to protect their business. But that motive should be clear to all stakeholders, including academics and the public. Can publicly-funded researchers, who support high-profit publishers such as Elsevier, Wiley and Springer-Nature with their labour as authors, editors and reviewers, look the taxpayer in the eye and tell them they are delivering value for money?

“…One possible first step for Europe to explore would be to enable European articles to be available gold open access within Europe and green open access outside of Europe.”

  • This simply does not compute. It is a kind of double-speak that seeks to re-define unrestricted access – the original definition of open access – as restricted access, depending on your location. Hersh has defended this notion as creative “outside of the box” thinking. Maybe so, but it’s also outside my comprehension.

Hersh then moves on to consider mechanisms for flipping subscription journals to OA:

“One successful model is the SCOAP3 program. A particularly powerful aspect of SCOAP3 (even if initially cumbersome to administer) stems from the detailed and systematic planning of the various ways in which money needs to flow through the system for journals to become exclusively gold open access. Money is carefully redirected from library budgets to a central pool administered by CERN, and from there to publishers in the form of APCs. […] Drawing on the principles of this program could help us all with the much broader challenge of transitioning all hybrid journals to become fully gold open access.”

  • The focus here, once again, is on money – and, in my view, on preserving the status quo. SCOAP3 may well have shown that subscription journals can be flipped to OA, but reaching agreement required complex and protracted negotiations and only worked because the deal was confined to a well-defined group of researchers with links to a single, large international facility. We are a long way from seeing how that might work in less focused disciplines. Hybrid OA (publishing OA articles in subscription journals) was originally proposed as a mechanism to fund flipping but it is a pathway that Elsevier seems not to recognise. Against accusations of “double-dipping” – in which hybrid OA journals keep subscription charges up even as the proportion of OA content grows – Elsevier has maintained that it simply doesn’t exist. Have they had a change of heart?

“We believe that the primary reason to transition to gold open access should not be to save money (it won’t, and there will be winners and losers as costs are redistributed) but that it would be better for research and scholarship…”

  • Well, at least that’s (rightly) stated as a belief – an act of faith. It’s not a belief I share. There are many historical examples of new technology driving down costs and becoming available to the many, not the few: printing, telephony, cars, and digital cameras, for example. Admittedly in each of these cases a functioning market was required, which is still lacking in scholarly publishing. The reasons for this are well known and present a tough nut to crack, but if we are going to talk about money let’s also be up-front about profits and value for money. Stuart Shieber’s analytical post on the difference in value provided by commercial and non-profit publishers is illuminating on this point. It’s also worth remembering Elsevier’s tenacious defence of the Publishers Association decision tree, which to this day presents an incorrect (but revenue-raising) interpretation of the OA policy of Research Councils UK.

“Advocates for a global transition to gold open access alone should be clear that an entirely gold open access system would cost more in some regions and for some institutions – especially those that are highly research intensive and therefore pay more in a “pay to publish” model – and that they consider this a price worth paying.”

  • To my mind this is an argument for getting governments and inter-governmental bodies to take a keener interest in these affairs – they are the major paymasters, after all.

“Another reason APCs would rise is that the money flowing into the current system from outside the academic research community – i.e., journal subscriptions from industry – is estimated to be about 25 percent of the total. In a “pay-to-publish model,” systemic costs would need to be borne by the academic research community rather than shared with industry.”

  • If about 75% of the total funding for publishing comes from universities & research institutes – public institutions for the most part – then this is yet another reason for governments to take the lead in not letting costs get out of control. There is a public interest at stake here, not least because of the close links between publicly-funded research and national policies on industry and innovation around the world.

The conclusion to the piece contains this rhetorical flourish:

“A fully gold open access world inhabited only by predatory publishers who will publish anything as long as they are paid is not a healthy and prosperous world.”

  • For one thing, there is no serious prospect of a world “inhabited only by predatory publishers”. Such outfits, which scoop up APCs while providing no meaningful peer review, have gained purchase in some countries but are now feeling the heat of regulation – heat that will only increase as open peer review gains traction. Their negative impact is, in any case, arguable. Hersh also seems to be suggesting that cheaper open access journals are necessarily low-quality, but there are powerful counter-examples, such as PeerJ. (By the way, nothing is likely to be more effective at killing off predatory journals than evaluation systems that judge research papers on their intrinsic merits.)
  • For another, the talk about a “healthy and prosperous world” sounds a distinctly bum note after the earlier proposal to erect a great, golden paywall around Europe. There’s a striking contrast here with the G7 Science Communiqué, which was also published last week. We should of course always be circumspect about the pronouncements of politicians from the global stage but that document did at least articulate a vision to address global challenges and inequalities, and to use open science to bolster the robustness and utility of publicly-funded research.

And finally, we are left with the posturing (the italics are mine):

“The pace of change will ultimately be driven by researchers and the choices they make about how they wish to disseminate their research outputs. We can help them embrace open access by working closely with funders and research institutions to move beyond advocacy to possibility.”

    • Thus writes the commercial publisher advocate. Reader, beware.



Does science need to be saved? A response to Sarewitz.

I wrote this piece a few months ago at the invitation of The New Atlantis. It was supposed to be one of a collection of responses to a polemical essay that they published last year on the parlous state of modern science by Dan Sarewitz. But the publication was delayed, so I have decided to go ahead and publish now.

Update (30 Nov 2017): My response has now been published, alongside those of several other correspondents and a reply from Sarewitz. These help to amplify and clarify some of the key points of the debate and make for very interesting reading, even if we have all yet to reach an agreed conclusion.

Sarewitz article

In an essay published in The New Atlantis last year under the provocative title ‘Saving Science‘, Dan Sarewitz pulled no punches. He took exception to the post-war settlement based on Vannevar Bush’s 1945 claim that “Scientific progress on a broad front results from the free play of free intellects, […] in the manner dictated by their curiosity for the exploration of the unknown.” To Sarewitz, who is Professor of Science and Society at Arizona State University, the “great lie” of the power of curiosity-led inquiry has corrupted the scientific enterprise by separating it from the technological problems that have been responsible since the Industrial Revolution for guiding science “in its most productive directions and providing continual tests of its validity, progress, and value.” “Technology keeps science honest,” is Sarewitz’s claim. Without it, science has run too great a risk of being “infected with bias,” and now finds itself in a state of “chaos” where “the boundary between objective truth and subjective belief appears, gradually and terrifyingly, to be dissolving.”

Those are bruising words. Sarewitz has some important points to make about the interaction of science with the outside world (a theme he returned to in a more recent Guardian article), but the fevered rhetoric of ‘Saving Science’ seemed to me to dull his analytical edge.

Sarewitz is right to draw attention to the complex interplay between science and technology; and to the energising effects on science of the demands of governments, industry, and commerce to solve problems. These interactions are under-appreciated in some scientific quarters. He raises valid questions about the publish-or-perish culture within academic research that yields too much work that is uncited or of questionable reliability. And his criticism of the tendency in biomedical research sometimes to fixate on animal models at the expense of progress in clinical research hits some valid targets.

In the end, however, Sarewitz overplays his hand. Technology has certainly been a powerful driving force in scientific productivity. Yes, it can keep science honest because there is no better test than a product, a process or a medical treatment that just works. But in Sarewitz’s telling, curiosity-driven research has produced just two fundamental advances of transformational power in the last century: quantum mechanics and genomics. To do so, he has to overlook a plethora of blue-skies breakthroughs, such as the theory of evolution, antibiotics, plate tectonics, nuclear fission and fusion, the X-ray methods that cracked the structures of DNA and proteins, and monoclonal antibodies, that have been profoundly influential culturally and economically. At the same time, he underplays the stringency of the reality check that experiment and observation place on the free play of free intellects. It seems to me that both roads make for interesting journeys, even if it can often be difficult to decide which is truly the more rewarding.

Sarewitz has every right to question how far scientists should be permitted to roam free from the demands of the societies that fund them. But I can’t accept his prognosis that science must be directed because, apart from a couple of particular examples that both involve management by the military, he doesn’t say how.

The oversight of science is quite properly a preoccupation of governments – as major funders – even if it raises perennially contentious issues of freedom and responsibility for the research community. But Sarewitz’s prescription of management by technology to keep science honest is too simplistic, for reasons that emerge – unconsciously? – in his discussion of ‘trans-science’. To Sarewitz, trans-science is research into questions about systems that are too complex for science to answer – things like the brain, a species, a classroom, the economy, or the climate. But missing from this list is science itself, and the social, political, and industrial ecosystems in which it operates. Unarguably, these are issues and phenomena of huge complexity and importance.

So, how are we to figure out how best to make science work? This remains an important question for all of society. Polemic may be great for stirring debate, but the answer lies in the closely argued detail. I suggest we proceed on all sides by respecting the evidence, acknowledging our limitations, and renewing our determination to improve the connections between science and the world beyond laboratory walls.


BAMEed: the voices of the people

At the beginning of June I attended the first BAMEed conference. It was an unexpectedly memorable and inspiring occasion.

BAMEed Conference 2017
Final panel discussion at #BAMEed2017

Though billed as an “unconference” – a sort of self-organising gathering that fills old fogies like me with horror – the one-day meeting had in fact been meticulously planned. It was the brain-child of a newly-formed group of Black, Asian and Minority Ethnic educators (hence ‘BAMEed’) – Amjad Ali, Allana Gay, Abdul Chohan and Penny Rabiger – all of whom are or were teachers at primary or secondary schools. The attendees were themselves mostly primary or secondary school-teachers and mostly also Black, Asian or minority ethnic. I had the unusual and instructive experience of being in the minority.

The theme of the meeting was unconscious bias, a topic that has been moving up the agenda in higher education thanks in part to the ascendancy of the Athena SWAN charter. I know because I have attended a lunchtime workshop on it. But here, the subject was treated in greater depth, not least because of the breadth of lived experience that the speakers brought to it. Four weeks on I am still absorbing the many lessons from that sunny Saturday in Birmingham, but let me offer a few highlights.

First up was keynote speaker Professor Dame Elizabeth Anionwu. She told the story of her complicated upbringing as the unexpected child of Cambridge students, a Nigerian man and a white Catholic woman. With wit and candour she spoke of the cruelty she endured from nuns in her care home and from a drunken step-father, of discovering and getting to know her Nigerian father, and of the sheer bloodymindedness that propelled her through careers in nursing and academia.

Professor Damien Page* followed up with a talk on unconscious bias training in schools and universities, cautioning that it has to be done thoroughly if it is not to be counter-productive. Most training communicates that biases are ‘natural’ and therefore risks providing people with excuses to discriminate: “everyone is biased, so I shouldn’t be worried that I am too.” It is important not to distract people from structural and historical discrimination.

For me one of the most interesting presentations was given by Professor Paul Miller, who introduced the notion of “white sanction”, a concept he has developed in a recent paper surveying the experience of black and minority ethnic staff in secondary and higher education.

“White sanction” expresses the idea that BAME staff need white allies (or champions), or feel obliged to ‘join the club’, to progress in their careers. Such allies can play a valuable role, at least in the transition to true equality, but Miller recognised the difficulty of the position. Some members of the audience expressed understandable resentment at the notion that allies or conformity are needed, since this seems to legitimise existing power structures. In the view of many, the existence of “white sanction” is a symptom that we lack a functioning meritocracy. I agree, though I’ve certainly come across dissenting views in academia. Miller argued it’s better to focus on the structural problem to avoid the risk of calling out all whites as racist and putting even ‘allies’ on the defensive. His paper explores these ideas in more detail, defining four levels of institutional interaction with BME staff: engaged, experimenting, initiated and uninitiated. I wondered on Twitter where on the spectrum Imperial College, my own institution, might lie. Though I only got two answers, the indications are that we have work to do.

In the afternoon, Dr Christine Callender noted the glacial pace of change in opportunities for ethnic minorities by citing a call for action in the 1985 Swann Report on BME education, and a more recent one in a pithy article by Kara Swisher earlier this year that bemoaned the “mirror-tocracy” perpetuated by white men in the upper echelons of tech giants like Uber and Google. Real change has to come from engagement at the top, insisted Callender, if institutions are to convert aspirational statements into action.

The plenary sessions were interspersed with short workshops – I attended the ones on recruitment and building diverse teams, led in each case with informed passion by Patrick Ottley-O’Connor and Hannah Wilson – and the day was rounded off back in the main hall with a panel discussion (see photo above).

As valuable as the talks were, the most memorable aspect of the meeting was hearing the stories of teachers who have been short-changed by the status quo. None spoke intemperately, even in cases where there was just cause. Instead, there was a gritty and positive determination to tackle the problem head on, so that we all might do the right thing. Of course, none of this is new to the people involved, but it was a powerful reminder to me of the value of unfiltered testimony. The BAMEed Network has no need of my approval, but they have it anyway.


*Who bore an uncanny and distracting resemblance to my younger brother. 


Posted in Teaching | 2 Comments

University rankings are fake news. How do we fix them?

This post is based on a short presentation I gave as part of a panel at a meeting today on Understanding Global University Rankings: Their Data and Influence, organised by HESPA (Higher Education Strategic Planners Association).

HESPA University Rankings Panel, May 2017
Yes, it’s a ‘manel’ (from the left: me, Johnny Rich, Rob Carthy). In our defence, Sally Turnbull, who was chairing, sat off to one side and two participants (one male and one female) had to withdraw at short notice. Photo by @UKHESPA (with permission).

The big news on the release of the Times Higher Education World University Rankings for 2017 was that Oxford, not Caltech, is now the No. 1 university in the world.

According to the BBC news website, “Oxford University has come top of the Times Higher Education world university rankings – a first for a UK university. Oxford knocks California Institute of Technology, the top performer for the past five years, into second place.”

Ladies and gentlemen, this is what is widely known as ‘fake news’. There is no story here because it depends on a presumption of precision that is simply not in the data. Oxford earned 1st place by scoring 95.0 points, versus Caltech’s 94.3. (Languishing in a rather sorry sixth place is Harvard University, on 92.7).

The central problem here is that no-one knows exactly what these numbers mean, or how much confidence we can have in their precision. The aggregate scores are arbitrarily weighted estimates of proxies for the quality of research, education, industrial contacts and international outlook. And they include numbers based on opinions about institutional reputation.

In all likelihood these aggregate scores are accurate to a precision of about plus or minus 10% (as I have argued elsewhere). But the Times Higher (and most other rankers – I don’t really mean to single them out) don’t publish error estimates or confidence intervals with their data. People wouldn’t understand them, I have been told. But I doubt it. That strikes me rather as an excuse to preserve a false precision that drives the stories of year-on-year shifts in rank even though they are, for the most part, not significant.
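To make the point concrete, here is a minimal sketch. The plus-or-minus 10% figure is simply my assumption from the argument above – the rankers publish no error estimates – but once you attach even rough error bars to the aggregate scores, the intervals for the top institutions overlap almost entirely and the rank order dissolves into noise:

```python
# Illustrative sketch only: the +/-10% relative uncertainty is an assumption,
# not a figure published by any ranking provider.
scores = {"Oxford": 95.0, "Caltech": 94.3, "Harvard": 92.7}
uncertainty = 0.10  # assumed relative error on the aggregate scores

for name, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    lo, hi = s * (1 - uncertainty), s * (1 + uncertainty)
    print(f"{name}: {s:.1f} (plausible range {lo:.1f} to {hi:.1f})")

# Oxford's 0.7-point lead over Caltech is tiny compared with the ~9.5-point
# half-width of its error bar, so the three intervals overlap heavily and
# "Oxford is No. 1" cannot be distinguished from "Caltech is No. 1".
```

On that assumption, even sixth-placed Harvard sits comfortably inside Oxford’s plausible range.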

Now Phil Baty, the editor of the Times Higher Rankings (and someone who, to give him his due, is always happy to debate these issues) is stout in his defence of what the Times Higher is about. A couple of months ago he wrote in an editorial criticising the critics of university rankings:

“beneath the often tedious, torturous ad infinitum hand wringing about the methodological limitations and the challenges of any attempt to reduce complex universities to a series of numbers, the single most important aspect of THE’s global rankings is often lost: the fact that we are building the world’s largest, richest database of the world’s very best universities.”

But who can define ‘best’? What is the quantitative measure of the quality of a university? Phil implicitly acknowledges this by conceding that “there is no such thing as a perfect university ranking.” I would ask, is there one that is good? Further, if the point is to assemble a database, why do the numbers in the different categories have to be weighted and aggregated, and then ranked? Just show us the data.

The problem, as is well known, is that these rankings have tremendous power. They are absorbed by university managers as institutional aims. Manchester University’s goal, for example, stated right at the very top of their strategic plan is “to be one of the top 25 research universities in the world”.* How else is that target to be judged except by someone’s league table? In setting such a goal, one presumes they have broken down the way that the marks are totted up to see how best they might maximise their score. But how much is missed as a result? Why not be guided by your own lights as to what is the best way to create a productive and healthy community of scholars? Surely that is the mark of true leadership?

Such an approach would enable institutions to adopt a more holistic approach to what they see as their missions as universities. And to include things that are not yet counted in league tables, like commitment to equality and diversity, or to good mental health, or – in these troubled times when we are beset on all sides by fake news – to scholarship that upholds the value of truth.

A couple of years ago a friend of mine, Jenny Martin, who is a professor at Griffith University in Australia, suggested some additional metrics to help universities complete the picture. For example:

How fair is your institution – what’s your gender balance?
How representative are your staff and student bodies of the population that you serve?
How much of their allocated leave do your staff actually take?
How well do you support staff with children?
And… How many of your professors are assholes?

Now, Jenny may have had her tongue in her cheek for some of these but there is a serious point here for us to discuss today. How often do rankers think about the impact on the people who work in the universities that they are grading?

I would argue that those who create university league tables cannot stand idly by (as bibliometricians used to do), claiming that they are just purveyors of data. It is not enough for them to wring their hands as universities ‘mis-use’ the information they provide.

It is time for rankers to take some responsibility. So, I call for providers to get together and create a set of principles that governs what they do. A manifesto, if you will, very much in the same vein as the Leiden manifesto introduced in 2015 by the bibliometrics community.

To give you a flavour, the preface to the Leiden manifesto reads:

“Data are increasingly used to govern science. Research evaluations that were once bespoke and performed by peers are now routine and reliant on metrics. The problem is that evaluation is now led by the data rather than by judgement. Metrics have proliferated: usually well intentioned, not always well informed, often ill applied. We risk damaging the system with the very tools designed to improve it, as evaluation is increasingly implemented by organizations without knowledge of, or advice on, good practice and interpretation.”

What is true of bibliometrics is true of university ranking. Therefore I call on this community here today to take action and come up with its own manifesto. Since we are in London, we could even call it the London manifesto. (After Brexit, we’re about to become the centre of nowhere and nothing, so it would be nice to have something for people to remember us by!)

I stand ready to help with its formulation. I urge you to consider this seriously and quickly. Because if providers won’t do it, maybe some of us will do it for you.

Thank you.


A couple of afterthoughts on the meeting:

It was noticeable that the rankings provider who spoke after the panel addressed more of the technical shortcomings and cultural issues of university league tables than those who presented earlier in the day. It is important to keep the debate on rankings and university evaluation alive.

I was surprised that there were relatively few questions after each talk from the audience, which consisted mostly of people involved in strategic planning at various universities. I hope that doesn’t indicate a certain degree of resignation to the agenda-setting power of rankers and, as a result, a reluctance to consider the broader impacts. But I remain concerned. In answer to my question about why one of the providers had bemoaned the fact that some university leaders rely too heavily on rankings, I was told – candidly –  that in some cases he felt it was a matter of poor leadership.

I was struck by an example mentioned by my co-panellist, Rob Carthy, from Northumbria University, which pointed out one of the perverse effects of rankings. His university works hard to select and recruit Cypriot students even though they often only do one A level (a feature of the school system). In doing so, however, the average A level tariff of their intake drops, which, on some league table measures, will reduce their score. Rankings therefore disincentivise searches for student talent that look beyond mere grades. I suspect they may also be reducing the motivation of some universities to widen participation.


*To be fair to Manchester, on this web-page the phrase appears to have been edited to read: “Our vision is for The University of Manchester to be one of the leading universities in the world by 2020.”

Posted in Science, Scientific Life | Tagged , , | 2 Comments

The Cathedral on the Marsh

I’ve already shared this video on Twitter and Facebook but wanted to post it here as a more permanent record. Two weeks ago I fulfilled the ambition, held since I had seen Nic Stacey’s and Jim Al-Khalili’s quite wonderful BBC documentary on thermodynamics, to visit the steam engines at the Crossness sewage pumping station. Three of the four engines stand idle, in rusted silence, while the fourth – oiled and glossed with fresh paint – huffs and shucks with mechanical life. With my iPhone camera I tried to capture some of the poetic rhythm of its motion.

I also took some photographs, which you can find on flickr.

There is something noble in creating a vaulted cathedral for these magnificent engines, which did such foul work. I was reminded of a line from Kenneth Clark’s television history of Western art in which he describes the grand rooms within the old naval hospital at Greenwich (a few miles upstream of Crossness) and pauses to reflect: “What is civilisation? A state of mind where it is thought desirable for a naval hospital to look like this and for the inmates to dine in a splendidly decorated hall.”  The hospital opened its doors in 1712 but was converted into the Royal Naval College in 1873, just a few years after Crossness itself started pumping.

I shall have to pay it a visit – that can be my next ambition.

Posted in History of Science, Science | Tagged , , | Comments Off on The Cathedral on the Marsh

The March for Science: advocacy masterstroke or PR misfire?

Last night I made my way to an upstairs room at The Castle pub near Farringdon to participate in a debate organised by Stempra on the forthcoming March for Science.

March for Science debate
The panel (Photo by Anastasia Stefanidou)

The question before the panel and the assembled audience was whether the call to arms, first issued by scientists in the USA but now heard and answered across the globe, is an “advocacy masterstroke or PR misfire”. In truth, there was not much actual debate, though there was plenty of discussion. The panel, consisting of myself, Fiona Fox (director of the Science Media Centre), and environmental science writer and campaigner Mark Lynas, broadly agreed that the march is a good thing, albeit for different reasons and with different qualifications – and we’ll all be marching in London on Saturday. Here below, for what they are worth, are the opening remarks that I prepared.

For other interesting takes on the March for Science, I can recommend this article by Michael Halpern and Ed Yong’s interview with Hahrie Han, a social scientist who studies activism.

“When I first heard about the March for Science in London I was extremely lukewarm about the whole idea. 

I could see the point of Americans marching for science, given the threats posed by the Trump administration – both to evidence-based policy on issues like climate change and vaccination, and to the science budget. 

Those threats are hard to judge given the wayward nature of Trump’s administration. He has shown himself to be ham-fistedly ineffectual, both on his travel ban from countries with large Muslim populations, and the attempt to overturn Obamacare. Even so, if I were working in the USA right now, I would not have hesitated to sign up in support because there are specific policies to protest. 

But I did wonder about the spill-over of the protest to the UK and the rest of the world. I could see why the Women’s March, initiated in the US also in reaction to Trump, had rapidly spread to the rest of the world – because the issues of women’s rights and gender equality are ones that are very much alive – in many forms – across the globe.

But although I am a scientist – and therefore very pro-science! – I didn’t immediately see the point of the march for science in the UK. (I can’t speak for the rest of the world). 

It’s not that I’m one of those researchers whose focus is entirely on my research. I’ve campaigned before for science. I was one of the founder members of Science is Vital and helped organise our rally to protest against the threat of cuts to the science budget back in 2010 – and on several other occasions since then. 

We’re a very small and entirely amateur organisation (though hopefully not too amateurish!), so we are mindful to concentrate our efforts. And to do that we rather deliberately concentrate our messaging, which has almost always been about making the case for public investment in R&D. And to that end, we have tried to tailor our campaigns so that they are heard by politicians and I think we’ve had some success in doing that. 

But what is the point of the march for science? The political point, I mean? What are the aims? As stated on the web-site, these are:

“The March for Science celebrates publicly funded and publicly communicated science as a pillar of human freedom and prosperity. We unite as a diverse, nonpartisan group to support science that upholds the common good, and political leaders and policymakers who enact evidence-based policies in the public interest.”

Well, who could be against that? The organisers’ statement is at once the march’s greatest strength and greatest weakness. 

It’s broad enough and vague enough to accommodate all-comers. Hopefully (and I am mostly a hopeful person), it is roomy enough to find a place for the diversity of voices within the scientific community and those who feel peripheral to it or even excluded (though that could well be wishful thinking on my part). I know there has been lots of discussion about failings on diversity and inclusion by the US organisers, but the UK organisers appear to me to have done a better job. There seems to me to be a decent mix of speakers for the London march and I hope that from them – and from the placards brought by marchers – a diversity of voices and viewpoints will be heard. We’ll see…

But by being so broad and so vague, the organisers avoid asking some of the hard questions that come up when the gears of science and politics mesh (or crash, depending on who’s driving). On their web-site the organisers have mentioned the concerns about the emergence of “post-truth” or “post-fact” types of public discourse in the EU referendum and in Trump’s election campaign. That’s certainly worth protesting, but the follow-up questions are hard.

For example, should we regulate the media more strictly than we do to ensure that they are factual? Or does that play into the hands of those who are all too ready to cry “fake news”? Should we insist that scientists volunteer to do more spots on the Today programme? How far should scientists stray into the political domain, and how far outside their field of expertise should they be allowed to speak? Does speaking out compromise their scientific integrity? It certainly opens them to attack – as any climate scientist will tell you.

I think all of these questions have answers (and I hope we’ll get to some of them in the discussion). The answers aren’t easy and they involve issues that we probably need to keep constantly under review. 

I don’t think these questions will be addressed in the march on Saturday. Marching is a rather blunt instrument for democratic discourse. 

And these aren’t even all the questions that we need to be asking about science. I think there’s another debate to be had about how much the public should be enabled to influence the science that they fund. That was one of the disconnects that emerged most strongly in my mind in the painful aftermath of the EU referendum. To me, inequality is the single most important issue facing Britain today – in education, in employment, in healthcare – and it is one that is not being touched by all the hullaballoo over Brexit. And yet it once was. It was one of the issues that seeded movements like Science for the People in the US or the British Society for Social Responsibility in Science, which brought scientists and the public together in the 1970s to try to find ways to do science that was relevant to people’s lives. (For more background on these movements, see Alice Bell’s article in Mosaic).

Now, I’m not suggesting that science doesn’t touch people’s lives in many different and important ways today – clearly it does, and that’s worth shouting about. But there is a sense of powerlessness out there, felt by many people. And I wonder what science and scientists can or should do to address that? 

So, I see the march for science as that quintessentially scientific thing: an experiment. An experiment that may well fail, that may have to be repeated with improved methods, or to test a modified hypothesis. But an experiment that is still worth trying because it makes you think and, I hope, will get people talking.” 


Posted in Science & Politics | 2 Comments

Grim resolve at the House of Commons on the scientific priorities for Brexit

On Tuesday morning last week MPs, MEPs, and representatives of various organisations with a stake in post-Brexit UK science gathered in the Churchill Committee room at the House of Commons for the launch of the “Scientific priorities for Brexit” report, published by Stephen Metcalfe MP, chair of the Commons select committee on science and technology.

Science Priorities for Brexit
Venki Ramakrishnan, President of the Royal Society, addresses the meeting in the Churchill Room.

The report, released under the auspices of the Parliamentary and Scientific Committee, is the result of widespread consultation with universities, learned societies, campaign groups and industrialists. It pulls into sharp focus the principal challenges raised by Brexit and sets out priorities for the British negotiators who will have to hammer out an exit deal with the EU over the next 18 months, and for the government’s domestic R&D policy.

The recommendations are split into four key areas. First and foremost are the rights of EU nationals working in the UK R&D sector – at all levels.

It was heartening to see people identified as the central concern. However, the recent failure of Parliament to pass an amendment to the Article 50 bill which would have guaranteed their residency rights nationwide hung over the room like a grey cloud. It was in my opinion a callow piece of political miscalculation, a view that I do not hold alone. At the meeting Labour MEP Clare Moody relayed the views of UK nationals living in the EU whom she’d spoken to. All were clear in their belief that it would in fact be in their interests for the UK government to behave honourably with regard to EU nationals living in Britain.

The remaining three areas are investment in research and innovation, collaboration and networks, and regulation and trade. In brief, the report emphasises the importance of finding some way to maintain access to EU funding mechanisms, the immense value of the multilateral collaborative networks that the EU is so good at fostering, and the need to ensure that the UK remains at the leading edge of developing regulatory frameworks to facilitate trans-national science and trade.

I won’t go into further details because the summary statement is succinct (and supported by a longer evidence report) – and because I have covered some of this ground before. But I want to reiterate the point that the government’s current and welcome commitment to underwrite UK participation in the EU’s Horizon 2020 research programme will not be sufficient to protect UK-based researchers looking to forge long-term collaborations in the EU. This is because their European colleagues cannot have confidence in the ability of their British partners to continue to collaborate on funding applications made post-Brexit. The Chancellor’s announcement in the Autumn statement of significant new money for UK research and innovation over the next four years is certainly encouraging – as indeed is the CBI’s new-found enthusiasm for a total UK R&D spending target of 3% of GDP (re-iterated at the meeting) – but the government needs to go faster and further. In particular, it needs to articulate clear plans for shoring up Britain’s international research prospects beyond 2020.

Those prospects seem to take a hit every time one of the more thoughtless Brexiteers opens his mouth. Boris Johnson has been upbraided for his wayward rhetoric on Britain ‘liberating’ itself from the EU. And this past week Bill Cash MP surprised the European Scrutiny Select Committee by suggesting that UK negotiators should remind the EU of the cancellation of German debt after World War 2. Such arrogant pronouncements, coupled with the fact that Brexit has been dubbed ‘Empire 2.0’ in the corridors of Whitehall, serve only to confirm the impression that Brexit is driven by an ignorant and backward-looking vision of Britain and its history. They are squandering the good will forged over many years thanks to research collaborations and many other joint activities between Britain and the EU.

But it’s not just the image of Brexit, put about by the likes of Johnson and Cash, that is the problem. While it was encouraging last Tuesday to see politicians come together from across the political spectrum to fight for science in the forthcoming negotiations, the mood in the plush surroundings of the Churchill Committee room was one of grim resolve. The difficulties we face are deep-rooted because of the way that Brexit rips through the international fabric of science, plunging us in a direction that takes us away from our colleagues, from our collaborations, and from greater purpose. I thought by this stage I would be less angry. But I’m not, and this fight goes on.

Posted in Brexit, Science & Politics | Comments Off on Grim resolve at the House of Commons on the scientific priorities for Brexit

Science, art and Art

Last week I attended the award ceremony of the Wellcome Image Awards. Every time I go to this event I tell myself I’ll submit an entry for the following year, but somehow I never manage to get a submission organised. I suspect my opportunities are dwindling because the standard of entries seems to be getting higher and higher.

Rita Levi-Montalcini

The competition is often dominated by images from microscopy. Many of these are beautiful and arresting but this year I was pleased to see more entries that were more self-consciously artistic – with a capital ‘A’. They told stories rather than simply creating eye-catching abstractions from cells or biomolecules captured in microscopes, and I say that as someone who has worked in the molecular realm for the whole of his research career and made efforts to convey the strange appeal of studying life at the level of the atom.

My favourite piece in this year’s competition is the digital painting of Rita Levi-Montalcini by Russian artist Daria Kirpach. Levi-Montalcini, an Italian Jew, was forced to work in secret during the Mussolini regime. After the end of World War II she moved to the USA to continue her research and was jointly awarded the 1986 Nobel Prize in Physiology or Medicine for discovering nerve growth factor (NGF). Her life and her life’s work – and her elegant style of dress – are rendered with graceful simplicity in Kirpach’s portrait.

However, the competition judges were of a different mind and selected a haunting rendition of Crohn’s Disease called “Stickman” as the overall winner. But I am happy to concede that it too is a powerful work of Artistry.

Of course, your tastes may differ too. Have a look for yourself. You can see all 22 entries on Wellcome’s web-site – and vote for your favourite.


Posted in Science & Art | Tagged | Comments Off on Science, art and Art

Interview with the author

Those of you who have read all 346 posts on my Reciprocal Space blog will have no need to read this one. You probably already have a sense of what I do and what I’m like – my science, my hobbies, my hobby-horses, and my foibles.

But on the off chance that you’re new here, or are a faithful reader who just can’t get enough, here is an interview I did with The Free Think Tank. I say ‘interview’ though this was no Michael Parkinson affair. There was no cosy tête-à-tête in comfy armchairs. Instead, it was pure Web2.0 – they sent me an email with questions, I wrote my replies, and they posted the whole thing online, prefaced by a short introduction and illustrated with a photo taken of me at the EBI last year which refutes the hypothesis that black shirts make me look slim.

Stephen Curry speaking at the EBI in 2016
Does my tum look big in this?

I figured there was no point in doing the interview without giving honest answers, so I tried to be candid, even if I couldn’t prevent myself straying from seriousness from time to time. These things are all a matter of taste but I think my interview is definitely funnier than the ones with Jim Al-Khalili and Athene Donald, though probably not as comedic as Dean Burnett’s.

Whether it’s more informative or inspirational to younger minds, I will leave for you to decide.


Posted in Scientific Life | Comments Off on Interview with the author