Academic freedom and responsibility: why Plan S is not unethical

Since its announcement on 4th September the European Commission’s plan to make a radical shift towards open access (OA) has caused quite a stir. Backed by eleven* national funding agencies, the plan aims to make the research that they support free to read as soon as it is published. This is a major challenge to the status quo, since the funders are effectively placing subscription journals off limits for their researchers, even if the journals allow green OA (publication of the author-accepted manuscript) after an embargo period; Plan S also specifically excludes hybrid open access except in cases where journals have an agreed schedule for flipping to OA. The plan has been welcomed as “admirably strong” by OA advocate Peter Suber, though he has also offered cautionary notes on some aspects. Others have been less enthusiastic. A central charge, from some publishers and some academics, is that Plan S infringes academics’ freedom to choose how and where their work is published and that it is therefore unethical.

I disagree. The claim that Plan S is unethical derives from an understanding of academic freedom that appears to me to rest on foundations that, if not shaky, are at least highly questionable.

I realise this is contentious territory. And I accept that the views of many scholars on this issue spring from deep convictions about the nature of the academic calling. There are real complications and tensions, not the least of which are disciplinary differences, especially between the natural sciences, engineering and medicine on the one hand, and the arts and humanities on the other. I therefore propose to tread carefully. But ultimately I believe the disputes about Plan S can be located at the blurry, shifting and negotiable boundary between academic freedom and responsibility and it is this boundary that I am mainly interested in exploring.

Let’s look first at the particular claims made with regard to the incursion of Plan S into academic freedom.

The statement made by the International Association of Scientific, Technical and Medical Publishers (STM) asserts that “it is vital that researchers have the freedom to publish in the publication of their choice”. No documentary support is offered to justify its interpretation of academic freedom so we must look elsewhere. Before doing so we should note that the STM’s view fits well with the status quo – a state of affairs that is financially very rewarding for many publishers, so there is a vested interest in play that must inevitably colour any assessment of their claims.

The academic view advanced in trenchant terms by Kamerlin and colleagues is that “Plan S clearly violates one of the basic tenets of academic freedom – the freedom to publish research results in venues of the researcher’s choosing.” However, despite the fact that the authors of this piece are all researchers or scholars of one type or another, they give no citations to back up their claim. Perhaps they take that feature of academic freedom as a given, something that is so widely accepted that it no longer requires a supporting reference. If so, that is a difficult position to sustain. For one thing, choice of publishing venue is not mentioned at all in the community-authored Wikipedia article on academic freedom. If we are to properly debate the question of whether choice of publication venue is a “basic tenet” of academic freedom, we need an evidence base of some sort.

J Britt Holbrook, one of Kamerlin’s co-authors, goes into more depth in a response to Marc Schiltz, the president of Science Europe and one of the prime movers behind Plan S. Holbrook mentions the definition offered by the American Association of University Professors in their 1940 Statement of Principles on Academic Freedom and Tenure, which presumably is widely accepted by scholars in the USA, though I don’t know how much purchase this document has in the rest of the world. The AAUP defines three elements to academic freedom, the first of which pertains to the conduct of research:

“Teachers are entitled to full freedom in research and in the publication of the results, subject to the adequate performance of their other academic duties; but research for pecuniary return should be based upon an understanding with the authorities of the institution.”

The key phrase is in the first sentence – “full freedom in research and in the publication of results”. To some that implies full freedom in the choice of where to publish. As I understand Holbrook, this is the interpretation that he places on it. But the document isn’t specific on the point and other interpretations appear equally valid. The intent could be merely to guarantee the right to publish, free from any political or institutional constraint on what you can write, without implying that authors must also be completely free in the choice of venue. Indeed, on this point it is important to note the second, qualifying part of the sentence – “subject to the adequate performance of their other academic duties” – which raises the question of academic responsibilities, at least to their university.

The latter part of that quotation from the AAUP statement is perhaps even more relevant to Plan S (which, as I understand it, applies only to funded research). The AAUP notes that “research for pecuniary return” must be subject to an “understanding” with the employing university. The phrase “pecuniary return” is open to various interpretations. It could mean a direct fee, or grant funding that contributes directly or indirectly to the academic’s salary. The AAUP is rather vague on the point, but this qualifier does seem to suggest that academic freedom is subject to consideration of the interests of other stakeholders.

This isn’t the only instance of vagueness in that short sentence. Consider what exactly the AAUP means by “full freedom in research”. In part, it seems reasonable to interpret this as asserting the freedom to research any question or topic. But what about academics’ freedom in their choice of the tools of enquiry? In the UK, for example, when funding is awarded to researchers for the purchase of large items of equipment, there is usually a requirement to ask for price quotations from two or more suppliers and an accompanying duty to secure good value for money in the purchase. On a narrow interpretation of the AAUP statement, this would constitute an infringement of academic freedom, though I have never heard anyone make such a complaint. While this may seem like a trivial consideration in the present discussion, to me it illustrates the complexity underlying even apparently simple statements. Here it is a reminder of the need to be very careful about context when examining the justice of particular claims made for academic freedom.

Where else might we look for greater clarity on the shared understanding of academic freedom? One of the more detailed documents that discusses and defines academic freedom is UNESCO’s Recommendation concerning the Status of Higher-Education Teaching Personnel, published in 1997 and first brought to my attention earlier this year on Twitter by the journalist Richard Poynder. This is a long and detailed document but in paragraph 12, highlighted by Poynder, there appears to be an unequivocal assertion that the definition of academic freedom includes choice of publishing venue:

“higher-education teaching personnel should be free to publish the results of research and scholarship in books, journals and databases of their own choice and under their own names, provided they are the authors or co-authors of the above scholarly works.” 

However, again, it is important to examine the context of such statements and here at least the UNESCO document is more helpful than the much briefer AAUP statement. UNESCO provides a lengthy preamble to locate its definition of academic freedom in appropriate social, political and academic contexts. The preamble reflects not just the rights of academics but also their duties, and the rights and expectations of other stakeholders. In composing the document, the authors are “conscious that higher education and research are instrumental in the pursuit, advancement and transfer of knowledge” and “conscious that governments and important social groups, such as students, industry and labour, are vitally interested in and benefit from the services and outputs of the higher education systems”; they are concerned “regarding the vulnerability of the academic community to untoward political pressures which could undermine academic freedom”; and mindful that “the right to education, teaching and research can only be fully enjoyed in an atmosphere of academic freedom and autonomy for institutions of higher education and that the open communication of findings, hypotheses and opinions lies at the very heart of higher education.” (Italicised emphasis above and elsewhere added by me).

In outlining the guiding principles of the UNESCO document, the issue of academic responsibility is fleshed out in more detail. Its authors note the need to ensure that academics are free to “pursue new knowledge without constriction” but also note that “teaching in higher education is a profession: it is a form of public service that requires of higher education personnel expert knowledge and specialized skills acquired and maintained through rigorous and lifelong study and research; it also calls for a sense of personal and institutional responsibility for the education and welfare of students and of the community at large.” They also observe that “where public funds are appropriated for higher education institutions, such funds are treated as a public investment, subject to effective public accountability”, that “the funding of higher education is treated as a form of public investment the returns on which are, for the most part, necessarily long term, subject to government and public priorities” and that “the justification for public funding is held constantly before public opinion.”

These statements come just before paragraph 12 (quoted in part above) and must be borne in mind if we are to better understand the authors’ intentions. To help with that, it is worth also considering paragraph 12 in full:

“12. The publication and dissemination of the research results obtained by higher-education teaching personnel should be encouraged and facilitated with a view to assisting them to acquire the reputation which they merit, as well as with a view to promoting the advancement of science, technology, education and culture generally. To this end, higher-education teaching personnel should be free to publish the results of research and scholarship in books, journals and databases of their own choice and under their own names, provided they are the authors or co-authors of the above scholarly works. The intellectual property of higher-education teaching personnel should benefit from appropriate legal protection, and in particular the protection afforded by national and international copyright law.”

The preamble and principles put a clear emphasis on academic freedom as a freedom from undue political interference in the questions that academics may ask and write about, and it is this concern that seems uppermost in their minds when they write about the freedom to publish. One interpretation of their specific statement about journal choice could therefore be that they wanted academics not to be forced by instructions from governments or institutional leaders to only publish in venues that might limit the exposure of their ideas. There is a clear emphasis also in the document on public service and accountability, which envisages academic scholarship and professionalism as things that are valued certainly, but also as things in which the public has a vested interest. Might that interest also reasonably include a desire to facilitate the widest possible dissemination of research and scholarship?

Further uncertainty about the precise intent of the UNESCO authors with regard to mentioning choice of publication venue in their definition of academic freedom comes from their restatement of the definition in paragraph 27 as:

“the right, without constriction by prescribed doctrine, to freedom of teaching and discussion, freedom in carrying out research and disseminating and publishing the results thereof, freedom to express freely their opinion about the institution or system in which they work, freedom from institutional censorship and freedom to participate in professional or representative academic bodies.”

In this formulation there is no specific mention of choice of where to publish. If, as claimed by Kamerlin and colleagues, this choice is a “basic tenet” of academic freedom, one would expect to see it stated consistently in authoritative documents.

I would argue that neither the AAUP statement nor the UNESCO document posits the free choice of where to publish academic work as a core component of academic freedom or an unfettered right. The various statements, particularly in the UNESCO document, about the wider public interest in academic research and teaching support the view that academic freedom is understood to operate in an environment where a reasonable level of academic responsibility and accountability are also expected.

Clearly there are complications here, but I hope at least that the above analysis gives a clearer view of the boundary where Plan S has landed. For what it’s worth I believe that academics should certainly be as free as possible to choose where to publish, in acknowledgement of their professionalism and expertise. I think it is therefore important that the implementation of Plan S strives to ensure that there remains a rich variety of outlets. But we also need to acknowledge that at present academics’ publishing choices are constrained by the perverse incentives that have grown up around metrics of journal prestige. For that reason, I was pleased to see that reform of research evaluation is at the heart of Plan S. If it can help to drive real change on this front, arguably Plan S will make a positive contribution to academic freedom.

Clearly also, the questions of how different academics envisage their responsibilities and what might constitute a reasonable level of public accountability remain valid matters for debate. One also has to remember that the UNESCO document was written in 1997, well before the Berlin Declaration on Open Access which came along six years later. It seems highly unlikely that the UNESCO authors would have given much consideration to the implications of the impact of open access on academic freedom. That said, paragraph 13 hints that the authors might have seen open access, an idea that owes its being to the digital technology of the internet, as an opportunity to enhance academic freedom by facilitating the exchange of ideas:

“13. The interplay of ideas and information among higher-education teaching personnel throughout the world is vital to the healthy development of higher education and research and should be actively promoted. To this end higher-education teaching personnel should be enabled throughout their careers to participate in international gatherings on higher education or research, to travel abroad without political restrictions and to use the Internet or video-conferencing for these purposes.”

Of course, we cannot know what their view might have been of how open access or Plan S interact with academic freedom. Times have moved on and the UNESCO document is now over twenty years old. But these are matters that have to be kept under continuous review and debate as technology and norms change. One point where I am in complete agreement with Kamerlin and her co-authors is that the voices of academics certainly have to be heard as Plan S moves to implementation and I understand that moves to facilitate that dialogue are already afoot.

Part of that dialogue will also need to consider the question of licensing under Plan S, which is in need of clarification. If I have read him correctly, a significant portion of Holbrook’s claim that Plan S is unethical relates to the plan’s explicit and implicit requirement for a CC-BY licence on published research, which permits access and re-use, in whole or in part, by commercial and non-commercial users as long as proper attribution is made to the original authors. While it should be emphasised that Plan S also envisages authors retaining copyright on their publications (perhaps along the lines developed by the UK Scholarly Communications Licence), there is resistance to the use of CC-BY licences, particularly among some humanities scholars (though not all) and also among some researchers in the natural sciences.

I am not going to dwell on the details of this debate except to observe that humanities scholars appear to be more personally invested in their published works (in addition to often not being directly funded to write them, though the question of funding raises yet further complications) and that that inevitably affects their view of their rights as authors. Many of these issues have been explored in depth by Martin Eve in his freely available book, Open Access and the Humanities. I will focus instead on the perceptions of research scientists, largely informed by my own experiences.

I think it is fair to say that there is still a lot of confusion among academics about what licensing means. Like many other scientists, for most of my career I never gave copyright a second thought and willingly – though not really knowingly – transferred it for free entirely and exclusively to the publishers of my papers. But I’m thinking about it now thanks to the debates aroused by open access about what scholarly publishing is for. And I find that I can’t think about it properly without also taking account of my rights and responsibilities as a researcher and an academic. Some of those responsibilities arise from the contracts that I entered into when I accepted funding for my research – typically from a publicly-funded UK research council or a charity such as the Wellcome Trust. In the same way that I didn’t see requirements to submit equipment purchases to a process of competitive tender as unreasonable, I haven’t seen requirements that seek to maximise the accessibility of my published results as undue interference in my research or an infringement of my academic freedom. This is because I accept there has to be a reasonable balance between my freedom to explore the questions of my choosing, and the funders’ interest in the return on their investment.

Of course, there is a proper debate to be had about what funders should consider to be a reasonable return. The arrival of Plan S is just the latest opportunity to revisit it. At present in the UK it seems to me we have – give or take the occasional protest – a shared recognition among politicians, academics and the public that there needs to be a balance between blue skies, curiosity-led research and work that is more strategic or applied. In an open society, the particular point of balance is rightly a matter of public debate between these different stakeholders. In Britain in recent years much of this discussion has revolved around the tensions arising from the impact agenda, a topic that has been explored in an interesting and constructive article on academic freedom published by Holbrook in 2017. Holbrook raises important points about the neoliberal capture of science policy via the impact agenda (particularly by the EU), though in my view he overplays the influence of that agenda and underplays the extent to which seeking real-world impact from research work is in tune with the motivations that bring many scientists to the laboratory in the first place. Nevertheless, he concludes that academics need both to develop a more positive view of academic freedom that embraces the common good and to engage in conversation with “the communities in which our universities are situated”.

On that last point we can certainly agree. I look forward to the continuing conversations – with other academics and the communities we serve – about Plan S and about how best to define and protect academic freedom.

*Update 01 Oct 2018, 10:23 – there are now twelve funders backing Plan S. Thanks to Ross Mounce for the correction.


Ten Years a Blogger


Today is the tenth anniversary of my very first blog post. When I look back at that day in 2008 when I set out my stall on Reciprocal Space it seems a long time ago and a long distance away. It’s been quite a journey.

Some things haven’t changed. I still hate the terminology, though I have mostly managed to swallow my embarrassment.

“I know the etymology: web-log, b-log, blog. It makes perfect sense but it’s such a silly-sounding word that it seems to demean the process. I would be embarrassed to admit that I do it, just because it has such a stupid name.”

And my blog manifesto is unaltered, though I like to think that my writing has on occasion given readers pause for reflection.

“I won’t promise to post regularly; that way I will avoid the repetition of future apologies for failing to write. I won’t promise to be unembarrassed to admit that I am a blogger. I won’t promise to have anything terribly insightful to say.”

Starting a blog has had many unintended but interesting consequences. I enjoyed the free rein of it, the chance to write about whatever I liked – books I’d read, nights at the synchrotron, time served on grant panels and, of course, impact factors.

I launched Reciprocal Space not long after becoming a professor but blogging led me as never before to reflect on the business of research, the internal culture of academia and the interactions of scientists with the rest of society. It got me involved in Science is Vital’s impassioned and effective campaign for science funding in the UK; in debates about open access and scholarly publishing (still very much ongoing and in the news this week with the announcement of the European Commission’s radical Plan S); in policy work on research evaluation – I co-authored the Metric Tide report and now have the honour of serving as Chair of the San Francisco Declaration on Research Assessment (DORA); and even, sporadically, in film-making.

As part of the Occam’s Typewriter crew I also had the privilege of writing for the Guardian Science Blog network for the past six years. That was exciting and terrifying in equal measure but allowed us to reach entirely new audiences. Sadly the Guardian decided to close the network at the end of August. None of us involved are quite sure why. Perhaps it wasn’t generating the traffic needed to be sustainable. Perhaps their priorities simply shifted. But I am sorry to see it go, not only for the loss of a prominent platform, but because I think that the bloggers there were often able to provide a reasoned counter to some of the wilder claims made daily in politics and the media. I think I had always tried to take the wider, more understanding view – even of Brexit (though in 2016 I severely over-estimated the British government’s capacity for pragmatism) and the opinions on science of Simon Jenkins.

But times change. The blogosphere itself has altered beyond recognition from the heady days of 2008. Much of the commenting activity has migrated to Twitter, whose short form seems only to have intensified the vitriolic tendencies of discussion on social media. And nothing lasts for ever, so the termination of the Guardian Science Blog network is a chance to think anew. Several new portals have already opened – at the new Cosmic Shambles blog or on *Research. Others are currently under discussion.

For myself, I’m currently considering options. I hope at least to contribute to the Scientists for EU blog; as far as I am concerned Brexit remains the most serious threat to the polity and prosperity of this country.

I am also trying to recalibrate how I can balance writing with my responsibilities at Imperial, where I have taken on the role of Assistant Provost for Equality, Diversity and Inclusion (in which capacity I recently started another blog!). Transitions and balances can be tricky but I sense an opportunity and, though ten years older, I feel strangely energised for a new challenge.

A chapter may have closed but the book is still open.


DORA, the Leiden Manifesto & a university’s right to choose: a comment

The post below was written as a comment on Lizzie Gadd’s recent post explaining in some detail Loughborough University’s decision to base their approach to research assessment more on the Leiden Manifesto than DORA, the Declaration on Research Assessment. So you should read that first! (The comment is currently ‘in moderation’ because, like myself, Lizzie is on holiday. I suspect she is more disciplined than I am at not looking at her email whilst on holiday. I’ll update this post once the comment is approved.)

Update (18-07-18, 15:30) – the comment has now been approved. I suggest any further discussion takes place beneath Lizzie’s original post.

Even though I am currently chair of the DORA steering committee, I don’t want to get into ‘theological’ arguments about the differences between DORA and the Leiden Manifesto because they are both forces for good! Moreover, I am sure that Lizzie agrees with me that ultimately it is the development of good research assessment practices that matters and I again applaud the work that she has been doing on that front at Loughborough.

Nevertheless, I want to argue for a more expansive interpretation of what DORA means (as a declaration and an organisation) than is presented here.

It is perfectly true that DORA was born in 2012 but it would not be correct to suppose that the declaration is any more fixed in time than the Leiden Manifesto. Although the first and most prominent recommendation of the declaration is stated negatively (“Do not use journal-based metrics, such as Journal Impact Factors, as a surrogate measure of the quality of individual research articles, to assess an individual scientist’s contributions, or in hiring, promotion, or funding decisions.”), the remaining 17 recommendations are almost invariably positive, encouraging adherents to think about and create good practice. It does not limit how they should do that. The declaration is not very long so I would encourage everyone to read it in full.

Nor should it be supposed that DORA’s relevance is confined to the sciences; it has always aimed to be “a worldwide initiative covering all scholarly disciplines”. Admittedly, the work to extend that coverage has been lacking, but as the recently published roadmap makes clear, we have now placed a particular emphasis on extending DORA’s disciplinary and geographical reach. We are in the process of assembling an international advisory board from all corners of the globe. We would be glad to hear from arts, humanities and social science scholars about how DORA can help them to promote responsible research assessment in their fields.

Lizzie draws a careful distinction between not using journal metrics to assess the quality of research outputs and using them to assess ‘visibility’. She is right to draw it carefully because there is a risk that using journal metrics even in this limited way might send a subliminal message to researchers that such metrics still count. It would be interesting to hear from Loughborough’s researchers how they interpret the guidance on these points.

This distinction is the basis of Lizzie’s argument that, because Loughborough wishes to incentivise its researchers to make their outputs more visible, they could not in good conscience sign DORA. I can see how that is an honest interpretation of the constraints of the declaration, but my own view is that it is too narrow. The preamble to the declaration lays out the argument for the need to improve the assessment of research ‘on its own merits’. This and the thrust imparted by the particular recommendations of the declaration show that it is the misuse of journal metrics in assessing research content – not its visibility – that is the heart of the matter. It seems to me that Loughborough’s responsible metrics policy is therefore not in contravention of either the letter or the spirit of DORA.

In the end, as Lizzie rightly states, it is Loughborough’s call and, again, I am sure that Lizzie and I have in common a strong desire to promote good research assessment practices. I stand by what I wrote back in 2016, in a piece bemoaning the fact that so few UK universities had signed DORA:

“I would be happy for universities not to sign, as long as they are prepared to state their reasons publicly. They could explain, for instance, how their protocols for research assessment and for valuing their staff are superior to the minimal requirements of DORA. It’s the least we should expect of institutions that are ambitious to demonstrate real leadership.”

 


Ready-made citation distributions are a boost for responsible research assessment

Though a long-time critic of journal impact factors (JIFs), I was delighted when the latest batch was released by Clarivate last week.

It’s not the JIFs themselves that I was glad to see (still alas quoted to a ridiculous level of precision). Rather it was the fact that Clarivate is now also making available the journal citation distributions on which they are based. This is a huge boost to the proposal made a couple of years ago by myself, Vincent Larivière and several prominent editors and publishers that citation distributions should be published by journals that advertise their JIFs.

Our proposal aimed to reduce the persistent and highly toxic influence of the unidimensional JIF on research assessment by giving authors and readers a much richer picture of the real variation of citation performance within any given journal. It depended for its impact on journals following the recipe we provided in our preprint for generating the distributions from proprietary citation data in Web of Science or Scopus. Although a number of enlightened editors were quick to adopt the practice (e.g. at PNAS, Acta Cryst. A), it did not spread as far or as rapidly as we would have wished. However, now that the distributions are available ready-made from Clarivate*, there is no reason for any journal not to follow suit.
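The basic recipe is simple enough to sketch. Assuming you already have the per-article citation counts for a journal’s JIF window (the counts below are invented for illustration, not data for any real journal), the distribution is just a tally:

```python
from collections import Counter

def citation_distribution(citations, cap=20):
    """Tally per-article citation counts into bins 0..cap,
    lumping everything above cap into the top bin."""
    tally = Counter(min(c, cap) for c in citations)
    return {k: tally.get(k, 0) for k in range(cap + 1)}

# Invented counts for eleven articles in a journal's JIF window.
counts = [0, 0, 1, 1, 1, 2, 3, 3, 5, 8, 40]
dist = citation_distribution(counts, cap=10)
print(dist[0], dist[1], dist[10])  # 2 3 1
```

Plotting that tally as a bar chart gives the kind of distribution shown below. Note that in this toy example the mean (about 5.8 citations per article, the quantity a JIF-style indicator reports) sits well above the median of 2, because a single highly cited article drags it upwards; that skew is precisely what the distributions make visible.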

Citation Distribution

I hope that many will now opt to do so. Bianca Kramer (aka @MsPhelps) was quick off the mark publishing a couple of examples of citation distributions from well-known journals (see above). The enormous range of citation performance is immediately apparent.

Commendably, Clarivate have gone further and disaggregated research papers from reviews and other article types in the distributions. This is something the San Francisco Declaration on Research Assessment has long called for (see point 14 in the text of the declaration); it helps to further untangle the complexity underlying the JIF. Helpfully, Clarivate also now separately report the median citation rates of primary research articles and reviews. This reveals how highly-cited reviews inflate the indicator.

So I congratulate Clarivate on a positive move that provides the information needed for more responsible use of quantitative publication indicators. Of course, the goal of establishing robust research assessment processes that are free of the JIF has yet to be achieved. We need all journals that make any mention of their JIFs to also show Clarivate’s citation distributions. And we need researchers and research managers to start internalizing what they mean.

Even then there will be more to do. For one, Elsevier has recently made a big move into journal metrics with the publication of CiteScore, its alternative to the JIF; I hope that they too will start making the underlying citation distributions available.

I am not my h-index (or my JIFs)

Then there is the residual problem of that other unidimensional indicator, the h-index. In my view, any researcher who quotes their h-index should be expected to also show the underlying citation distribution. I have done this myself recently (using data gathered manually from Google Scholar – see above) as a way of showing that there are interesting and impactful stories to tell about my research papers irrespective of where they appear on the citation distribution.
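For anyone curious about the mechanics, the h-index is easy to compute once the citation counts are in hand (gathering the data is the tedious part). A minimal Python sketch, using made-up citation counts purely for illustration:

```python
def h_index(citations):
    """Return the h-index: the largest h such that at least
    h papers have h or more citations each."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank  # this paper still clears the threshold
        else:
            break
    return h

# Hypothetical citation counts for six papers
print(h_index([48, 20, 11, 7, 3, 1]))  # -> 4
```

Plotting that same ranked list as a bar chart gives exactly the kind of citation distribution shown above, which is far more informative than the single number.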

Numbers are too powerful sometimes. In assessing any complex human activity like research, it is the stories, the narratives that provide the context essential to making an informed judgement.

 

*Important note about the use of Clarivate’s citation distributions: The initial press release didn’t mention the re-use rights for the plots that subscribers can now find on Web of Science, but Clarivate’s Marie McVeigh has now clarified this officially in the comment below. She confirms that journals can indeed publish these distributions (with attribution).

**For those familiar with R, Steve Royle has published a clever script that makes it easy to grab h-index and citation data from Google Scholar.


Opening peer review for inspection and improvement

ASAPbio Peer Review Meeting

For me the most memorable event at last week’s ASAPbio-HHMI-Wellcome meeting on Peer Review, which took place at HHMI’s beautifully appointed headquarters on the outskirts of Washington DC, was losing a $100 bet to Mike Eisen. Who would have guessed he’d know more than I did about the intergalactic space lord and UK parliamentary candidate, Lord Buckethead? Not me, it turned out.

Mike was gracious enough not to take my cash, though now I owe him multiple beers over the coming years. I doubt that the research community will have fixed peer review within that timeframe – there are no one-size-fits-all solutions to the multiple problems that were discussed at the meeting – but I did at least come away with a sense that some real improvements can be made in the near future. The gathering was a heady mixture of expertise, passion, ambition, blue-skies thinking and grey-skies reality checks.

I won’t attempt to summarise all of our deliberations. My brain no longer works that way and in any case the super-capable Hilda Bastian did it as we went along (and there is also a brief report in Science). Hilda was one of several people I knew from the internet whom I met in the flesh for the first time in Washington. For all our technology, nothing can yet touch the level of sympathetic engagement that comes from encountering your peers in the real world. It is still the best place for exchanging ideas — and for (ahem) exposing them to sobering correction.

After all was said and done, there were three ideas that I hope will endure and soon become more widely adopted. Each carries a modest cost but the benefits seem to me to be incontrovertible.

1. Open Peer Review: There are many definitions of ‘open peer review’ (at least 22 according to Tony Ross-Hellauer), but the one I have in mind here is the practice of publishing the reviewers’ anonymous comments, alongside the authors’ rebuttal, once a paper is finally accepted for publication. To my mind this degree of openness improves peer review because it incentivises professional conduct on the part of reviewers, thereby reducing the scope for sloppy work or personal attacks. It also makes the conduct of science significantly more transparent. While there was some concern that special interest groups operating in contentious areas of research like vaccines and climate science might derive ammunition by cherry-picking critical reviews, I am firmly of the view that the research community has to be ready to defend itself in the open. Closing the door on our proceedings and expecting the public to trust us will only fuel those who are already too quick to malign the scientific establishment as a conspiracy.

There are yet more benefits. Opening peer review helps to reveal the human and disputatious nature of progress in research. That will dispel the gleam of objective purity that sometimes clings, unhealthily, to the scientific enterprise. Being more open about how science works will build rather than undermine public trust. It will also help to burnish the reputation of journals that insist on rigorous peer review, and expose the so-called predatory journals that levy hefty article processing charges while providing no meaningful review.

Open peer review also paves the way for reviewers to claim credit for their work. This can be done anonymously via systems like Publons, which liaises with journals to validate the work of reviewers. Greater credit can of course be claimed if the reviewer agrees to sign their review, since this allows the review to be cited and accessed more effectively (especially if it is assigned a DOI – digital object identifier). However, views at the meeting on whether reviewers’ names should be made public were mixed. There are concerns that early (and not-so-early) career researchers might pull their punches if reviewing the work of more senior people. And there are risks, as yet untested as far as I know, of possible legal action against reviewers who make mistakes or who criticise the work of litigious researchers or corporations.

The overheads associated with publishing reviews are not negligible. Some effort will be required to collate and edit reviews (e.g. to remove personal or legally questionable statements, though this should reduce as reviewers become accustomed to openness); and there are technical hurdles to incorporating the publication of reviews into journal workflows. However, none of these is insurmountable, since several journals (e.g. EMBO Journal, PeerJ, Nature Communications) are already offering open peer review.

As Ron Vale wrapped up proceedings on the main day of the conference, I could sense him urging the room with every fibre of his being to recognise open peer review as an idea whose time has come. I think he’s right.

2. Proper acknowledgement of peer reviews by early career researchers (ECRs): Although the vast majority of peer review requests are issued to established researchers, in many cases the work is farmed out to PhD students and postdocs. When done properly, this can provide valuable opportunities for ECRs to learn one of the important skills of the trade, but too often their contributions are unacknowledged. Either the principal investigator does not bother to explain to the journal that the review is entirely or partially the work of junior members of their lab, or even if they do, the journal has no mechanism for logging or crediting that input.

It became clear at the meeting that this is an issue for ECRs and a sore one at that. Ideally of course, the fix would come from PIs being more transparent about how they handle their reviewing caseload, but a more effective solution would seem to be for journals to issue clearer guidelines – both to enable PIs to recruit ECRs to the task and to ensure that the journal formally recognises their contribution. Services such as Publons that offer to validate peer review contributions could also help out here.

3. Add subjective comments to ‘objective’ peer review: This is a counter-intuitive one. I am a fan of the ‘objective’ peer review established at open access mega-journals such as PLoS ONE, where the task of the reviewer is to determine that the work reported in the submitted manuscript has been performed competently and is reported unambiguously, but without any regard for its scientific significance or impact. But, as was pointed out in one of the workshops at the meeting, reviews adhering to these criteria may well be devoid of the full richness that the expert reviewer has brought to their close reading of the manuscript. It was therefore proposed (though Chatham House rules prevent me from crediting the proposer) that this expert opinion should be added to the written review. It should not form any part of the decision to publish but inclusion of this additional information – on the significance of the study and the audiences that would be most interested in it, for example – would provide a valuable additional layer of curation and filtering for the reader.

This proposal may be tricky to implement because the editors of mega-journals already have enough trouble getting some reviewers to stick to the editorial standard of ‘objective’ peer review. But it does not seem impossible to add a separate comment box to the review form to ask for the reviewer’s opinion – as a service to the reader – while making it clear that this will have no impact on the publishing decision. To me this is only a small additional ask of the reviewer, but one whose value is obvious. Such a move would also be a further barrier to the rise of the so-called OA predatory journals.

And finally… it would be remiss of me not to mention DORA, the San Francisco Declaration on Research Assessment, which is endeavouring to wean the research community from the nefarious effects of journal impact factors. While the meeting was focused on peer review of research papers, review is also an important component of decisions about hiring, promotion and grant funding. The new steering group of DORA, of which I am now chair, was grateful for the opportunity to announce the reinvigoration of the initiative and to discuss how DORA might help the community to develop more robust methods for research and researcher assessment.

And that’s it. Of course, you should feel free to offer peer review in the comments…


Why I don’t share Elsevier’s vision of the transition to open access

Screenshot: Working towards open access

Last week Elsevier’s VP for Policy and Communications, Gemma Hersh, published a think-piece on the company’s vision of the transition to open access (OA). She makes some valid points but glosses over others that I would like to pick up on. Some of Elsevier’s vision is self-serving, but that should come as no surprise since the company has skin in the game and naturally wants to defend its considerable commercial interests. And some of it is, frankly, bizarre. You can read the whole article for yourself – it’s not long. I would recommend also having a look at the reaction from the OECD’s Toby Green. Below, I have highlighted and commented on (in blue) the portions that struck me, and tried to fill in some of the missing pieces of what is a very tricky puzzle.

The article opens constructively:

“Elsevier […] is thinking about how […] alternative access models tailored to geographical needs and expectations can help us further advance open access.”

  • ‘Tailored’ sounds like a euphemism. In part, it reflects consideration of differences in the research intensity of different nations (even in the developed world), which means that there would be winners and losers in a switch to a gold OA model funded by article processing charges (APCs); but there is no recognition of the constraints due to ongoing global inequalities. OA ameliorates that immediately as far as accessing the literature goes, though we need to think hard about how to create OA business models that address the challenges to authors from the global south.

“Elsevier and other STM publishers generally agree with many of the authors’ observations and recommendations, notably that there may be enough money in the system overall to transition globally to gold open access.”

  • How much money is ‘enough’? Readers should be aware that Elsevier makes adjusted operating profit margins of around 37%. In 2016, according to the latest annual report of the parent company, RELX, this amounted to £800m profit on revenues of £2,320m for their science, technical and medical division. It’s no surprise that the company wants to protect their business. But that motive should be clear to all stakeholders, including academics and the public. Can publicly-funded researchers, who support high-profit publishers such as Elsevier, Wiley and Springer-Nature with their labour as authors, editors and reviewers, look the taxpayer in the eye and tell them they are delivering value for money?

“…One possible first step for Europe to explore would be to enable European articles to be available gold open access within Europe and green open access outside of Europe.”

  • This simply does not compute. It is a kind of double-speak that seeks to re-define unrestricted access – the original definition of open access – as restricted access, depending on your location. Hersh has defended this notion as creative “outside of the box” thinking. Maybe so, but it’s also outside my comprehension.

Hersh then moves on to consider mechanisms for flipping subscription journals to OA:

“One successful model is the SCOAP3 program. A particularly powerful aspect of SCOAP3 (even if initially cumbersome to administer) stems from the detailed and systematic planning of the various ways in which money needs to flow through the system for journals to become exclusively gold open access. Money is carefully redirected from library budgets to a central pool administered by CERN, and from there to publishers in the form of APCs. […] Drawing on the principles of this program could help us all with the much broader challenge of transitioning all hybrid journals to become fully gold open access.”

  • The focus here, once again, is on money – and, in my view, on preserving the status quo. SCOAP3 may well have shown that subscription journals can be flipped to OA, but reaching agreement required complex and protracted negotiations and only worked because the deal was confined to a well-defined group of researchers with links to a single, large international facility. We are a long way from seeing how that might work in less focused disciplines. Hybrid OA (publishing OA articles in subscription journals) was originally proposed as a mechanism to fund flipping but it is a pathway that Elsevier seems not to recognise. Against accusations of “double-dipping” – in which hybrid OA journals keep subscription charges up even as the proportion of OA content grows – Elsevier has maintained that it simply doesn’t exist. Have they had a change of heart?

“We believe that the primary reason to transition to gold open access should not be to save money (it won’t, and there will be winners and losers as costs are redistributed) but that it would be better for research and scholarship…”

  • Well, at least that’s (rightly) stated as a belief – an act of faith. It’s not a belief I share. There are many historical examples of new technology driving down costs and becoming available to the many, not the few: printing, telephony, cars, and digital cameras, for example. Admittedly, in each of these cases a functioning market was required, which is still lacking in scholarly publishing. The reasons for this are well known and present a tough nut to crack, but if we are going to talk about money let’s also be up-front about profits and value for money. Stuart Shieber’s analytical post on the difference in value provided by commercial and non-profit publishers is illuminating on this point. It’s also worth remembering Elsevier’s tenacious defence of the Publishers Association decision tree, which to this day presents an incorrect (but revenue-raising) interpretation of the OA policy of Research Councils UK.

“Advocates for a global transition to gold open access alone should be clear that an entirely gold open access system would cost more in some regions and for some institutions – especially those that are highly research intensive and therefore pay more in a “pay to publish” model – and that they consider this a price worth paying.”

  • To my mind this is an argument for getting governments and inter-governmental bodies to take a keener interest in these affairs – they are the major paymasters, after all.

“Another reason APCs would rise is that the money flowing into the current system from outside the academic research community – i.e., journal subscriptions from industry – is estimated to be about 25 percent of the total. In a “pay-to-publish model,” systemic costs would need to be borne by the academic research community rather than shared with industry.”

  • If about 75% of the total funding for publishing comes from universities & research institutes – public institutions for the most part – then this is yet another reason for governments to take the lead in not letting costs get out of control. There is a public interest at stake here, not least because of the close links between publicly-funded research and national policies on industry and innovation around the world.

The conclusion to the piece contains this rhetorical flourish:

“A fully gold open access world inhabited only by predatory publishers who will publish anything as long as they are paid is not a healthy and prosperous world.”

  • For one thing, there is no serious prospect of a world “inhabited only by predatory publishers”. Such outfits, which scoop up APCs while providing no meaningful peer review, have gained purchase in some countries but are now feeling the heat of regulation – heat that will only increase as open peer review gains traction. Their negative impact is, in any case, arguable. Hersh also seems to be suggesting that cheaper open access journals are necessarily low-quality, but there are powerful counter-examples, such as PeerJ. (By the way, nothing is likely to be more effective at killing off predatory journals than evaluation systems that judge research papers on their intrinsic merits.)
  • For another, the talk about a “healthy and prosperous world” sounds a distinctly bum note after the earlier proposal to erect a great, golden paywall around Europe. There’s a striking contrast here with the G7 Science Communiqué, which was also published last week. We should of course always be circumspect about the pronouncements of politicians from the global stage but that document did at least articulate a vision to address global challenges and inequalities, and to use open science to bolster the robustness and utility of publicly-funded research.

And finally, we are left with the posturing (the italics are mine):

“The pace of change will ultimately be driven by researchers and the choices they make about how they wish to disseminate their research outputs. We can help them embrace open access by working closely with funders and research institutions to move beyond advocacy to possibility.”

    • Thus writes the commercial publisher advocate. Reader, beware.

 


Does science need to be saved? A response to Sarewitz.

I wrote this piece a few months ago at the invitation of The New Atlantis. It was supposed to be one of a collection of responses to a polemical essay on the parlous state of modern science by Dan Sarewitz that they published last year. But the publication was delayed, so I have decided to go ahead and publish now.

Update (30 Nov 2017): My response has now been published, alongside those of several other correspondents and a reply from Sarewitz. These help to amplify and clarify some of the key points of the debate and make for very interesting reading, even if we have all yet to reach an agreed conclusion.

Sarewitz article

In an essay published in The New Atlantis last year under the provocative title ‘Saving Science‘, Dan Sarewitz pulled no punches. He took exception to the post-war settlement based on Vannevar Bush’s 1945 claim that “Scientific progress on a broad front results from the free play of free intellects, […] in the manner dictated by their curiosity for the exploration of the unknown.” To Sarewitz, who is Professor of Science and Society at Arizona State University, the “great lie” of the power of curiosity-led inquiry has corrupted the scientific enterprise by separating it from the technological problems that have been responsible since the Industrial Revolution for guiding science “in its most productive directions and providing continual tests of its validity, progress, and value.” “Technology keeps science honest,” is Sarewitz’s claim. Without it, science has run too great a risk of being “infected with bias,” and now finds itself in a state of “chaos” where “the boundary between objective truth and subjective belief appears, gradually and terrifyingly, to be dissolving.”

Those are bruising words. Sarewitz has some important points to make about the interaction of science with the outside world (a theme he returned to in a more recent Guardian article), but the fevered rhetoric of ‘Saving Science’ seemed to me to dull his analytical edge.

Sarewitz is right to draw attention to the complex interplay between science and technology; and to the energising effects on science of the demands of governments, industry, and commerce to solve problems. These interactions are under-appreciated in some scientific quarters. He raises valid questions about the publish-or-perish culture within academic research that yields too much work that is uncited or of questionable reliability. And his criticism of the tendency in biomedical research sometimes to fixate on animal models at the expense of progress in clinical research hits some valid targets.

In the end, however, Sarewitz overplays his hand. Technology has certainly been a powerful driving force in scientific productivity. Yes, it can keep science honest because there is no better test than a product, a process or a medical treatment that just works. But in Sarewitz’s telling, curiosity-driven research has produced just two fundamental advances of transformational power in the last century: quantum mechanics and genomics. To do so, he has to overlook a plethora of blue-skies breakthroughs, such as the theory of evolution, antibiotics, plate tectonics, nuclear fission and fusion, the X-ray methods that cracked the structures of DNA and proteins, and monoclonal antibodies, that have been profoundly influential culturally and economically. At the same time, he underplays the stringency of the reality check that experiment and observation place on the free play of free intellects. It seems to me that both roads make for interesting journeys, even if it can often be difficult to decide which is truly the more rewarding.

Sarewitz has every right to question how far scientists should be permitted to roam free from the demands of the societies that fund them. But I can’t accept his prognosis that science must be directed because, apart from a couple of particular examples that both involve management by the military, he doesn’t say how.

The oversight of science is quite properly a preoccupation of governments – as major funders – even if it raises perennially contentious issues of freedom and responsibility for the research community. But Sarewitz’s prescription of management by technology to keep science honest is too simplistic, for reasons that emerge – unconsciously? – in his discussion of ‘trans-science’. To Sarewitz, trans-science is research into questions about systems that are too complex for science to answer – things like the brain, a species, a classroom, the economy, or the climate. But missing from this list is science itself, and the social, political, and industrial ecosystems in which it operates. Unarguably, these are issues and phenomena of huge complexity and importance.

So, how are we to figure out how best to make science work? This remains an important question for all of society. Polemic may be great for stirring debate, but the answer lies in the closely argued detail. I suggest we proceed on all sides by respecting the evidence, acknowledging our limitations, and renewing our determination to improve the connections between science and the world beyond laboratory walls.


BAMEed: the voices of the people

At the beginning of June I attended the first BAMEed conference. It was an unexpectedly memorable and inspiring occasion.

BAMEed Conference 2017
Final panel discussion at #BAMEed2017

Though billed as an “unconference” – a sort of self-organising gathering that fills old fogies like me with horror – the one-day meeting had in fact been meticulously planned. It was the brain-child of a newly-formed group of Black, Asian and Minority Ethnic educators (hence ‘BAMEed’) – Amjad Ali, Allana Gay, Abdul Chohan and Penny Rabiger – all of whom are or were teachers at primary or secondary schools. The attendees were themselves mostly primary or secondary school-teachers and mostly also Black, Asian or minority ethnic. I had the unusual and instructive experience of being in the minority.

The theme of the meeting was unconscious bias, a topic that has been moving up the agenda in higher education thanks in part to the ascendancy of the Athena SWAN charter. I know because I have attended a lunchtime workshop on it. But here, the subject was treated in greater depth, not least because of the breadth of lived experience that the speakers brought to it. Four weeks on I am still absorbing the many lessons from that sunny Saturday in Birmingham, but let me offer a few highlights.

First up was keynote speaker Professor Dame Elizabeth Anionwu. She told the story of her complicated upbringing as the unexpected child of Cambridge students, a Nigerian man and a white Catholic woman. With wit and candour she spoke of the cruelty she endured from nuns in her care home and from a drunken step-father, of discovering and getting to know her Nigerian father, and of the sheer bloodymindedness that propelled her through careers in nursing and academia.

Professor Damien Page* followed-up with a talk on unconscious bias training in schools and universities, cautioning that it has to be done thoroughly if it is not to be counter-productive. Most training communicates that biases are ‘naturalʼ and therefore risks providing people with excuses to discriminate: “everyone is biased, so I shouldnʼt be worried that I am too.” It is important not to distract people from structural and historical discrimination.

For me one of the most interesting presentations was given by Professor Paul Miller, who introduced the notion of “white sanction”, a concept he has developed in a recent paper surveying the experience of black and minority ethnic staff in secondary and higher education.

“White sanction” expresses the idea that BAME staff need white allies (or champions), or feel obliged to ‘join the clubʼ, to progress in their careers. Such allies can play a valuable role, at least in the transition to true equality, but Miller recognised the difficulty of the position. Some members of the audience expressed understandable resentment at the notion that allies or conformity are needed, since these seem to legitimise existing power structures. In the view of many, the existence of “white sanction” is a symptom that we lack a functioning meritocracy. I agree, though I’ve certainly come across dissenting views in academia. Miller argued itʼs better to focus on the structural problem to avoid the risk of calling out all whites as racist and putting even ‘alliesʼ on the defensive. His paper explores these ideas in more detail, defining four levels of institutional interaction with BME staff: engaged, experimenting, initiated, uninitiated. I wondered on Twitter where on the spectrum Imperial College, my own institution, might lie. Though I only got two answers, the indications are that we have work to do.

In the afternoon, Dr Christine Callender noted the glacial pace of change in opportunities for ethnic minorities by citing a call for action in the 1985 Swann Report on BME education, and a more recent one in a pithy article by Kara Swisher earlier this year that bemoaned the “mirror-tocracy” perpetuated by white men in the upper echelons of tech giants like Uber and Google. Real change has to come from engagement at the top, insisted Callender, if institutions are to convert aspirational statements into action.

The plenary sessions were interspersed with short workshops – I attended the ones on recruitment and building diverse teams, led in each case with informed passion by Patrick Ottley-O’Connor and Hannah Wilson – and the day was rounded off back in the main hall with a panel discussion (see photo above).

As valuable as the talks were, the most memorable aspect of the meeting was hearing the stories of teachers who have been short-changed by the status quo. None spoke intemperately, even in cases where there was just cause. Instead, there was a gritty and positive determination to tackle the problem head on, so that we all might do the right thing. Of course, none of this is new to the people involved, but it was a powerful reminder to me of the value of unfiltered testimony. The BAMEed Network has no need of my approval, but they have it anyway.

 

*Who bore an uncanny and distracting resemblance to my younger brother. 

 


University rankings are fake news. How do we fix them?

This post is based on a short presentation I gave as part of a panel at a meeting today on Understanding Global University Rankings: Their Data and Influence, organised by HESPA (Higher Education Strategic Planners Association).

HESPA University Rankings Panel - May 2017. Yes, it’s a ‘manel’ (from the left: me, Johnny Rich, Rob Carthy). In our defence, Sally Turnbull, who was chairing, sat off to one side and two participants (one male and one female) had to withdraw at short notice. Photo by @UKHESPA (with permission).

The big news on the release of the Times Higher Education World University rankings for 2017 was that Oxford, not Caltech, is now the No. 1 university in the world.

According to the BBC news website, “Oxford University has come top of the Times Higher Education world university rankings – a first for a UK university. Oxford knocks California Institute of Technology, the top performer for the past five years, into second place.”

Ladies and gentlemen, this is what is widely known as ‘fake news’. There is no story here because it depends on a presumption of precision that is simply not in the data. Oxford earned 1st place by scoring 95.0 points, versus Caltech’s 94.3. (Languishing in a rather sorry sixth place is Harvard University, on 92.7).

The central problem here is that no-one knows exactly what these numbers mean, or how much confidence we can have in their precision. The aggregate scores are arbitrarily weighted estimates of proxies for the quality of research, education, industrial contacts and international outlook. And they include numbers based on opinions about institutional reputation.

In all likelihood these aggregate scores are accurate to a precision of about plus or minus 10% (as I have argued elsewhere). But the Times Higher (and most other rankers – I don’t really mean to single them out) don’t publish error estimates or confidence intervals with their data. People wouldn’t understand them, I have been told. But I doubt it. That strikes me rather as an excuse to preserve a false precision that drives the stories of year on year shifts in rank even though they are, for the most part, not significant.
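To make that point concrete, here is a back-of-the-envelope check in Python. The ±10% relative error band is my own rough estimate, not a figure published by any ranker, and the overlap test is deliberately crude:

```python
def scores_overlap(a, b, rel_err=0.10):
    """True if two aggregate scores are indistinguishable within
    a symmetric relative error band of rel_err on each score."""
    a_lo, a_hi = a * (1 - rel_err), a * (1 + rel_err)
    b_lo, b_hi = b * (1 - rel_err), b * (1 + rel_err)
    return a_lo <= b_hi and b_lo <= a_hi

# The 2017 aggregate scores quoted above
print(scores_overlap(95.0, 94.3))  # Oxford vs Caltech -> True
print(scores_overlap(95.0, 92.7))  # Oxford vs Harvard -> True
```

Within any plausible error band the top handful of institutions are statistically indistinguishable, which is precisely why the year-on-year ‘overtaking’ stories are not news.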

Now Phil Baty, the editor of the Times Higher Rankings (and someone who, to give him his due, is always happy to debate these issues) is stout in his defence of what the Times Higher is about. A couple of months ago he wrote in an editorial criticising the critics of university rankings:

“beneath the often tedious, torturous ad infinitum hand wringing about the methodological limitations and the challenges of any attempt to reduce complex universities to a series of numbers, the single most important aspect of THE’s global rankings is often lost: the fact that we are building the world’s largest, richest database of the world’s very best universities.”

But who can define ‘best’? What is the quantitative measure of the quality of a university? Phil implicitly acknowledges this by conceding that “there is no such thing as a perfect university ranking.” I would ask, is there one that is good? Further, if the point is to assemble a database, why do the numbers in the different categories have to be weighted and aggregated, and then ranked? Just show us the data.

The problem, as is well known, is that these rankings have tremendous power. They are absorbed by university managers as institutional aims. Manchester University’s goal, for example, stated right at the very top of their strategic plan is “to be one of the top 25 research universities in the world”.* How else is that target to be judged except by someone’s league table? In setting such a goal, one presumes they have broken down the way that the marks are totted up to see how best they might maximise their score. But how much is missed as a result? Why not be guided by your own lights as to what is the best way to create a productive and healthy community of scholars? Surely that is the mark of true leadership?

Such an approach would enable institutions to adopt a more holistic approach to what they see as their missions as universities. And to include things that are not yet counted in league tables, like commitment to equality and diversity, or to good mental health, or – in these troubled times when we are beset on all sides by fake news – to scholarship that upholds the value of truth.

A couple of years ago, a friend of mine, Jenny Martin, a Professor at Griffith University in Australia, suggested some additional metrics to help universities complete the picture. For example:

How fair is your institution – what’s your gender balance?
How representative are your staff and student bodies of the population that you serve?
How much of their allocated leave do your staff actually take?
How well do you support staff with children?
And… How many of your professors are assholes?

Now, Jenny may have had her tongue in her cheek for some of these, but there is a serious point here for us to discuss today. How often do rankers think about the impact on the people who work in the universities that they are grading?

I would argue that those who create university league tables cannot stand idly by (as bibliometricians used to do), claiming that they are just purveyors of data. It is not enough for them to wring their hands as universities ‘mis-use’ the information they provide.

It is time for rankers to take some responsibility. So, I call for providers to get together and create a set of principles that governs what they do. A manifesto, if you will, very much in the same vein as the Leiden manifesto introduced in 2015 by the bibliometrics community.

To give you a flavour, the preface to the Leiden manifesto reads:

“Data are increasingly used to govern science. Research evaluations that were once bespoke and performed by peers are now routine and reliant on metrics. The problem is that evaluation is now led by the data rather than by judgement. Metrics have proliferated: usually well intentioned, not always well informed, often ill applied. We risk damaging the system with the very tools designed to improve it, as evaluation is increasingly implemented by organizations without knowledge of, or advice on, good practice and interpretation.”

What is true of bibliometrics is true of university ranking. Therefore I call on this community here today to take action and come up with its own manifesto. Since we are in London, we could even call it the London manifesto. (After Brexit, we’re about to become the centre of nowhere and nothing, so it would be nice to have something for people to remember us by!)

I stand ready to help with its formulation. I urge you to consider this seriously and quickly. Because if providers won’t do it, maybe some of us will do it for you.

Thank you.

 

A couple of afterthoughts on the meeting:

It was noticeable that the rankings provider who spoke after the panel addressed more of the technical shortcomings and cultural issues of university league tables than those who presented earlier in the day. It is important to keep the debate on rankings and university evaluation alive.

I was surprised that there were relatively few questions after each talk from the audience, which consisted mostly of people involved in strategic planning at various universities. I hope that doesn’t indicate a certain degree of resignation to the agenda-setting power of rankers and, as a result, a reluctance to consider the broader impacts. But I remain concerned. In answer to my question about why one of the providers had bemoaned the fact that some university leaders rely too heavily on rankings, I was told – candidly – that in some cases he felt it was a matter of poor leadership.

I was struck by an example mentioned by my co-panellist, Rob Carthy from Northumbria University, which pointed out one of the perverse effects of rankings. His university works hard to select and recruit Cypriot students even though they often do only one A level (a feature of the school system). In doing so, however, the average A-level tariff of their intake drops, which, on some league table measures, will reduce their score. The rankings therefore disincentivise searches for student talent that look beyond mere grades. I suspect they may also be reducing the motivation of some universities to widen participation.

 

*To be fair to Manchester, on this web-page the phrase appears to have been edited to read: “Our vision is for The University of Manchester to be one of the leading universities in the world by 2020.”

Posted in Science, Scientific Life | 2 Comments

The Cathedral on the Marsh

I’ve already shared this video on Twitter and Facebook but wanted to post it here as a more permanent record. Two weeks ago I fulfilled the ambition, held since I had seen Nic Stacey’s and Jim Al-Khalili’s quite wonderful BBC documentary on thermodynamics, to visit the steam engines at the Crossness sewage pumping station. Three of the four engines stand idle, in rusted silence, while the fourth – oiled and glossed with fresh paint – huffs and shucks with mechanical life. With my iPhone camera I tried to capture some of the poetic rhythm of its motion.

I also took some photographs, which you can find on flickr.

There is something noble in creating a vaulted cathedral for these magnificent engines, which did such foul work. I was reminded of a line from Kenneth Clark’s television history of Western art in which he describes the grand rooms within the old naval hospital at Greenwich (a few miles upstream of Crossness) and pauses to reflect: “What is civilisation? A state of mind where it is thought desirable for a naval hospital to look like this and for the inmates to dine in a splendidly decorated hall.”  The hospital opened its doors in 1712 but was converted into the Royal Naval College in 1873, just a few years after Crossness itself started pumping.

I shall have to pay it a visit – that can be my next ambition.

Posted in History of Science, Science | Comments Off on The Cathedral on the Marsh

The March for Science: advocacy masterstroke or PR misfire?

Last night I made my way to an upstairs room at The Castle pub near Farringdon to participate in a debate organised by Stempra on the forthcoming March for Science.

The panel at the March for Science debate (photo by Anastasia Stefanidou)

The question before the panel and the assembled audience was whether the call to arms, first issued by scientists in the USA but now heard and answered across the globe, is an “advocacy masterstroke or PR misfire”. In truth, there was not much actual debate, though there was plenty of discussion. The panel, consisting of myself, Fiona Fox (director of the Science Media Centre), and environmental science writer and campaigner Mark Lynas, broadly agreed that the march is a good thing, albeit for different reasons and with different qualifications – and we’ll all be marching in London on Saturday. Here below, for what they are worth, are the opening remarks that I prepared.

For other interesting takes on the March for Science, I can recommend this article by Michael Halpern and Ed Yong’s interview with Hahrie Han, a social scientist who studies activism.

“When I first heard about the March for Science in London I was extremely lukewarm about the whole idea. 

I could see the point of Americans marching for science, given the threats posed by the Trump administration – both to evidence-based policy on issues like climate change and vaccination, and to the science budget. 

Those threats are hard to judge given the wayward nature of Trump’s administration. He has shown himself to be ham-fistedly ineffectual, both on his travel ban from countries with large Muslim populations and on the attempt to overturn Obamacare. Even so, if I were working in the USA right now, I would not have hesitated to sign up in support because there are specific policies to protest. 

But I did wonder about the spill-over of the protest to the UK and the rest of the world. I could see why the Women’s March, also initiated in the US in reaction to Trump, had rapidly spread to the rest of the world – because the issues of women’s rights and gender equality are ones that are very much alive – in many forms – across the globe.  

But although I am a scientist – and therefore very pro-science! – I didn’t immediately see the point of the march for science in the UK. (I can’t speak for the rest of the world). 

It’s not that I’m one of those researchers whose focus is entirely on my research. I’ve campaigned before for science. I was one of the founder members of Science is Vital and helped organise our rally to protest against the threat of cuts to the science budget back in 2010 – and on several other occasions since then. 

We’re a very small and entirely amateur organisation (though hopefully not too amateurish!), so we are mindful to concentrate our efforts. And to do that we rather deliberately concentrate our messaging, which has almost always been about making the case for public investment in R&D. And to that end, we have tried to tailor our campaigns so that they are heard by politicians and I think we’ve had some success in doing that. 

But what is the point of the march for science? The political point, I mean? What are the aims? As stated on the web-site, these are:

“The March for Science celebrates publicly funded and publicly communicated science as a pillar of human freedom and prosperity. We unite as a diverse, nonpartisan group to support science that upholds the common good, and political leaders and policymakers who enact evidence-based policies in the public interest.”

Well, who could be against that? The organisers’ statement is at once the march’s greatest strength and greatest weakness. 

It’s broad enough and vague enough to accommodate all-comers. Hopefully (and I am mostly a hopeful person), it is roomy enough to find place for the diversity of voices within the scientific community and those who feel peripheral to it or even excluded (though that could well be wishful thinking on my part). I know there has been lots of discussion about failings on diversity and inclusion by the US organisers, but the UK organisers appear to me to have done a better job. There seems to me to be a decent mix of speakers for the London march and I hope that from them – and from the placards brought by marchers – a diversity of voices and viewpoints will be heard. We’ll see…

But by being so broad and so vague, the organisers avoid asking some of the hard questions that come up when the gears of science and politics mesh (or crash, depending on who’s driving). On their web-site the organisers have mentioned the concerns about the emergence of “post-truth” or “post-fact” types of public discourse in the EU referendum and in Trump’s election campaign. That’s certainly worth protesting, but the follow-up questions are hard. 

For example, should we regulate the media more forcibly than we do to ensure that they are factual? Or does that play into the hands of those who are all too ready to cry “fake news”? Should we insist that scientists volunteer to do more spots on the Today programme? How far should scientists stray into the political domain, and how far outside their field of expertise should they be allowed to speak? Does speaking out compromise their scientific integrity? It certainly opens them to attack – as any climate scientist will tell you. 

I think all of these questions have answers (and I hope we’ll get to some of them in the discussion). The answers aren’t easy and they involve issues that we probably need to keep constantly under review. 

I don’t think these questions will be addressed in the march on Saturday. Marching is a rather blunt instrument for democratic discourse. 

And these aren’t even all the questions that we need to be asking about science. I think there’s another debate to be had about how much the public should be enabled to influence the science that they fund. That was one of the disconnects that emerged most strongly in my mind in the painful aftermath of the EU referendum. To me, inequality is the single most important issue facing Britain today – in education, in employment, in healthcare – and it is one that is not being touched by all the hullabaloo over Brexit. And yet it once was. It was one of the issues that seeded movements like Science for the People in the US or the British Society for Social Responsibility in Science, which brought scientists and the public together in the 1970s to try to find ways to do science that was relevant to people’s lives. (For more background on these movements, see Alice Bell’s article in Mosaic).

Now, I’m not suggesting that science doesn’t touch people’s lives in many different and important ways today – clearly it does, and that’s worth shouting about. But there is a sense of powerlessness out there, felt by many people. And I wonder what science and scientists can or should do to address that? 

So, I see the march for science as that quintessentially scientific thing: an experiment. An experiment that may well fail, that may have to be repeated with improved methods, or to test a modified hypothesis. But an experiment that is still worth trying because it makes you think and, I hope, will get people talking.” 

 

Posted in Science & Politics | 2 Comments

Grim resolve at the House of Commons on the scientific priorities for Brexit

On Tuesday morning last week MPs, MEPs, and representatives of various organisations with a stake in post-Brexit UK science gathered in the Churchill Committee room at the House of Commons for the launch of the “Scientific priorities for Brexit” report, published by Stephen Metcalfe MP, chair of the Commons select committee on science and technology.

Venki Ramakrishnan, President of the Royal Society, addresses the meeting in the Churchill Room.

The report, released under the auspices of the Parliamentary and Scientific Committee, is the result of widespread consultation with universities, learned societies, campaign groups and industrialists. It pulls into sharp focus the principal challenges raised by Brexit and sets out priorities for the British negotiators who will have to hammer out an exit deal with the EU over the next 18 months, and for the government’s domestic R&D policy.

The recommendations are split into four key areas. First and foremost are the rights of EU nationals working in the UK R&D sector – at all levels.

It was heartening to see people identified as the central concern. However, the recent failure of Parliament to pass an amendment to the Article 50 bill which would have guaranteed their residency rights nationwide hung over the room like a grey cloud. It was, in my opinion, a callow piece of political miscalculation, a view that I do not hold alone. At the meeting Labour MEP Clare Moody relayed the views of UK nationals living in the EU whom she’d spoken to. All were clear in their belief that it would in fact be in their interests for the UK government to behave honourably with regard to EU nationals living in Britain.

The remaining three areas are investment in research and innovation, collaboration and networks, and regulation and trade. In brief, the report emphasises the importance of finding some way to maintain access to EU funding mechanisms, the immense value of the multilateral collaborative networks that the EU is so good at fostering, and the need to ensure that the UK remains at the leading edge of developing regulatory frameworks to facilitate trans-national science and trade.

I won’t go into further details because the summary statement is succinct (and supported by a longer evidence report) – and because I have covered some of this ground before. But I want to reiterate the point that the government’s current and welcome commitment to underwrite UK participation in the EU’s Horizon 2020 research programme will not be sufficient to protect UK-based researchers looking to forge long-term collaborations in the EU. This is because their European colleagues cannot have confidence in the ability of their British partners to continue to collaborate on funding applications made post-Brexit. The Chancellor’s announcement in the Autumn statement of significant new money for UK research and innovation over the next four years is certainly encouraging – as indeed is the CBI’s new-found enthusiasm for a total UK R&D spending target of 3% of GDP (re-iterated at the meeting) – but the government needs to go faster and further. In particular, it needs to articulate clear plans for shoring up Britain’s international research prospects beyond 2020.

Those prospects seem to take a hit every time one of the more thoughtless Brexiteers opens his mouth. Boris Johnson has been upbraided for his wayward rhetoric on Britain ‘liberating’ itself from the EU. And this past week Bill Cash MP surprised the European Scrutiny Select Committee by suggesting that UK negotiators should remind the EU of the cancellation of German debt after World War 2. Such arrogant pronouncements, coupled with the fact that Brexit has been dubbed ‘Empire 2.0’ in the corridors of Whitehall, serve only to confirm the impression that Brexit is driven by an ignorant and backward-looking vision of Britain and its history. They are squandering the good will forged over many years thanks to research collaborations and many other joint activities between Britain and the EU.

But it’s not just the image of Brexit, put about by the likes of Johnson and Cash, that is the problem. While it was encouraging last Tuesday to see politicians come together from across the political spectrum to fight for science in the forthcoming negotiations, the mood in the plush surroundings of the Churchill Committee room was one of grim resolve. The difficulties we face are deep-rooted because of the way that Brexit rips through the international fabric of science, pulling us away from our colleagues, from our collaborations, and from a greater shared purpose. I thought by this stage I would be less angry. But I’m not, and this fight goes on.

Posted in Brexit, Science & Politics | Comments Off on Grim resolve at the House of Commons on the scientific priorities for Brexit