Disruptive publishing

To build a successful career in scientific research you need to understand the scientific publishing system. It is going through a period of change and innovation but has so far remained largely intact. Recently a colleague and I ran some ‘Disruptive Publishing’ coffee break sessions to highlight some of the changes in science publishing to our researcher community. I produced a factsheet summarising interesting journal developments and my colleague created a colourful ‘snapper’ that gave us a way to open up conversations with unsuspecting researchers.

Disruption

Wikipedia defines disruptive innovation as “innovation that creates a new market or value network and eventually disrupts an existing market and value network, displacing established market-leading firms, products, and alliances” and it notes that “disruptive innovations tend to be produced by outsiders rather than existing players”.

Michael Clarke, in his 2010 Scholarly Kitchen blogpost, pointed out that Tim Berners-Lee created the Web in 1991 with the aim of “better facilitating scientific communication and the dissemination of scientific research … [it] was designed to disrupt scientific publishing”.

Clarke observed, however, that there had as yet been no significant disruption: change and innovation, yes, but not disruption. He was writing in 2010, but that is still true today. The main players – the large multinational commercial publishing houses – still dominate science publishing. Even the biggest open access publisher, SpringerNature, is one of the top four science journal publishers.

Image source: http://blogs.plos.org/plos/files/2018/04/Power-to-the-Preprint_blog-image-002.png

Publishing

Clarke went on to list five different functions of the publishing system and suggested that the functions which are more ‘social’ – validation, filtration and designation – are less amenable to disruption than those that are more administrative – dissemination and registration.

We are starting to see more change in those areas. Validation (peer review) has been shaken by the advent of megajournals such as PLOS ONE and Scientific Reports. These journals are based on the idea that articles should be published if they are scientifically valid, leaving aside issues of novelty or newsworthiness. Notions of peer review are also being stretched by increasing posting of preprints (e.g. bioRxiv) and publishing platforms like Wellcome Open Research.

Filtration (deciding what is worth reading) is still heavily influenced by journal branding and thus by editorial selection, but altmetrics provide another tool while work on recommendation engines is proceeding in several companies. Machine learning approaches to search tools are also starting to appear.

Designation (research assessment) is perhaps the thorniest problem, and typically journal branding remains important when assessing a portfolio of research. Many institutions and funders have signed up to the principles of DORA but fewer have taken steps to put them into practice, and practical problems remain. It is good to see DORA taking steps to engage more with the research community and spread good practice.

Looking further ahead, some believe that blockchain-like solutions may play a part in transforming scholarly publishing.

Coffee break sessions

We don’t have a revolution in publishing yet, but there is plenty of change and innovation, and I think it’s hard for a busy researcher to stay on top of everything that’s going on. Our coffee break sessions and the factsheet about Disruptive Publishing were intended to brief researchers about some of the more interesting developments. During five separate sessions (one on each lab floor plus one on the ground floor) we talked to more than 50 people. The topics that generated most interest were:

  • the scooping policy of PLOS
  • preprints and bioRxiv
  • publishing different research outputs, not just articles
  • Frontiers for Young Minds – the science journal for school students.

The ‘snapper’ or ‘fortune teller’ that my colleague created provided a useful gambit to start conversations. We asked people to choose a number between one and eight and then talked about the issue corresponding to that number in the snapper.

The Disruptive Publishing snapper, created by Kate Beeby (@ka_be).

It was definitely worth running these sessions but we learnt that the only way to make them work was to ‘ambush’ people while they were making a cup of coffee or washing their cup, or just walking along. They were nearly all interested to talk to us and to learn about what we had to tell them.


Factsheet

Scientific journals started in 1665 with the Royal Society’s Philosophical Transactions. The format of research articles has changed little since – fonts changed, different languages gained the ascendancy, colour started to appear – but the outline of a journal article remained instantly recognisable. There was a big change in delivery in the 1990s when the internet came into play, but not much underlying change. The growth of open access in the past decade and the advent of new online-only publishing ventures have accelerated the pace of change.

Pure OA journals

Journals like PLOS Biology, Nature Communications, eLife and … publish only fully open access articles. By publishing in these journals you can be sure that all OA obligations are met, and your research is as open as possible (but you still need to be sure to choose the CC BY licence). PLOS Biology will consider for publication manuscripts that confirm or extend a recently published study (“scooped” manuscripts, also referred to as complementary).

Megajournals

These journals typically have a wide subject scope and focus on ‘technical soundness’, rather than criteria such as ‘importance’ and ‘interest’. The two leading examples are PLOS ONE and Scientific Reports.  They are the two largest journals of any kind, each publishing over 20,000 articles p.a. Most other megajournals are somewhat smaller, but still focus on soundness as the main criterion for publication.

Peer review

Several journals have developed new systems to improve peer review. The ‘Frontiers in’ journals have an initial independent review phase followed by a second phase in which reviewers and author interact. Reviewers’ names are published alongside the article. eLife delivers fast peer review decisions and consolidates all revision requests into one set of revisions. Post-review decisions and author responses for published papers are available for all to read. F1000Research publishes all submissions as preprints and then invites referees to judge the papers. Their reports and names are published alongside the article, together with the authors’ responses and comments from registered users. Wellcome Open Research follows a similar process. There is a Crick gateway on Wellcome Open Research.

Preprints

bioRxiv is a free online archive and distribution service for unpublished preprints in the life sciences. Authors are able to make their findings immediately available to the scientific community and receive feedback on draft manuscripts before they are submitted to journals. Most funders now accept preprints in grant applications. There is a Crick channel on bioRxiv highlighting our preprints.

Publishing different outputs

Wellcome Open Research accepts a wide range of submissions, including software, data notes, study protocols, negative or null studies, and replication and refutation studies. BMC Research Notes publishes scientifically valid research outputs that cannot be considered as full research or methodology articles. The Research Ideas and Outcomes (RIO) journal publishes all outputs of the research cycle, including project proposals, data, methods, workflows, software and project reports. The Journal of Brief Ideas publishes citable ideas in fewer than 200 words. Science Matters publishes single, validated observations – the fundamental unit of scientific progress.

Data journals

These journals publish data papers – papers that describe a particular online-accessible published data set, or group of data sets. Examples include Scientific Data, GigaScience and Data in Brief.

Protocols

The protocols.io website allows you to create, copy, modify and evolve laboratory protocols, describing the critical details of experimental procedures that are often overlooked in articles’ Methods sections.

Young audiences

Frontiers for Young Minds is an open access science journal aimed at school students. It invites scientists to write about their research in language that is accessible for young readers. Articles are reviewed before publication by a board of kids and teens – with the help of a science mentor.

Impact factors

Some journals, notably eLife, ignore journal impact factors and do not use them in their marketing. The San Francisco Declaration on Research Assessment (DORA) is developing and promoting best practice in the assessment of scholarly research, and argues against the use of journal-level metrics like the impact factor.

Posted in Journal publishing, Open Access | 1 Comment

The meaning of sixty

I recently celebrated my sixtieth birthday. I had a very nice birthday party in a local pub with several friends and family members. Having plied them with food and drink I thought I’d earned the right to give a short homily about being sixty.  Here it is. At the end are a few photos from the evening.

Sixty doesn’t mean what it used to mean. I’m not sure if that is because the world has changed its view of 60, or because I have gained a different perspective of the age now I am there. It’s probably a bit of both.

As a number, rather than an age, sixty has a certain attraction. I recall learning that it was the foundation of the Babylonian number system. It seemed an awfully big number to play that role. I believe that’s why we have 60 seconds in a minute, and 60 minutes in an hour.

It’s clearly a largish number, though not as large as 100.

Old age

I was a late baby, the sixth and last child in the family, arriving when my parents were in their early forties. Hence in my last year or so at home my parents were turning 60. I regarded 60 as quite old back then. The phrase ‘old age pensioner’ was common, and since my father retired at 60 I guess that was what he was, officially. I looked at people who were 60 years old, like my parents, and I saw a huge gulf between my young age and their great age.

Mountains

One of my best ever holidays was a trip to Pakistan, 17 years ago. I travelled to the Northern Areas, a mountainous region, and did a few guided treks up into the mountains. Rising at dawn and watching the morning sun light up the snowy peaks one by one was a magical experience. Having a cold shower with water piped straight from glacier melt was also memorable.

Sunrise at Eagles Nest, Hunza, Gilgit-Baltistan, Pakistan – photo by @thehunza on Twitter.

When you’re at the bottom of a mountain you look up and it seems far off. When you get to the top your perspective is different. You’ve travelled a continuous route from the bottom to the top and perceive them as linked. You can look down at the route you travelled to get there, and you can gaze over a vast area – taking it all in in one view. It feels great to be there. It’s the same with age.

Thailand

30 years ago I learnt some more about the age of 60. I took a holiday in Thailand and by coincidence I was in Bangkok at the time of the 60th birthday of the King of Thailand – King Bhumibol, who died last year. There were massive celebrations and crowds of people on the streets. I discovered that in Thai tradition 60 years is seen as a very significant age – the sexagenarian has acquired wisdom and experience, survived the slings and arrows of 60 years, and still has plenty of life left in them. It is an age to be proud of and to celebrate. I thought – “That sounds good! I must try it one day”.

The Royal Palace in Bangkok, Thailand.

China

Sixty is seen as significant because on that birthday you have been around the Chinese zodiac, with its 12 animal years, 5 times – once for each of the 5 elements. In Japan this birthday is called kanreki and to celebrate this special occasion the 60-year-old wears red clothes. The tradition of wearing red on your sixtieth birthday is also a Chinese thing. A Chinese friend tells me I must wear red every day for 365 days from my 60th birthday – hence the red theme tonight. By getting you all to wear red I hope I can claim a few days off.

Reaching 60

I have now been 60 for nearly 5 days. I don’t think it’s going away so I’d better make the most of it. I have picked up my first free prescription. I have registered for free travel on the tube etc. I will have a free eye test soon. I don’t feel all that different from before, but I feel a certain smug satisfaction at having got here, to the top of the mountain.  I am a bit surprised too – part of me feels I’m still 18, part of me feels like 30 or 40 (but not my knees). It does seem incredible that I am actually 60 but I don’t feel remotely like what the term ‘old age pensioner’ summons up.

They say that 60 is the new 40, but I’m not sure what that would mean for our timekeeping devices.  I’ll settle for being 60 but now having a better understanding that all ages are really the same in what really matters – enjoying life, expressing love, taking some control of life, using your talents well.

I wish all of us a long and happy life.

Photos from the party, and the birthday cake.

Posted in Uncategorized | 2 Comments

Blog June

Apparently BlogJune is a thing. I’d not heard of it before – it’s a challenge to

blog every day in June – or as often as you can manage, or comment on someone else’s blog every day

The first part (‘every day’) really would be a challenge, but the qualification (‘or as often as you can’) sounds a lot easier. I’m not sure about commenting on someone else’s blog, as that could entail a great deal of reading before I found something that inspired me to comment sensibly.

One of my colleagues alerted me to this challenge last week. She has bravely risen to the challenge but I’m less brave. I only spotted her tweet about it at the end of the day on 1st June so a strict compliance with the challenge was not really on for me. But, hey, maybe I can rise to the lesser challenge. It might inspire me to post a bit more.

I see that Athene of this parish has written recently about writer’s block – a phenomenon where a normally prolific writer, such as she, has a dry patch. My experience is the inverse – most of the time I write nothing but occasionally I manage to finish a blogpost and get it online.

Maybe more regular posting during BlogJune will get me to the point where I can call my standard taciturnity a writer’s block?


Posted in Blogology | Tagged | 1 Comment

Digital skills – how do we …

The concept of digital skills is a bit slippery.  The term has changed its meaning as the digital universe has expanded. Jisc is currently doing some work in this area, led by Caroline Ingram.

I attended an interesting workshop recently to look into the challenges of developing researchers’ digital skills. Caroline led the workshop together with Rob Allen and David Hartland, both of whom have long experience in training in HE.

We considered what we meant by research support roles, what digital skills researchers require, which organisations provide training, and what Jisc’s role should be (if any).

Roles

We agreed that it was not helpful to compile a long list of research support roles. These things change quickly and also vary across institutions. For instance, data scientist, data engineer, data librarian can all have distinct meanings, but they also overlap and may or may not each be necessary in a particular environment.

We discussed what we meant by ‘research support’. Some workshop attendees considered that people in libraries and IT services who work with researchers are also working in research and scholarship. Talking about ‘support’ has overtones of ‘Upstairs, Downstairs’ or Downton Abbey. The term ‘Specialist Research Professional’ was preferred over ‘research support’.

Another helpful observation was that 95% of the time researchers have the skills to do what they need. They draw on their ‘foundational knowledge’ or will be able to figure out how to do something. The other 5% of the time they do need help, with hard problems.

Sometimes a Specialist Research Professional may become a member of a research team. This could be a part-time role as part of the research team or a full-time intra-team support role.

Skills

There are different ideas as to what ‘digital skills’ means. I recall that 20 years ago we thought it meant knowing how to use Microsoft Office and being able to understand a Google search. Some at the workshop see digital skills as something close to information literacy, others see them as all about open access, and others again see them mostly in terms of research data management. Most of the discussion focused on research data handling.

Skills training is not the whole story. Sometimes what’s needed is a more general awareness raising, for instance about copyright.

We agreed that the Jisc list of skills is a useful starting point.

Here’s a list of skills that we thought might be needed by researchers: personal information management, discovery (searching for data), appraisal of data, coding and data analysis, information governance, non-technical issues connected with digital skills (ethical, legal and social implications), reproducibility, productivity/workflow tools, managing collaborations. That last one is interesting – we don’t often talk about the skill required to manage digital participation and digital communities, though the AAAS are doing some work in this area.

Some more specific topics that I’d like to find training for include: data visualisation, mash ups (or are these out of fashion now?), text and data mining.

A more fine-grained approach needs to acknowledge that different individuals will have different depths of expertise in different areas. Also, depth of expertise depends on the context.  E.g. even someone knowledgeable about general research consent issues might look to the clinical trials team for more in-depth consent advice.

Who is doing what?

Current training providers include Vitae, Jisc, the Digital Curation Centre, ELIXIR, EBI, eMedLab and CODATA–DA.

Software Carpentry, Data Carpentry and various camps/unconferences were also enthusiastically endorsed by many at the workshop. MOOCs were mentioned, but it was noted that they were just about learning, not about creating a community.  That is where the carpentry events excel.

Other potential providers might be providers of specialist technical platforms – typically these churn out massive amounts of data and the staff of the platform will have skills in processing the data so would be well-placed to provide training. Software providers such as Matlab also provide training, and perhaps scientific instrument makers might.

Of course many individual institutions also provide training for their own researchers.

Jisc’s role

The following were suggested:

  • Helping to accredit courses
  • Collating a list of skills that can be taught in three-hour packages
  • Matching that list to existing providers
  • Identifying good local sources of training and helping them to go national
  • Adopting the carpentry approach of community-developed open training materials
  • Establishing a ‘locum’ system for those who need to find short-term help with a project
  • Helping to facilitate unconferences around digital skills
  • Extending the model of specialist data centres to specialist training centres

Conclusion

The results of the workshop will feed into a project report in due course.

My conclusion was that it is very helpful to bring a disparate group of people together to discuss digital skills, and I think we learnt quite a bit from each other. Equally, it is very hard to say something about digital skills that applies across all areas of research without starting to sound bland. And, finally, it would be helpful if there were some agreed shared terminology about digital skills (but I suspect this is a forlorn hope).

Posted in Information skills, Research data, Uncategorized | Tagged , , | 1 Comment

Six questions about preprints

2017 is shaping up to be the year that preprints in biomedical sciences go mainstream.

At the beginning of the year MRC and Wellcome Trust both moved to accept preprints in grant applications and scientific reviews. Another major UK biomedical funder is likely to follow suit. In the USA the NIH has recently done the same. The ASAPbio initiative has issued its call for proposals for a central preprint service and it will be interesting to see what comes out of that. Meanwhile the Chan Zuckerberg Initiative has made a major commitment to support bioRxiv financially. Several new discipline-specific preprint servers have been launched. All this and there’s still nearly 8 months of 2017 to go!

The more I think and learn about preprints, the more questions seem to crop up. Maybe you can help me find some answers.

Last year’s SpotOn conference on the future of peer review featured a session on preprints and peer review which I chaired and introduced. The discussion was interesting although it felt inconclusive. The recently-issued report from the conference has a piece by me: “The history of peer review, and looking forward to preprints in biomedicine” – now available as a post on the BMC blog. It’s nothing profound but I speculate slightly:

We may be moving to a world where some research is just published ‘as is’, and subject [only] to post-publication peer review, while other research goes through a more rigorous form of review including reproducibility checks. This [change] will be a gradual process, over a period of years. New tools … using artificial intelligence, will help by providing new ways to discover and filter the literature. A new set of research behaviours will emerge around reading, interpreting and responding to preprint literature. The corridors of science will resound with caveat lector and nullius in verba.

Whether / when to post?

Last week I gave a short talk about preprints to the postdocs at the MRC LMCB at UCL (here are my slides). They were a lively group, already knowledgeable about preprints and open access but keen to learn more. I focused on a quick history of preprints and general stuff about them. The other speaker, Fiona Watt, provided real insight into the experience of a researcher who posts preprints, also drawing on her long experience as a senior editor on various top journals. She suggested that researchers should carefully consider which papers they preprint (is it a verb yet? I think it needs to be). Papers that have a heavy data component are appropriate for preprinting, but those with a high conceptual element might not be. I wonder if that is a generally accepted notion, and whether there are other criteria that people use for deciding whether to post a preprint?

Scooping and citation

Related to this, the question of being scooped came up. Audience members shared their different experiences. Some believed it to be really uncommon, others reported having been scooped. In theory posting a preprint can give you priority over anything published subsequently. Some felt that would not be much comfort if someone else publishes a peer-reviewed paper before your preprint is published in a journal, particularly if the author of that paper fails to cite your preprint. Maybe peer reviewers need to be reminded that they should include preprints in their search for relevant literature?

There is some useful discussion of strategies about whether and when to post a preprint in a post on Niko Kriegeskorte’s blog.

One person at the LMCB meeting raised the question of preprint citeability. This had recently been discussed  on Twitter (it has been usefully summarised here on the Jabberwocky blog). Apparently NAR does not permit preprints to be cited.

Are there other journals that have a similar policy?

Clearly preprint citation is necessary if we are to give appropriate credit/priority to work posted as a preprint. But I can understand why some may be wary of allowing citation of work that’s not been peer-reviewed. I’m not altogether persuaded by the argument that because physicists do it it must be OK for biologists to do it. There are differences between disciplines and their cultures of knowledge-sharing.

I would like to know how much difference there is between a preprint version and a finally-published version of an article. (Indeed, I’d also like to know how much difference there is between a ‘green’ version of an article and the version of record, but that’s another story).

In the comments on the Jabberwocky blogpost mentioned above, Martin Jung suggests that the key issue is “how to support and improve scientific peer-review to better deal with grey literature and non-published sources” (such as preprints). We need to think harder about this.

Institutional servers

I’ve also been thinking about institutional policies around preprints. Judging by the ASAPbio page on University policies these are not yet widespread.

In talking to people about appropriate policies someone raised the idea of establishing an institutional preprint server. I can imagine a world where that would be a good idea, but I don’t think we are living in that world. I fear that an institutional server would be an irrelevant backwater, ignored by most in favour of the big disciplinary preprint servers. What do you think – is there much benefit to an institutional preprint server?

Two other useful things I read about preprints this week are a piece purporting to show increased citation rates for articles which started life as preprints, and a ten-point gospel highlighting the benefits and properties of preprints.

To recap my six questions:

  1. Is ‘preprint’ a verb yet?
  2. What criteria (if any) do you use for deciding whether to post a preprint?
  3. Should peer reviewers be told that they should include preprints in their search for relevant literature?
  4. Do you know of any journals which do not permit preprints to be cited as references?
  5. How much difference is there between a preprint version and a finally-published version of an article?
  6. Is there any benefit to an institutional preprint server?

Please give your answers in comments below, or on Twitter.

Posted in Open Access, Peer review, Preprints | 6 Comments

R2R – the Researcher to Reader conference

The R2R conference took place back in late February. It is an event more dominated by publishers than most – 46% of attendees come from the business, strategy and marketing side of the publishing industry. Smaller numbers come from libraries (15%), technology (12%) and consulting (10%). It attracts high-profile speakers (with interesting things to say) from publishing and elsewhere, and its workshops are interestingly different from those at other events. Ironically, researchers and readers were largely absent from the event. I attended it this year, for the second time. It’s a good event and I can recommend it if you’re interested in scholarly publishing.

Mark Allin, CEO of Wiley, gave the opening keynote talk and touched on some themes that would recur throughout the conference – the political situation and the piratical situation (if I may call it that).

Allin was passionate about the need for scholarly publishers to support science and the drivers of science: debate, scepticism and liberal values. He referred to John Gibbons, a past presidential science advisor, and called for publishers to be partisan for science (an echo of Kent Anderson’s recent blogpost).

Allin said:

It’s tempting to keep our heads down. But we can’t afford to do that. It’s a time for outreach, collaboration, partnership. A time to be bold.

Moving on to the researchers and readers, Allin put up a graph showing that ResearchGate is distributing more articles than ScienceDirect, and Sci-Hub is another substantial source. Sci-Hub and ResearchGate are easy to use, and that’s what researchers want – ease of access. Allin stated that publishers can never stop piracy; they must instead compete with pirate sites by making legitimate access easier. Initiatives like the STM project RA21 aim to do this (more on this later).

Some other soundbites I noted from his talk were:

  • use the discontent of the customer to drive innovation
  • publishing needs to experiment more, do more, fail more
  • publishers need to commit to open source and open standards
  • publishers need to embrace the values of their customers – researchers and librarians; they should become more open, collaborative, experimental, disinterested
  • article submission is a pain point; preprints have a role to play in reducing the pain

He also recommended we read a recent book by Harvard Business School professor Bharat Anand, The Content Trap. The book argues for the need to make useful connections – e.g. between content and users, or between users and other users. I first heard this articulated by Andy Powell about 10 years ago and it seemed persuasive then too.

Rick Anderson, from University of Utah Library (and a regular contributor to the Scholarly Kitchen blog) shone a light on a conundrum, looking at open access, library subscriptions and budgets.

He noted that many libraries are very short of funds and badly need to find cost savings. While it’s true that big deals deliver better value, they do not lower the cost. And big deals also make cancellation more difficult, reducing libraries’ flexibility.

New gold OA journals do provide a way to make content available without subscription, but they do not substitute for the existing journals, so libraries still must subscribe to those. Only green OA can help us to save subscription costs, but incomplete and delayed OA will not allow us to cancel. Complete and immediate OA would, but if that does come about it will affect the vitality of subscription journals. A different kind of serials crisis would ensue.

I guess the conundrum is a result of two different motivations for open access: a) to create a corpus of research literature that is open and available for reuse b) to save on the costs of scholarly communications. For a) we need gold open access, but for b) it seems that green is more effective.

I recall a senior publishing figure (I think he was from Blackwells) many years ago saying that he was happy to be quite permissive regarding allowing authors to self-archive their manuscripts as this was still a marginal activity. If a high proportion of manuscripts were self-archived then, he suggested, publishers would change their policy and stop allowing it.

Anderson did not mention offsetting deals – whereby publishers offer discounts to take into account the total spend on subscriptions and gold APCs. This kind of deal offers an alternative way to resolve the need for savings.

Michael Jubb and Richard Fisher sketched out where we are with books and ebooks. Academic libraries these days prefer to purchase ebooks rather than print books. Bookshops, on the other hand, are selling far more printed books than ebooks.

The world of ebooks is very messy still and discoverability is a huge issue. There is a clear reader preference for print books and publishers are selling more print than ebooks – many publishers have 90% of their sales in print books.

I am not sure whether the reader preference for printed books is just a reflection of how poor ebook technology is, or whether there is a fundamental aspect of human reading behaviour that means ebooks will always be inferior. I like to believe that one day someone will come up with a way to improve the ebook user experience such that perceived problems with ebooks will disappear.

We then had a chorus of short talks about CHORUS – a publisher-led initiative to help (mostly) US universities and institutions to manage their open access responsibilities. I found little of great interest in this session. Syun Tutiya echoed Rick Anderson in saying that the green road to OA does not work. He hoped that by 2020 gold OA would account for a much bigger proportion of papers but also suggested that 100% gold is far, far in the future – or never!

Stephen Curry, Imperial College (who is a researcher, as well as a prominent open access advocate) gave an entertaining talk, including many recommendations for interesting books that we should read. Clearly he is not just a researcher but also a reader of distinction! He asserted that the motivations to undertake research are a mixture of wanting to understand the world, and to change it; to earn a living, to be remembered. The practicalities of becoming a successful researcher mean you should publish in a high impact factor journal. This creates a conflict – the ‘how’ can interfere with the ‘why’. Stephen is known as an advocate of open access and an enemy of the over-metrication of academia – he said that the conflict he described is eased if publishing is open and evaluation is broadly based. He outlined some practical suggestions for moving towards this ideal world, including widespread adoption of preprints.

His slides are available if you want to check out his book recommendations.

Laura Montgomery (Taylor and Francis) and Graham Walton (Loughborough Univ library)

It seems that User Experience (UX) research is all the rage in libraries these days.  The publisher Taylor & Francis sponsored some work in Loughborough University library to explore UX with postgraduate students. An initial literature review by Valerie Spezi  indicated that there wasn’t much research specifically on postgraduate students.

They mapped the UX of ten research students over 18 months, looking at how the students find and manage info, and seeking to identify opportunities to enhance the postgraduate research library in Loughborough.

The first step was to hold a workshop on how to get published, then they recruited participants from the workshop. Each was asked to keep a monthly diary over 8 months. They were set thematic questions each month. Some of the responses were predictable, some were surprising.

  • One student, asked whether they preferred print or ejournals responded “I didn’t know what a print journal was so I had to Google it to answer the question.”
  • When searching, most start at Google or Google Scholar. Many went to the library catalogue. Publisher websites were right at the bottom.
  • 43% said that the physical library was not important for them at all. 39% said that the virtual library was very important.
  • Students took advice on information management tools from their colleagues, supervisors and lecturers. Graham Walton suggested that there is a need to train the supervisors.
  • Lack of access is a major frustration.

In questions, Michael Jubb pointed out that there has been quite a bit of similar work looking at researchers’ information and library use.

===============

The second day started with a panel discussion on copyright. This was informative and infuriating in equal measure.

Robert Harington (American Maths Society) stated the case for the publishers, basically in favour of the status quo. He touched on the difference between sciences and humanities, pointing out that scientific writing describes the intellectual stuff (i.e. patents are the real intellectual property), whereas in humanities the writing is the intellectual stuff. He emphasised the importance of the work publishers do in registering copyright, which is required in the USA if authors want to sue for copyright infringement. Registration is not required in the UK, so I was surprised to learn of this. (See Wikipedia for more on registering copyright.)

Danny Kingsley (Cambridge Univ) gave an entertaining presentation making the case against how copyright currently works. She used a particularly egregious example from the music industry. But I didn’t think that such an extreme example made the case very well against copyright for research texts. She emphasised that copyright does not work and that publishers don’t need it – they just need a licence to publish the material.

Mark Thorley (NERC) made a strong case for changes in how copyright is applied to research outputs. He explained that research funders want research to be as widely, openly and readily available as possible, so that others can use and build on the research. He pointed out that research outputs should reach beyond the research community.

Thorley said that the scholarly communication system must be made fit for a digital age. Barriers to access should be removed, so that any user can use research outputs for any purpose. We need to rethink how we apply copyright.

Summing up, he said that copyright was neither a benefit nor a detriment; it has not outlived its usefulness, but it needs amending to ensure fair exploitation in a digital world.

Alexander Ross is a lawyer specialising in publishing. He noted that copyright was introduced to deal with piracy of printed works, to protect the interests of creators.

Recently exceptions have been introduced to cater for research – for text and data mining and quotation. He said that it would be useful to have a fair dealing code for use in the academic sector.

He also noted that Creative Commons is a good attempt to standardise copyright for academic purposes.

In the general discussion session the speakers made further points:

  • Slicing and dicing rights is not helpful for research.
  • Copyright isn’t the barrier – it’s what the holder does with it.
  • Publishers requiring copyright transfer before they’ll publish research is the key problem.
  • Advice from publishers on why authors should assign copyright is confusing in the extreme (e.g. T&F).

Kent Anderson (CEO of RedLink and a regular contributor to the Scholarly Kitchen blog) spoke on his favourite subject, telling us about the marvellous work that publishers do. His talk was based on a blog post of his that had been inspired by Clay Shirky saying that “publishing isn’t a job, it’s now a button”. I can see how that statement was a bit provocative and an oversimplification, though there’s a grain of truth in it. But Anderson pointed out that one key thing the publisher does is to shoulder the risks of publication, and another is to provide persistence, ensuring there are no broken links. I’m not persuaded that publishers are all that good at the ‘persistence’ part.

Anderson then worked through a long list of everything else that publishers do, but I did not find many of them all that persuasive.  At one point he seemed to suggest that academic fields are defined by and only exist when there is a journal serving the field. Cart before the horse, much.

Tasha Mellins-Cohen (Highwire) spoke on access management. Some background to her talk is in a recent blogpost.

She noted that off-site access to subscribed content can be a problem, and that this is why people go to pirate sites. Publishers therefore have an incentive to make off-site access work better. Three publisher-led initiatives may help with this.

CASA: Campus activated subscriber access – more information about this will be coming in the next few months. It provides a way to record devices that have been authenticated by IP range. So when you use your device on the institutional wifi, and later use it off-campus the publisher website will still recognise you as an authenticated user.
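Details of CASA are still to come, so the following is a purely speculative sketch of the general idea as I understood it – the IP range, token scheme and function names are all invented, and this is certainly not CASA’s actual protocol:

```python
import ipaddress
import secrets
from typing import Optional

SUBSCRIBER_RANGE = ipaddress.ip_network("192.0.2.0/24")  # invented campus IP range
known_devices = set()  # tokens of devices first seen on campus

def handle_request(client_ip: str, device_token: Optional[str]) -> str:
    """Speculative sketch of IP-plus-device-token access; not CASA's real protocol."""
    if ipaddress.ip_address(client_ip) in SUBSCRIBER_RANGE:
        # On campus: authenticated by IP range, so remember this device.
        token = device_token or secrets.token_urlsafe(16)
        known_devices.add(token)
        return token  # the client keeps the token for later off-campus visits
    if device_token in known_devices:
        return device_token  # off campus, but the device is recognised
    raise PermissionError("no subscription access")
```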

SAMS Sigma – This is a roaming passport that does something similar to CASA but it can be a permanent link.

RA21 is a project of the STM publishers association. Mellins-Cohen said that while the principles of RA21 are excellent she worries that it might end up raising barriers to access rather than lowering them.


=================

Other summaries of the R2R conference have been written by Jennifer Harris and Danny Kingsley.


Posted in Copyright and IP, Journal publishing, Open Access | Comments Off on R2R – the Researcher to Reader conference

Weird things from publishers part 94

Many of the things that publishers do are perplexing, frustrating or reek of exploitation (it’s arguable that even the act of selling us subscriptions falls into the latter category). I wrote earlier this year about a perplexing and frustrating example. Here are some more.

A two-faced article

This article, in the Journal of the American Society for Mass Spectrometry, can’t decide whether it is OA or not. Heck, it can’t even decide who it’s published by. Its bibliographic record in PubMed has two different links to the full text – one to Elsevier, marked as Open Access, and the other to SpringerLink.

The Springer link takes me to an abstract of the article showing it as an article published by Springer. It looks like something you need to subscribe to or pay to access, but a further link from that page in fact allows me to view the full-text PDF. The article is marked as copyright American Society for Mass Spectrometry, but I cannot see any statement to say that it is open access, nor any Creative Commons licence. At the bottom of the PDF page it says “Published by Elsevier”.

The Elsevier link takes me to an HTML version of the whole article, on a page marked “Open Archive”. It also says it is “Under an Elsevier user license”.  I learnt that

“Articles published under an Elsevier user license are protected by copyright. Users may access, download, copy, translate, text and data mine (but may not redistribute, display or adapt) the articles for non-commercial purposes”

with various further conditions. The article is not available in PubMed Central, which seems to me evidence that it is not genuinely open access, despite the label on the PubMed link to Elsevier.

I suspect the problem is that the article is in a journal owned by a learned society but published by a commercial publisher.  This is not always a good combination. Presumably the journal changed publisher and this created further confusion. I wonder why both versions are still extant?

Paying and paying and paying

Someone asked me to get them an article in a new Cell Press journal, Cell Systems. (Cell Press is an imprint of Elsevier). We purchased a copy of the article for the requester who then told us that they really wanted to get the supplementary information (SI) as well. We took a look at the publisher website, hoping the SI might be free. But it wasn’t free – it was going to cost us $22 to download it. Or rather, it would cost $22 to download one file.  This article has 9 separate SI files and each of them (as far as I can tell) will cost $22. My investigative instinct did not extend to trying to download all 9 SI files to check the full price really would be 9 times $22 = $198.

This single article seems an excellent argument against putting SI on a publisher website.

There does seem to be an option (Read-it-Now) to  pay a single price for the article and its SI all together, but this is a short-term ‘rental’ option and is not suitable for someone who wants to save and reuse the SI data.

How much is your APC?

The EMBO Journal is a moderately well-thought-of journal, but I wouldn’t put it at the top of the prestige pile. I was therefore surprised to learn that if you want to publish an article there with immediate open access it will cost you $5,200. That puts it in the same price league as Cell and Nature Communications, both of which have an APC of $5,000. At least, they did last time I looked. And that’s the thing – I’m not in the habit of checking journal websites regularly to see what their APCs are this week. When I looked back to see what we had to pay the last time someone wanted to publish an article in EMBO Journal (in early 2016), I found that it was just $3,900 – still a bit above the average APC for a journal published by a commercial publisher (EMBO J is published by Wiley) but quite a bit less than their current price.

I looked into it further and found that there had been a large hike in all Wiley APC prices at the beginning of 2016, when the publisher changed its pricing model for open access. Up to the end of 2015 they charged an open access fee plus page charges of $95 per page. From 1 January 2016 they stopped charging the page charge and raised the APC to make up for the lost fees.
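A quick back-of-envelope check suggests the numbers are roughly consistent. In this sketch I treat the $3,900 figure as the old base fee and assume a hypothetical typical article length of about 14 pages – neither assumption comes from Wiley:

```python
OLD_APC = 3900      # what we paid for an EMBO Journal article in early 2016 (USD)
PAGE_CHARGE = 95    # Wiley's per-page charge under the old model (USD)
NEW_APC = 5200      # EMBO Journal's current APC (USD)

assumed_pages = 14  # hypothetical typical article length

old_total = OLD_APC + PAGE_CHARGE * assumed_pages
print(f"Old model, {assumed_pages}-page article: ${old_total}")          # $5230
print(f"New flat APC: ${NEW_APC}")
print(f"Break-even length: {(NEW_APC - OLD_APC) / PAGE_CHARGE:.1f} pp")  # 13.7
```

On those assumptions a flat $5,200 APC is close to what a 14-page article would have cost under the old model.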

Part of me wants to applaud this transparency – having page charges on top of APCs seems like triple-dipping and in these days of online journals page charges are an expensive anachronism. Not to say a rip-off. But loading the extra cost onto the APC is still a rip-off.

I poked around the Wayback Machine a little to check on some of the facts about past APCs. Wiley’s spreadsheet listing all their APCs seems to be updated quite frequently, so I guess there have been quite a few increases. I think it would be interesting to track changes to APCs more systematically. (But not quite interesting enough that I felt moved to do it myself.)
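If anyone else does feel moved, here is a minimal sketch of how it might be done using the Internet Archive’s CDX API, which lists archived snapshots of a URL. The spreadsheet URL below is a placeholder, not the real location of Wiley’s price list:

```python
import requests

# Hypothetical URL for Wiley's APC spreadsheet -- substitute the real location.
SPREADSHEET_URL = "onlinelibrary.wiley.com/path/to/apc-price-list.xlsx"

resp = requests.get(
    "http://web.archive.org/cdx/search/cdx",
    params={
        "url": SPREADSHEET_URL,
        "output": "json",
        "fl": "timestamp,original,digest",
        "filter": "statuscode:200",
        "collapse": "digest",  # one row per distinct version of the file
    },
)
rows = resp.json() if resp.text.strip() else []
for timestamp, original, digest in rows[1:]:  # rows[0] is the field-name header
    # Each distinct digest marks a revision of the price list worth downloading.
    print(timestamp, f"https://web.archive.org/web/{timestamp}/{original}")
```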

Freedom APCs

I found that I had a new Twitter follower – the Director of Cogent OA.  I’d not encountered this publisher before but quickly found that it is owned by Taylor and Francis (one of the big four) and it has thus far published 200 OA articles in 15 broad journals with a total of 122 sections. I’m not a fan of setting up large numbers of empty journals/sections in the hope that they will attract articles.

Cogent charges an APC of $1250 but this is marked as a ‘recommended’ price. Cogent are pioneering the idea of “Freedom APCs”.

As the first multi-disciplinary publisher to introduce Freedom APCs – a “pay what you want” model – across the entire portfolio, our authors can choose how much they contribute towards open-access publishing based on their funding and financial circumstances.

I wonder how free the author really is to decide what they want to pay? Once again, it would be interesting to do some experiments to test this out.

Interesting times

The landscape of publishing and open access grows ever more complex and confusing. To be absolutely on top of everything is really a full-time job, but few of us (certainly not me) are able to spend all our time mastering everything that is changing.

Posted in Journal publishing, Open Access | 1 Comment

Open Research London – Oct 2016 meeting

I was very relieved that the Open Research London (ORL) meeting on 19 October 2016 went well. Jon Tennant, Ross Mounce and Liz Ing-Simmons established ORL in Jan 2015 but it faded away after a couple of meetings.  I’d been thinking for some time that I should start up the group again, but was a bit wary of the work involved.  The group’s founders were happy for me to have a go at re-establishing it and I found a willing co-organiser here at the Crick in Martin Jones.

About 55 people turned up to the spectacular new Crick building to hear talks on two publishing innovations and a talk / interactive workshop on open science workflows.

Preprints – bioRxiv

John Inglis is the co-founder of bioRxiv, the preprint service for the life sciences based at Cold Spring Harbor Laboratory (CSHL).  He is also the founding Executive Director and Publisher of Cold Spring Harbor Laboratory Press in New York, a not-for-profit publisher of journals, books, and online media in molecular and cellular biology. He spoke about bioRxiv at three years old.

First a definition. John defined a preprint as “a complete but unpublished manuscript yet to be certified by peer review”. The definition is framed by what a preprint is not (peer-reviewed), but the key to preprints is the speed of dissemination of research results.

Preprints are currently distributed under two models. In the first, a manuscript is submitted to a journal which makes it available as a preprint and then puts it into a peer review process. The journal publishes those preprints that pass peer review. The business model is that of the journal – i.e. it is funded by APCs (article processing charges). An example of this model is F1000Research. In the second model, manuscripts are submitted to a service dedicated to preprints. There is no fee and no peer review as part of the service. The service is supported by institutions and foundations. An example of this model is arXiv, supported by Cornell University and others. It gets 100,000 submissions p.a.

John explained that CSHL is committed to science communication so bioRxiv was a natural extension to its activities.

Modelled on the arXiv, it is a dedicated, publisher-neutral service. The time seemed right to launch it in 2013 as there was a new enthusiasm for openness, a greater acceptance of digital resources and practices, and increased posting to the quantitative biology section on the arXiv showed that some biologists at least were ready for preprints.

In building bioRxiv the founders aimed for a simple, reliable submission system, and wanted to ensure that authors would have total control over their content (using a variety of licenses).

John Inglis is always careful to talk about bioRxiv  in terms of ‘posting’ and ‘distribution’ rather than publishing. There is no peer review component of the service.

Manuscripts submitted to bioRxiv are allocated to one of 26 subject categories, then checked for scope and format and subjected to a plagiarism check. Some basic checks are made to confirm that the submission is scientific in nature, is a report of research (rather than just a hypothesis), and whether there are human health implications. In the latter case some further screening is needed.

If all checks are passed then a few hours later the manuscript will be posted on the website. The author can post revisions at any time. bioRxiv gets information out there very quickly.
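To make that flow concrete, here is a toy sketch of the checks as John described them – purely illustrative, with invented names, and certainly not bioRxiv’s actual system:

```python
from dataclasses import dataclass

@dataclass
class Submission:
    category: str              # one of bioRxiv's 26 subject categories
    in_scope: bool             # fits bioRxiv's subject coverage
    format_ok: bool
    plagiarism_clear: bool
    is_research_report: bool   # a report of research, not just a hypothesis
    health_implications: bool  # human health implications trigger extra screening

def screen(sub: Submission) -> str:
    """Toy version of the screening flow described in the talk."""
    if not (sub.in_scope and sub.format_ok and sub.plagiarism_clear
            and sub.is_research_report):
        return "declined"
    if sub.health_implications:
        return "further screening required"
    return "posted within hours"
```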

In its first three years bioRxiv has posted 6,200 manuscripts. Disappointingly, not many of them report confirmatory or contradictory results. 25% of them have been revised at least once. There are more submissions in biology than in health.

Submissions come from 2900 institutions, across 82 countries. Submissions are increasing, currently standing at 450 per month.

Authors seem to like it. Transmission of research results is very rapid. They get useful feedback and their preprints are read. 11% of the preprints receive comments and there have been many blogposts and tweets about bioRxiv preprints – 93k tweets in the past 12 months.

Institutions and funders are starting to accept preprints as evidence of productivity. Both NIH and Wellcome are considering their policies on accepting preprints in grant applications. Rockefeller University has accepted preprints as part of a CV. EMBO accepts preprints on CVs for fellowship applications. Many journals will consider preprints for publication as papers, including Nature, Science and Cell.  PLOS Genetics has appointed editors specifically to search bioRxiv for potential manuscripts to publish in the journal.  CSHL has undertaken a good deal of advocacy with learned societies and publishers. Several publishers now offer a “submit your manuscript to bioRxiv” button as part of their submission process.

It’s time to start sending your manuscripts to bioRxiv!

Wellcome Open Research

Robert Kiley is Head of Digital Services at the Wellcome Library and is currently acting as Wellcome’s Open Research Development Lead, responsible for developing a new open research strategy for the Wellcome Trust. Over the past decade Robert has played a leading role in the implementation of the Trust’s open access policy and overseeing the development of the Europe PubMed Central repository. He is also the Trust’s point of contact for eLife. He spoke about the Trust’s latest publishing initiative: Wellcome Open Research.

Just as with bioRxiv, the purpose of WOR is to make research communication faster, and more transparent.

WOR is fast, inclusive (you can publish everything – not just standard research narratives), open, reproducible (data is published alongside the article), transparent (it uses open peer review), and easy (the costs are met directly by Wellcome). The only drawback is that it is only available for outputs from Wellcome-funded researchers.

In early submissions, they have seen a range of publication types and a range of researchers from senior to junior. Submissions have come from a range of institutions.

The open peer review process allows for one of three decisions from each reviewer: approved; approved with reservations; not approved. If a preprint gets two ‘approved’ decisions then it is indexed in PubMed and deposited in ePMC. This is the same process used by F1000Research, which provides publishing services for WOR.
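The indexing rule as described is simple enough to write down – though note this is just the two-‘approved’ condition from the paragraph above, and F1000’s full criteria may be more nuanced:

```python
from collections import Counter

def ready_to_index(decisions: list[str]) -> bool:
    """Two 'approved' reviewer decisions trigger PubMed indexing and ePMC
    deposit -- the simple rule described above, not F1000's full criteria."""
    return Counter(decisions)["approved"] >= 2

print(ready_to_index(["approved", "approved with reservations"]))  # False
print(ready_to_index(["approved", "not approved", "approved"]))    # True
```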

Wellcome hope to attract a range of outputs from a range of researchers. They hope that other funders will in due course emulate their initiative.

Open science workflows


Bianca Kramer and Jeroen Bosman are subject specialist librarians from Utrecht University who have considerable expertise in scholarly communication and research workflow tools. Together they led the global Innovations in Scholarly Communication survey. Their talk, Open Science workflows: Tools and practices to make your research more open, reproducible and efficient, promised us a glimpse of the future.


One of the slides from Bianca Kramer and Jeroen Bosman’s talk

They started by showing some diagrams of research workflows – nice well-behaved cycles with boxes and arrows. But life is not so simple. To make the workflow more realistic Jeroen added cycles within cycles, put in some dead ends and some repeats. By the end it looked more like a game of snakes and ladders.

We then learnt a little about the survey of tools that Bianca and Jeroen had made, and how they had categorised different tools as one of four types:

  • Traditional (same as print era)
  • Modern (internet era)
  • Innovative (social media, collaborative aspect)
  • Experimental (new tools, developing tools, startups)

Lest we become too focused on technology and tools, though, they emphasised that research workflows are less about tools and more about work practices and people.

Bianca and Jeroen are nothing if not practical and they are happiest when considering reality instead of abstractions.  The remainder of their presentation was in the form of a workshop, an endeavour to define a set of work practices that would constitute an open science workflow, and tools that would support it.

The scale of their ambition for the workshop, and the amount of preparation they had done, astounded me. There were three or four stages to this exercise.


The final result of Bianca Kramer and Jeroen Bosman’s workshop

Each seat had a small paper button with the name of a research tool on it.  We were first invited to review the tool on our own seat, consider its usage and application in open research, and then to discuss this with our neighbour. Around the periphery of the room a number of research practices had been arranged, grouped according to the different stages of the research cycle. We were next asked to affix our paper button to an appropriate research practice. Finally, we were each asked to move one of the research practices that we considered to be part of an open science workflow and place it in a new area at the back of the room, together with any tool we thought would help with the practice.  This back wall filled up with open research practices and tools.

As a participant it was quite challenging to grasp all of this, and a bit mind-boggling to assimilate everything that was on the back wall. I suspect that Bianca and Jeroen, who live and breathe this stuff, had a clearer picture than anyone else in the room. It certainly stimulated us to focus on the role that tools can play in open research practices, and on what an open research practice is. I liked too the way that the whole process of the workshop was open and transparent. Everyone in the room had a sense of the task we’d been set and of how the solution to it was emerging – we could see it on the walls. This really was a great model for open research.

Bianca and Jeroen’s mixture of openness and outreach is a great combination.

You can read their own account of the workshop on their blog.

And that was Open Research London, Oct 2016. We’re planning the next one already – expect it to be Jan or Feb 2017. Hope to see you there! Find out more about Open Research London on the website or follow it on Twitter.

Posted in Open Access | Comments Off on Open Research London – Oct 2016 meeting

Rapid or vapid?

Someone recently asked me what I thought about the open access journal Molecular Metabolism. I had just delivered a short talk to a group of researchers as a reminder about our open access policy and what my team could do to help them make sure their research was published open access.

Well, I didn’t think anything about Molecular Metabolism as I had never come across it before. In case you wondered, it’s been going since 2012 and has been published monthly for more than two years, so it seems to be reasonably well-established. Over that time it’s published about 350 articles, some with modest citation impact but nothing earth-shattering. It’s not exciting in any way, but it seems an entirely worthy publishing venue. It is supported by the German Center for Diabetes Research (DZD) and the Helmholtz Diabetes Centre, so it has roots in the diabetes and metabolism research community. And it’s published by Elsevier as a fully OA journal.

On the journal’s website it describes itself as

a platform reporting breakthroughs from all stages of the discovery and development of novel and improved personalized medicines for obesity, diabetes and associated diseases.

As well as being an OA journal, it has adopted a rapid publication model. Its peer review instructions require reviews to be delivered within 72 hours, and reviewers are asked only to accept, reject, or suggest minor revisions.

My enquirer outlined this rapid process and asked what I thought. It seemed a good model to me. It’s not a high-grade journal, so long rounds of manuscript revision would be a waste of everyone’s time. The option to reject papers is still there.

My enquirer had been invited to review an article submitted to the journal. His response was that the journal must be a scam. He viewed their rapid review policy as an invitation to reviewers to accept substandard papers. My sense was that he was also implying a Bohannonian subtext of “and all OA journals must be rubbish, innit?”.  I suspect he has the same view about PLOS ONE.

This little exchange gave me more information about the person asking the question than about the journal he enquired of. It seems a shame that in some quarters OA, and the idea of ‘sound science’ peer review, are still regarded with such suspicion.

Librarygeddon

The Library, the collection

When it’s done right it is a wonderful thing: a collection dedicated to meeting a specific need – carefully selected, sensibly arranged, appropriately indexed, comprehensive in its coverage and range of formats, precisely calibrated to its purpose. On its own it is a collection – worthy of celebration. But put the collection together with a community of users and a team of knowledgeable library staff and it turns into a Library – a boon to civilisation and scholarship.

To my way of thinking it is the community of users that comes first. Perhaps this is librarianship’s chicken-and-egg question – “Which comes first, the reader or the book?” Ranganathan answers, in effect, “both”. His famous five laws of library science include these two:

Every reader their book
Every book its reader

Neil Gaiman has written thoughtfully about the value of public libraries as “cultural seed corn” and he hails the way that libraries encourage knowledge discovery:

there’s nothing quite like the glorious serendipity of finding a book you didn’t know you wanted to read. Anybody online can find a book they know they want to read, but it’s so much harder to find a book you didn’t know you want to read.

Public libraries are under great threat, even though they have many vocal defenders. Other kinds of libraries are equally under threat but don’t attract the same passionate advocates.

My career has been in special libraries, specifically what are sometimes called workplace libraries, where the raison d’être for the library is the needs of a group of people in a workplace. In special libraries it is definitely the reader who comes before the collection.

Information needs are expressed by users, and captured by the librarian, who must take action to meet those needs. The user needs define the shape and contents of the collection, and its organisation. In an engineering library you may find a small section of books on ‘medicine’, probably all lumped together under one classification mark. In a medical library those books would need to be classified in far more detail (dividing them into finer shades of topic), but any books on ‘engineering’ would be lumped into a single section. Thus, indexing and classification in the library must be determined by the needs of the users. Classification is relative, not absolute. A creative librarian will also develop services on top of the collection to meet particular needs of the community of users.

“I have always imagined that Paradise will be a kind of library.” – Jorge Luis Borges

The library when NIMR was at Hampstead, ca 1930

Growth

The library’s collection policy is an expression of its approach to managing the collection. It defines what to acquire, what to keep, and what to discard. It may aim for homeostasis (a fixed size) or it may attempt to keep hold of everything it has ever acquired (continual expansion). Often libraries will make use of secondary storage space, where less-frequently required items are sent (a bit like sending your old possessions up to the loft, or into the garage). A mature library collection, with a long historical overhang in secondary storage, can be seen as a record of the organisation’s interests and development. It reflects some of the parent organisation’s history.

A library devoted to a single subject for a long period of time, e.g. a learned society library like the Linnean Society Library, becomes an unrivalled repository of knowledge about the subject.

“To build up a library is to create a life. It’s never just a random collection of books” – Carlos María Domínguez

Over the years the library’s users and their needs may change, and this will impact on the collection and services too. Of course library staff may also change, and while they will strive for continuity there will inevitably be changes in emphasis as new staff replace old staff and make their mark with their own particular quirks.

Another of Ranganathan’s laws is:

The library is a growing organism

The library cannot stand still. Space limitations may mean that it cannot grow indefinitely large, but it still needs to live – to develop as user needs and publishing trends change. The library will discard (or relegate) items that are no longer needed as well as ingest new ones. It must also exploit new developments in technology and the information industry, and adapt to changes in the broader environment.

“The only thing that you absolutely have to know, is the location of the library.” – Albert Einstein

The NIMR library at Mill Hill, 2011

Radical change

There should be a tight link between the collection, its users and the workplace, so that they change in harmony. But if an organisation undergoes a more radical change then the collection can come under threat. When organisations close, restructure, split, or suffer severe financial troubles, there will be dire consequences for the library service and collection. This is the way of the world but, especially for longer-established collections, it can be a cause for some regret. Simon Barron has pointed out that sometimes there is a conflict between those who worship books regardless of use and the drive for efficient collection management. I saw a project advertised a couple of years back about the “non-economic benefits of culture”. I wonder whether it included a strand about the cultural value of libraries (of all kinds)?

A few years back I was the membership secretary for CHILL (a grouping of independent health-related libraries). These all count as special libraries, many of them relatively small. It was salutary to count up each year how many members had fallen by the wayside in the preceding 12 months. One member was a charity called Drugscope, which had a fantastic information service on addiction, run by a dynamic information professional. But one year some of the charity’s key funding grants were not renewed, leaving a funding hole. There was no money to fund the information service. It just closed. Apparently it was not the only one. An editorial in Addiction in 2012 said: “Special libraries in the addiction field have been downsized or closed at an alarming rate during the past decade”.

The Royal College of Midwives also had an enviable collection and information services run by knowledgeable staff. Financial pressures led to closure. Happily the Library of the Royal College of Obstetricians and Gynaecologists stepped in to rescue the collection in this case, or it would have been lost. Sadly the knowledgeable staff were lost. When the Royal Pharmaceutical Society of Great Britain was split into a professional society and a regulatory body, this also occasioned a massive change for the library. Retaining a large historical collection was no longer economically feasible and a “lean and mean” approach was adopted instead. Large numbers of older items were disposed of.

Anecdotally I have heard that many Government department libraries have closed or drastically downsized in recent years, due to restructuring, merging or closure of departments.

Of course it’s important that libraries provide useful services in an efficient manner, but the inevitable consequence of rapid organisational change is the loss of many items of real historical interest.

This kind of loss is a global phenomenon. In 2014 it was reported that:

Scientists in Canada are up in arms over the recent closure of more than a dozen federal science libraries … Scientists fear that valuable archival information is being lost. Of most concern is the so-called grey literature — information such as conference proceedings and internal departmental reports.

We have a worldwide librarygeddon – loss or destruction of many specialist collections. Does it matter? Maybe, but probably not enough that we can do anything to change it.

Empty shelves in the NIMR Library gallery

They’re all empty now

NIMR Library heritage

My interest in this topic began in 2003, when I first learnt of plans for my Institute to be moved. The plans changed over time, but it was blindingly evident to me right from 2003 that if the Institute moved from Mill Hill it would spell the end for our Library collections. I have got used to the idea, and I can think of no valid argument why things should be any different, but I still regret that things have to be this way.

The NIMR library began in 1920 when the Institute moved into its Hampstead home. The aim of the library was to support scientific medical research at the Institute and to be a resource for the nation’s medical researchers. Work to assemble the collection had begun before 1920, gathering materials from around the world. Many items were received as donations, or through exchange programmes. The MRC published Medical Science: abstracts and reviews and its highly-regarded Special Report Series, so it had valuable publications to offer in return for free journals and reports from other organisations. In those colonial days, reports (grey literature) were received from research organisations across the British Empire and the rest of the world. We even used to receive The Lancet free of charge (though this changed when Elsevier bought the journal). As well as books, journals and reports, the Library has an extensive collection of pamphlets and reprints. The collection represents a unique period of British medical research, when Hampstead and later Mill Hill were at its centre.

The reprints were largely collected by individual scientists, I believe, and periodically deposited in the Library. They were all carefully catalogued and subject indexed – we have a vast rank of filing drawers filled with index cards. I think of the reprint collection as the intellectual encrustation of the Institute’s research, from the 1920s to the 1960s. Reprints have become unfashionable and few libraries are interested in taking large collections of reprints. But I think much of the value of the reprint collection is in the catalogue rather than the reprints themselves. It is a unique historical document of 20th century biomedical science.

It’s a strange thing, but as scientific records and publications become old they often have less value to science but more value to the humanities, especially history. Simply by keeping hold of our science literature collections as they aged, we acquired a history collection. The trouble was, we were not funded for historical research but for scientific research. Therein lies the problem I faced. Looking back, I can see many ways that I could have handled it better, but I did the best I could at the time.

It’s a big project

The simplest thing to do would have been to just throw everything away, and for a while I was afraid that was exactly what I would have to do. Happily, I gained approval (and resources) from on high to undertake a few projects to help ensure that those parts of the collection with most value could be dispersed responsibly.

There were about 3km of shelving filled with printed materials that needed to be disposed of. Printed journal volumes accounted for much of the shelf space.  The pace of digitisation of journals is such that it is very hard to find homes for printed journals. We did find homes in other libraries for a small quantity of journals, but most of the journals were sent for recycling.

Preliminary enquiries made it very clear that few libraries had an interest in more than a small proportion of the other items. That is not surprising – libraries can only collect what matches their strategic needs, and their collection policies (vide supra). The lists that we prepared of the most desirable groups of items – the pamphlets, reports and older books – have helped us to find homes for quite a few items. The 5,000 pamphlets in particular have gone as a single block to a major library. I tweeted pictures of quite a few of the books that have been transferred to other libraries, under the hashtag #nimrlibrarybyebye. A few hundred have found homes in libraries, mostly in London but some further afield.

A few hundred books have been retained to form a historical collection in the new institute – more on this in a later post. Some others have been retained by some of the labs for their use. And a few hundred have been transferred to members of staff as mementoes of NIMR. But several thousand items will go to the secondhand trade – I hope they will eventually make their way to collectors’ shelves.

I should emphasise that most items in the NIMR Library collection are not unique – probably not even rare by the standards of rare book collectors. And not really old either. Most of the collection was 20th century.

It is not so much the items themselves that I am mourning here, but the collection of all the items. It’s the whole collection that is greater than the sum of its parts – the connections between all the different documents, and the context of the Institute. This notion (the value of the collection versus the value of individual items) is a bit intangible, and unquantifiable; perhaps not a thing at all. That’s why when it is all gone (quite soon now), probably no-one will notice, or remember that it was there, apart from me. I will remember.

You can look right through the empty shelves in our store

Empty stacks in the library store

There’s about 3km of empty shelves now
