Digital skills – how do we …

The concept of digital skills is a bit slippery.  The term has changed its meaning as the digital universe has expanded. Jisc is currently doing some work in this area, led by Caroline Ingram.

I attended an interesting workshop recently to look into the challenges of developing researchers’ digital skills. Caroline led the workshop together with Rob Allen and David Hartland, both of whom have long experience in training in HE.

We considered what we meant by research support roles, what digital skills researchers require, which organisations provide training, and what Jisc's role should be (if any).

Roles

We agreed that it was not helpful to compile a long list of research support roles. These things change quickly and also vary across institutions. For instance, data scientist, data engineer, data librarian can all have distinct meanings, but they also overlap and may or may not each be necessary in a particular environment.

We discussed what we meant by ‘research support’.  Some workshop attendees considered that people in libraries and IT services who work with researchers  are also working in research and scholarship. Talking about ‘support’ has overtones of ‘Upstairs, Downstairs’ or Downton Abbey.  The term ‘Specialist Research Professional’ was preferred over ‘research support’.

Another helpful observation was that 95% of the time researchers have the skills to do what they need. They draw on their ‘foundational knowledge’ or will be able to figure out how to do something. The other 5% of the time they do need help, with hard problems.

Sometimes a Specialist Research Professional may become a member of a research team. This could be a part-time role as part of the research team or a full-time intra-team support role.

Skills

There are different ideas as to what 'digital skills' means. I recall that 20 years ago we thought it meant knowing how to use Microsoft Office and being able to understand a Google search. Some at the workshop see digital skills as something close to information literacy, others see it as all about open access, others again see it mostly in terms of research data management. Most of the discussion focused on research data handling.

Skills training is not the whole story. Sometimes what’s needed is a more general awareness raising, for instance about copyright.

We agreed that the Jisc list of skills is a useful starting point.

Here’s a list of skills that we thought might be needed by researchers: personal information management, discovery (searching for data), appraisal of data, coding and data analysis, information governance, non-technical issues connected with digital skills (ethical, legal and social implications), reproducibility, productivity/workflow tools, managing collaborations. That last one is interesting – we don’t often talk about the skill required to manage digital participation and digital communities, though the AAAS are doing some work in this area.

Some more specific topics that I’d like to find training for include: data visualisation, mash ups (or are these out of fashion now?), text and data mining.

A more fine-grained approach needs to acknowledge that different individuals will have different depths of expertise in different areas. Also, depth of expertise depends on the context. For example, even someone knowledgeable about general research consent issues might look to the clinical trials team for more in-depth consent advice.

Who is doing what?

Current training providers include: Vitae, Jisc, Digital Curation Centre, ELIXIR, EBI, Emedlab, CODATA–DA.

Software Carpentry, Data Carpentry and various camps/unconferences were also enthusiastically endorsed by many at the workshop. MOOCs were mentioned, but it was noted that they were just about learning, not about creating a community.  That is where the carpentry events excel.

Other potential providers might be the operators of specialist technical platforms – typically these churn out massive amounts of data, and platform staff skilled in processing that data would be well-placed to provide training. Software providers such as MathWorks (makers of Matlab) also provide training, and perhaps scientific instrument makers might too.

Of course many individual institutions also provide training for their own researchers.

Jisc’s role

The following were suggested:

  • Help accredit courses
  • Collate a list of skills that can be taught in 3-hour packages
  • Match that list to existing providers
  • Identify good local sources of training and help them go national
  • Adopt the carpentry approach of community-developed open training materials
  • Establish a 'locum' system for those who need to find short-term help with a project
  • Help facilitate unconferences around digital skills
  • Extend the model of specialist data centres to specialist training centres

Conclusion

The results of the workshop will feed into a project report in due course.

My conclusion was that it is very helpful to bring a disparate group of people together to discuss digital skills, and I think we learnt quite a bit from each other. Equally, it is very hard to say something about digital skills that applies across all areas of research without starting to sound bland. And, finally, it would be helpful if there were some agreed shared terminology about digital skills (but I suspect this is a forlorn hope).


Six questions about preprints

2017 is shaping up to be the year that preprints in biomedical sciences go mainstream.

At the beginning of the year MRC and Wellcome Trust both moved to accept preprints in grant applications and scientific reviews. Another major UK biomedical funder is likely to follow suit. In the USA the NIH has recently done the same. The ASAPbio initiative has issued its call for proposals for a central preprint service and it will be interesting to see what comes out of that. Meanwhile the Chan Zuckerberg Initiative has made a major commitment to support bioRxiv financially. Several new discipline-specific preprint servers have been launched. All this and there’s still nearly 8 months of 2017 to go!

The more I think about and learn about preprints, the more questions seem to crop up. Maybe you can help me find some answers.

Last year's SpotOn conference on the future of peer review featured a session on preprints and peer review which I chaired and introduced. The discussion was interesting although it felt inconclusive. The recently-issued report from the conference has a piece by me: "The history of peer review, and looking forward to preprints in biomedicine" – now available as a post on the BMC blog. It's nothing profound but I speculate slightly:

We may be moving to a world where some research is just published ‘as is’, and subject [only] to post-publication peer review, while other research goes through a more rigorous form of review including reproducibility checks. This [change] will be a gradual process, over a period of years. New tools … using artificial intelligence, will help by providing new ways to discover and filter the literature. A new set of research behaviours will emerge around reading, interpreting and responding to preprint literature. The corridors of science will resound with caveat lector and nullius in verba.

Whether / when to post?

Last week I gave a short talk about preprints to the postdocs at the MRC LMCB at UCL (here are my slides). They were a lively group, already knowledgeable about preprints and open access but keen to learn more. I focused on a quick history of preprints and general stuff about them. The other speaker, Fiona Watt, provided real insight into the experience of a researcher who posts preprints, also drawing on her long experience as a senior editor on various top journals. She suggested that researchers should carefully consider which papers they preprint (is it a verb yet? I think it needs to be). Papers that have a heavy data component are appropriate for preprinting, but those with a high conceptual element might not be. I wonder if that is a generally-accepted notion, and whether there are other criteria that people use for deciding whether to post a preprint?

Scooping and citation

Related to this, the question of being scooped came up. Audience members shared their different experiences. Some believed it to be really uncommon, others reported having been scooped. In theory posting a preprint can give you priority over anything published subsequently. Some felt that would not be much comfort if someone else publishes a peer-reviewed paper before your preprint is published in a journal, particularly if the author of that paper fails to cite your preprint. Maybe peer reviewers need to be reminded that they should include preprints in their search for relevant literature?

There is some useful discussion of strategies about whether and when to post a preprint in a post on Niko Kriegeskorte’s blog.

One person at the LMCB meeting raised the question of preprint citeability. This had recently been discussed  on Twitter (it has been usefully summarised here on the Jabberwocky blog). Apparently NAR does not permit preprints to be cited.

Are there other journals that have a similar policy?

Clearly preprint citation is necessary if we are to give appropriate credit/priority to work posted as a preprint. But I can understand why some may be wary of allowing citation of work that's not been peer-reviewed. I'm not altogether persuaded by the argument that because physicists do it, it must be OK for biologists to do it. There are differences between disciplines and their cultures of knowledge-sharing.

I would like to know how much difference there is between a preprint version and a finally-published version of an article. (Indeed, I’d also like to know how much difference there is between a ‘green’ version of an article and the version of record, but that’s another story).

In the comments on the Jabberwocky blogpost mentioned above, Martin Jung suggests that the key issue is “how to support and improve scientific peer-review to better deal with grey literature and non-published sources” (such as preprints). We need to think harder about this.

Institutional servers

I’ve also been thinking about institutional policies around preprints. Judging by the ASAPbio page on University policies these are not yet widespread.

In talking to people about appropriate policies someone raised the idea of establishing an institutional preprint server. I can imagine a world where that would be a good idea, but I don’t think we are living in that world. I fear that an institutional server would be an irrelevant backwater, ignored by most in favour of the big disciplinary preprint servers. What do you think – is there much benefit to an institutional preprint server?

Two other useful things I read about preprints this week are a piece purporting to show increased citation rates for articles which started life as preprints, and a ten-point gospel highlighting the benefits and properties of preprints.

To recap my six questions:

  1. Is ‘preprint’ a verb yet?
  2. What criteria (if any) do you use for deciding whether to post a preprint?
  3. Should peer reviewers be told that they should include preprints in their search for relevant literature?
  4. Do you know of any journals which do not permit preprints to be cited as references?
  5. How much difference is there between a preprint version and a finally-published version of an article?
  6. Is there any benefit to an institutional preprint server?

Please give your answers in comments below, or on Twitter.


R2R – the Researcher to Reader conference

The R2R conference took place back in late February. It is an event dominated more than most by publishers (46% of attendees) – those on the business, strategy, and marketing side of the publishing industry. Smaller numbers come from libraries (15%), technology (12%) and consulting (10%). It attracts high-profile speakers (with interesting things to say) from publishing and elsewhere, and its workshops are interestingly different from those at other events. Ironically, researchers and readers were largely absent from the event. I attended it this year, for the second time. It's a good event and I can recommend it if you're interested in scholarly publishing.

Mark Allin, CEO of Wiley, gave the opening keynote talk and touched on some themes that would recur throughout the conference – the political situation and the piratical situation (if I may call it that).

Allin was passionate about the need for scholarly publishers to support science and the drivers of science: debate, scepticism and liberal values. He referred to John Gibbons, a past presidential science advisor, and called for publishers to be partisan for science (this was an echo of Kent Anderson's recent blogpost).

Allin said:

It’s tempting to keep our heads down. But we can’t afford to do that.   It’s a time for outreach, collaboration, partnership. A time to be bold.

Moving on to the researchers and readers, Allin put up a graph showing that ResearchGate is distributing more articles than ScienceDirect, and Sci-Hub is another substantial source. Sci-Hub and ResearchGate are easy to use, and that's what researchers want – ease of access. Allin stated that publishers can never stop piracy but they must instead compete with pirate sites by making legitimate access easier. Initiatives like the STM project RA21 aim to do this (more on this later).

Some other soundbites I noted from his talk were:

  • use the discontent of the customer to drive innovation
  • publishing needs to experiment more, do more, fail more
  • publishers need to commit to open source and open standards
  • publishers need to embrace the values of their customers – researchers and librarians; they should become more open, collaborative, experimental, disinterested
  • article submission is a pain point; preprints have a role to play in reducing the pain

He also recommended we read a recent book by Harvard Business School professor Bharat Anand – The content trap. The book reveals the need to make useful connections – e.g. between content and users, or users and other users. I first heard this articulated by Andy Powell about 10 years ago and it seemed persuasive then too.

Rick Anderson, from University of Utah Library (and a regular contributor to the Scholarly Kitchen blog) shone a light on a conundrum, looking at open access, library subscriptions and budgets.

He noted that many libraries are very short of funds and badly need to find cost savings. While it’s true that big deals deliver better value, they do not lower the cost. And big deals also make cancellation more difficult, reducing libraries’ flexibility.

New gold OA journals do provide a way to make content available without subscription, but they do not substitute for the existing journals, so libraries still must subscribe to those. Only green OA can help us to save subscription costs, but incomplete and delayed OA will not allow us to cancel. Complete and immediate OA would, but if that does come about it will affect the vitality of subscription journals. A different kind of serials crisis would ensue.

I guess the conundrum is a result of two different motivations for open access: a) to create a corpus of research literature that is open and available for reuse b) to save on the costs of scholarly communications. For a) we need gold open access, but for b) it seems that green is more effective.

I recall a senior publishing figure (I think he was from Blackwells) many years ago saying that he was happy to be quite permissive regarding allowing authors to self-archive their manuscripts as this was still a marginal activity. If a high proportion of manuscripts were self-archived then, he suggested, publishers would change their policy and stop allowing it.

Anderson did not mention offsetting deals – whereby publishers offer discounts to take into account the total spend on subscriptions and gold APCs. This kind of deal offers an alternative way to resolve the need for savings.

Michael Jubb and Richard Fisher sketched out where we are with books and ebooks. Academic libraries these days prefer to purchase ebooks rather than print books. Bookshops on the other hand are selling far more printed books than ebooks.

The world of ebooks is very messy still and discoverability is a huge issue. There is a clear reader preference for print books and publishers are selling more print than ebooks – many publishers have 90% of their sales in print books.

I am not sure whether the reader preference for printed books is just a reflection of how poor ebook technology is, or whether there is a fundamental aspect of human reading behaviour that means ebooks will always be inferior. I like to believe that one day someone will come up with a way to improve the ebook user experience such that perceived problems with ebooks will disappear.

We then had a chorus of short talks about CHORUS – this is a publisher-led initiative to help (mostly) US universities and institutions to manage their open access responsibilities. I found little of great interest in this session. Syun Tutiya echoed Rick Anderson in saying that the green road to OA does not work. He hoped that by 2020 gold OA would account for a much bigger proportion of papers, but also suggested that 100% gold is far, far in the future, or never!

Stephen Curry, Imperial College (who is a researcher, as well as a prominent open access advocate) gave an entertaining talk, including many recommendations for interesting books that we should read. Clearly he is not just a researcher but also a reader of distinction! He asserted that the motivations to undertake research are a mixture of wanting to understand the world and to change it, to earn a living, and to be remembered. The practicalities of becoming a successful researcher mean you should publish in a high impact factor journal. This creates a conflict – the 'how' can interfere with the 'why'. Stephen is known as an advocate of open access and an enemy of the over-metrication of academia – he said that the conflict he described is eased if publishing is open and evaluation is broadly based. He outlined some practical suggestions for moving towards this ideal world, including widespread adoption of preprints.

His slides are available if you want to check out his book recommendations.

Laura Montgomery (Taylor and Francis) and Graham Walton (Loughborough Univ library)

It seems that User Experience (UX) research is all the rage in libraries these days.  The publisher Taylor & Francis sponsored some work in Loughborough University library to explore UX with postgraduate students. An initial literature review by Valerie Spezi  indicated that there wasn’t much research specifically on postgraduate students.

They mapped the UX of ten research students over 18 months, looking at how the students find and manage information, and seeking to identify opportunities to enhance the postgraduate research library in Loughborough.

The first step was to hold a workshop on how to get published; participants were then recruited from the workshop. Each was asked to keep a monthly diary over 8 months. They were set thematic questions each month. Some of the responses were predictable, some were surprising.

  • One student, asked whether they preferred print or ejournals responded “I didn’t know what a print journal was so I had to Google it to answer the question.”
  • When searching, most start at Google or Google Scholar. Many went to the library catalogue. Publisher websites were right at the bottom.
  • 43% said that the physical library was not important for them at all. 39% said that the virtual library was very important.
  • Students took advice on information management tools from their colleagues, supervisors and lecturers. Graham Walton suggested that there is a need to train the supervisors.
  • Lack of access is a major frustration.

In questions, Michael Jubb pointed out that there has been quite a bit of similar work looking at researchers’ information and library use.

===============

The second day started with a panel discussion on copyright. This was informative and infuriating in equal measure.

Robert Harington (American Maths Society) stated the case for the publishers, basically in favour of the status quo. He touched on the difference between sciences and humanities, pointing out that in the sciences the writing describes the intellectual stuff (i.e. patents are the real intellectual property), whereas in the humanities the writing is the intellectual stuff. He emphasised the importance of the work publishers do in registering copyright, which is required in the USA if authors want to sue for copyright infringement. Registration is not required in the UK, so I was surprised to learn of this. (See Wikipedia for more on registering copyright).

Danny Kingsley (Cambridge Univ) gave an entertaining presentation making the case against how copyright currently works. She used a particularly egregious example from the music industry. But I didn’t think that such an extreme example made the case very well against copyright for research texts. She emphasised that copyright does not work and that publishers don’t need it – they just need a licence to publish the material.

Mark Thorley (NERC) made a strong case for changes in how copyright is applied to research outputs. He explained that research funders want research to be as widely and openly and readily available as possible, so that others can use and build on the research. He pointed out that research outputs should reach beyond the research community.

Thorley said that the scholarly communication system must be made fit for a digital age. Barriers to access should be removed, so that any user can use research outputs for any purpose. We need to rethink how we apply copyright.

Summing up, he said that copyright was neither a benefit nor a detriment; it has not outlived its usefulness, but it needs amending to ensure fair exploitation in a digital world.

Alexander Ross is a lawyer specialising in publishing. He noted that copyright was introduced to deal with piracy of printed works, to protect the interests of creators.

Recently exceptions have been introduced to cater for research – for text and data mining and quotation. He said that it would be useful to have a fair dealing code for use in the academic sector.

He also noted that Creative Commons is a good attempt to standardise copyright for academic purposes.

In the general discussion session the speakers made further points:

  • Slicing and dicing rights is not helpful for research.
  • Copyright isn't the barrier – it's what the holder does with it.
  • Publishers requiring copyright transfer before they’ll publish research is the key problem.
  • Advice from publishers on why authors should assign copyright is confusing in the extreme (e.g. T&F).

Kent Anderson (CEO of Redlink and a regular contributor to the Scholarly Kitchen blog) spoke on his favourite subject, telling us about the marvellous work that publishers do. His talk was based on a blog post of his that had been inspired by Clay Shirky saying that "publishing isn't a job, it's now a button". I can see how that statement was a bit provocative and an oversimplification, though there's a grain of truth in it. But Anderson pointed out that one key thing the publisher does is to shoulder the risks of publication, and another is to provide persistence, ensuring there are no broken links. I'm not persuaded that publishers are all that good at the 'persistence' part.

Anderson then worked through a long list of everything else that publishers do, but I did not find many of them all that persuasive.  At one point he seemed to suggest that academic fields are defined by and only exist when there is a journal serving the field. Cart before the horse, much.

Tasha Mellins-Cohen (Highwire) spoke on access management. Some background to her talk is in a recent blogpost.

She noted that off-site access to subscribed content can be a problem, and that's why people go to pirate sites. Publishers therefore have an incentive to make off-site access work better. Three publisher-led initiatives may help with this.

CASA: Campus activated subscriber access – more information about this will be coming in the next few months. It provides a way to record devices that have been authenticated by IP range. So when you use your device on the institutional wifi, and later use it off-campus the publisher website will still recognise you as an authenticated user.
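
The mechanics weren't spelled out, but the general pattern described – authenticate a device once by campus IP, then recognise the same device again off-campus – is easy to caricature. Here is a minimal sketch in Python with all names and address ranges invented for illustration; it shows the idea, not CASA's actual protocol:

    import ipaddress
    import secrets

    # Hypothetical IP range registered for an institution's subscription.
    CAMPUS_NET = ipaddress.ip_network("192.0.2.0/24")  # illustrative range only

    device_tokens = {}  # token -> institution: the publisher's record of known devices

    def visit(ip: str, token: str | None) -> tuple[bool, str | None]:
        """Return (authenticated, token). An on-campus visit mints a device
        token; later off-campus visits are recognised by presenting it."""
        if ipaddress.ip_address(ip) in CAMPUS_NET:
            token = token or secrets.token_urlsafe(16)
            device_tokens[token] = "Example University"
            return True, token
        return token in device_tokens, token

    # On the campus wifi: authenticated by IP, token stored on the device.
    ok, tok = visit("192.0.2.10", None)   # ok is True
    # Later, at home: same device, recognised by its stored token.
    ok, _ = visit("203.0.113.7", tok)     # ok is still True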

SAMS Sigma – This is a roaming passport that does something similar to CASA but it can be a permanent link.

RA21 is a project of the STM publishers association. Mellins-Cohen said that while the principles of RA21 are excellent she worries that it might end up raising barriers to access rather than lowering them.


=================

Other summaries of the R2R conference have been written by Jennifer Harris and Danny Kingsley.



Weird things from publishers part 94

Many of the things that publishers do are perplexing, frustrating or reek of exploitation (it's arguable that even the act of selling us subscriptions falls into the latter category). I wrote earlier this year about a perplexing and frustrating example. Here are some more.

A two-faced article

This article, in the Journal of the American Society for Mass Spectrometry, can't decide whether it is OA or not. Heck, it can't even decide who it's published by. Its bibliographic record in PubMed has two different links to the full text – one to Elsevier, marked as Open Access, and the other to SpringerLink.

The Springer link takes me to an abstract of the article showing it as an article published by Springer. It looks like something you need to subscribe to or pay to access, but a further link from that page in fact allows me to view the full-text PDF. The article is marked as copyright American Society for Mass Spectrometry but I cannot see any statement to say that it is open access, nor any Creative Commons licence. At the bottom of the PDF page it says "Published by Elsevier".

The Elsevier link takes me to an HTML version of the whole article, on a page marked “Open Archive”. It also says it is “Under an Elsevier user license”.  I learnt that

“Articles published under an Elsevier user license are protected by copyright. Users may access, download, copy, translate, text and data mine (but may not redistribute, display or adapt) the articles for non-commercial purposes”

with various further conditions. The article is not available in PubMed Central, which seems to me evidence that it is not genuinely open access, despite the label on the PubMed link to Elsevier.

I suspect the problem is that the article is in a journal owned by a learned society but published by a commercial publisher.  This is not always a good combination. Presumably the journal changed publisher and this created further confusion. I wonder why both versions are still extant?

Paying and paying and paying

Someone asked me to get them an article in a new Cell Press journal, Cell Systems. (Cell Press is an imprint of Elsevier). We purchased a copy of the article for the requester who then told us that they really wanted to get the supplementary information (SI) as well. We took a look at the publisher website, hoping the SI might be free. But it wasn’t free – it was going to cost us $22 to download it. Or rather, it would cost $22 to download one file.  This article has 9 separate SI files and each of them (as far as I can tell) will cost $22. My investigative instinct did not extend to trying to download all 9 SI files to check the full price really would be 9 times $22 = $198.

This single article seems an excellent argument against putting SI on a publisher website.

There does seem to be an option (Read-it-Now) to  pay a single price for the article and its SI all together, but this is a short-term ‘rental’ option and is not suitable for someone who wants to save and reuse the SI data.

How much is your APC?

The EMBO Journal is a moderately well-thought-of journal, but I wouldn't put it at the top of the prestige pile. I was therefore surprised to learn that if you want to publish an article there with immediate open access it will cost you $5,200. That puts it in the same price league as Cell and Nature Communications, both of which have an APC of $5,000. At least, they did last time I looked. And that's the thing – I'm not in the habit of checking journal websites regularly to see what their APCs are this week. When I looked back to see what we had to pay the last time someone wanted to publish an article in EMBO Journal (in early 2016), I found that it was just $3,900 – still a bit above the average APC for a journal published by a commercial publisher (EMBO J is published by Wiley) but quite a bit less than their current price.

I looked into it further and found that there had been a large hike in all Wiley APC prices at the beginning of 2016, when the publisher changed its pricing model for open access. Up to the end of 2015 they charged an open access fee plus page charges of $95 per page. From 1 Jan 2016 they stopped charging the page charge and raised the APC to make up for the lost fees.
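
The arithmetic of that switch is easy to sketch. Purely for illustration, treating the $3,900 figure quoted above as the fee under the old model and $5,200 as the new flat APC (an assumption on my part; the page counts are hypothetical too), the break-even length works out at about 14 pages, so shorter articles pay more under a flat APC:

    OLD_FEE = 3900     # assumed fee under the old model, per the figure above ($)
    PAGE_CHARGE = 95   # Wiley's former per-page charge ($)
    NEW_APC = 5200     # the current flat APC ($)

    # Break-even length: where fee + page charges equals the new flat APC.
    break_even = (NEW_APC - OLD_FEE) / PAGE_CHARGE
    print(break_even)  # ~13.7 pages

    for pages in (8, 14, 20):  # hypothetical article lengths
        old_cost = OLD_FEE + PAGE_CHARGE * pages
        print(f"{pages} pages: old model ${old_cost}, new model ${NEW_APC}")
    # 8 pages:  old model $4660, new model $5200  (flat APC dearer)
    # 20 pages: old model $5800, new model $5200  (flat APC cheaper)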

Part of me wants to applaud this transparency – having page charges on top of APCs seems like triple-dipping and in these days of online journals page charges are an expensive anachronism. Not to say a rip-off. But loading the extra cost onto the APC is still a rip-off.

I poked around the Wayback Machine a little to check on some of the facts about past APCs. Wiley's spreadsheet listing all their APCs seems to be updated quite frequently, so I guess there are quite a few increases. I think it would be interesting to do this more systematically to track changes to APCs. (But not quite interesting enough that I felt moved to do it myself).
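
For anyone who does feel moved to try, the Internet Archive's CDX API makes the first step straightforward: list every archived snapshot of the publisher's price list and then fetch each one for comparison. A minimal sketch (the spreadsheet URL is a placeholder, not Wiley's real address):

    import requests

    # Placeholder: substitute the real URL of the publisher's APC price list.
    TARGET = "https://example.com/wiley-apc-price-list.xlsx"

    # The Wayback Machine's CDX API returns one row per archived capture.
    resp = requests.get(
        "https://web.archive.org/cdx/search/cdx",
        params={"url": TARGET, "output": "json", "fl": "timestamp,original"},
        timeout=30,
    )
    rows = resp.json()[1:]  # the first row is a field-name header

    for timestamp, original in rows:
        # Each capture lives at a predictable archive URL.
        print(f"https://web.archive.org/web/{timestamp}/{original}")
        # ...fetch each snapshot and diff the APC column to track changes over time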

Freedom APCs

I found that I had a new Twitter follower – the Director of Cogent OA.  I’d not encountered this publisher before but quickly found that it is owned by Taylor and Francis (one of the big four) and it has thus far published 200 OA articles in 15 broad journals with a total of 122 sections. I’m not a fan of setting up large numbers of empty journals/sections in the hope that they will attract articles.

Cogent charges an APC of $1250 but this is marked as a ‘recommended’ price. Cogent are pioneering the idea of “Freedom APCs”.

As the first multi-disciplinary publisher to introduce Freedom APCs – a "pay what you want" model – across the entire portfolio, our authors can choose how much they contribute towards open-access publishing based on their funding and financial circumstances.

I wonder how free the author really is to decide what they want to pay? Once again, it would be interesting to do some experiments to test this out.

Interesting times

The landscape of publishing and open access grows ever more complex and confusing. To be absolutely on top of everything is really a full-time job, but few of us (certainly not me) are able to spend all our time on mastering everything that is changing.


Open Research London – Oct 2016 meeting

I was very relieved that the Open Research London (ORL) meeting on 19 October 2016 went well. Jon Tennant, Ross Mounce and Liz Ing-Simmons established ORL in Jan 2015 but it faded away after a couple of meetings.  I’d been thinking for some time that I should start up the group again, but was a bit wary of the work involved.  The group’s founders were happy for me to have a go at re-establishing it and I found a willing co-organiser here at the Crick in Martin Jones.

About 55 people turned up to the spectacular new Crick building to hear talks on two publishing innovations and a talk / interactive workshop on open science workflows.

Preprints – bioRxiv

John Inglis is the co-founder of bioRxiv, the preprint service for the life sciences based at Cold Spring Harbor Laboratory (CSHL).  He is also the founding Executive Director and Publisher of Cold Spring Harbor Laboratory Press in New York, a not-for-profit publisher of journals, books, and online media in molecular and cellular biology. He spoke about bioRxiv at three years old.

First a definition. John defined a preprint as "a complete but unpublished manuscript yet to be certified by peer review". The definition is all about what a preprint is not (i.e. not peer-reviewed), but the key to preprints is the speed of dissemination of research results.

Preprints are currently distributed under two models. In the first, a manuscript is submitted to a journal which makes it available as a preprint and then puts it into a peer review process. The journal publishes those preprints that pass peer review. The business model is that of the journal – i.e. it is funded by APCs (article processing charges). An example of this model is F1000Research. In the second model manuscripts are submitted to a service dedicated to preprints. There is no fee and no peer review as part of the service. The service is supported by institutions and foundations. An example of this model is arXiv, supported by Cornell University and others. It gets 100,000 submissions p.a.

John explained that CSHL is committed to science communication so bioRxiv was a natural extension to its activities.

Modelled on the arXiv, it is a dedicated, publisher-neutral service. The time seemed right to launch it in 2013 as there was a new enthusiasm for openness, a greater acceptance of digital resources and practices, and increased posting to the quantitative biology section on the arXiv showed that some biologists at least were ready for preprints.

In building bioRxiv the founders aimed for a simple, reliable submission system, and wanted to ensure that authors would have total control over their content (using a variety of licenses).

John Inglis is always careful to talk about bioRxiv  in terms of ‘posting’ and ‘distribution’ rather than publishing. There is no peer review component of the service.

Manuscripts submitted to bioRxiv are allocated to one of 26 subject categories, then checked for scope and format, and subjected to a plagiarism check. Some basic checks are made to confirm that the submission is scientific in nature and is a report of research (rather than just a hypothesis), and to establish whether there are human health implications. In the latter case some further screening is needed.

If all checks are passed then a few hours later it will be posted on the website. The author can post revisions at any time. bioRxiv gets information out there very quickly.

In its first three years bioRxiv has posted 6,200 manuscripts. Disappointingly, not many of them are confirmatory or contradictory results. 25% of them have been revised at least once. There are more submissions in biology than in health.

Submissions come from 2900 institutions, across 82 countries. Submissions are increasing, currently standing at 450 per month.

Authors seem to like it. Transmission of research results is very rapid. They get useful feedback and their preprints are read. 11% of the preprints receive comments and there have been many blogposts and tweets about bioRxiv preprints – 93k tweets in the past 12 months.

Institutions and funders are starting to accept preprints as evidence of productivity. Both NIH and Wellcome are considering their policies on accepting preprints in grant applications. Rockefeller University has accepted preprints as part of a CV. EMBO accepts preprints on CVs for fellowship applications. Many journals will consider preprints for publication as papers, including Nature, Science and Cell.  PLOS Genetics has appointed editors specifically to search bioRxiv for potential manuscripts to publish in the journal.  CSHL has undertaken a good deal of advocacy with learned societies and publishers. Several publishers now offer a “submit your manuscript to bioRxiv” button as part of their submission process.

It’s time to start sending your manuscripts to bioRxiv!

Wellcome Open Research

Robert Kiley is Head of Digital Services at the Wellcome Library and is currently acting as Wellcome’s Open Research Development Lead, responsible for developing a new open research strategy for the Wellcome Trust. Over the past decade Robert has played a leading role in the implementation of the Trust’s open access policy and overseeing the development of the Europe PubMed Central repository. He is also the Trust’s point of contact for eLife. He spoke about the Trust’s latest publishing initiative: Wellcome Open Research.

Just as with bioRxiv, the purpose of WOR is to make research communication faster, and more transparent.

WOR is fast, inclusive (you can publish everything – not just standard research narratives), open, reproducible (data is published alongside the article), transparent (it uses open peer review), and easy (the costs are met directly by Wellcome). The only drawback is that it is only available for outputs from Wellcome-funded researchers.

In early submissions, they have seen a range of publication types and a range of researchers from senior to junior. Submissions have come from a range of institutions.

The open peer review process allows for one of three decisions from each reviewer: approved; approved with reservations; not approved. If a preprint gets two 'approved' decisions then it is indexed in PubMed and deposited in ePMC. This is the same process used by F1000Research, which provides publishing services for WOR.
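
As stated, the indexing rule is simple enough to put into code. A toy sketch of the decision logic as described in the talk (the real F1000Research criteria may well have more nuance than this):

    from collections import Counter

    DECISIONS = {"approved", "approved with reservations", "not approved"}

    def ready_for_indexing(reviews: list[str]) -> bool:
        """Per the talk: two 'approved' decisions trigger indexing in
        PubMed and deposit in ePMC."""
        assert all(r in DECISIONS for r in reviews)
        return Counter(reviews)["approved"] >= 2

    print(ready_for_indexing(["approved", "approved"]))                    # True
    print(ready_for_indexing(["approved", "approved with reservations"]))  # False, so far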

Wellcome hope to attract a range of outputs from a range of researchers. They hope that other funders will in due course emulate their initiative.

Open science workflows


Bianca Kramer and Jeroen Bosman are subject specialist librarians from Utrecht University who have considerable expertise in scholarly communication and research workflow tools. Together they led the global survey in Innovations in Scholarly Communication. Their talk, Open Science workflows: Tools and practices to make your research more open, reproducible and efficient promised us a glimpse of the future.


One of the slides from Bianca Kramer and Jeroen Bosman’s talk

They started by showing some diagrams of research workflows – nice well-behaved cycles with boxes and arrows. But life is not so simple. To make the workflow more realistic Jeroen added cycles within cycles, put in some dead ends and some repeats. By the end it looked more like a game of snakes and ladders.

We then learnt a little about the survey of tools that Bianca and Jeroen had made, and how they had categorised different tools as:

  • Traditional (same as print era)
  • Modern (internet era)
  • Innovative (social media, collaborative aspect)
  • Experimental (new tools, developing tools, startups)

Lest we become too focused on technology and tools, though, they emphasised that research workflows are less about tools and more about work practices and people.

Bianca and Jeroen are nothing if not practical and they are happiest when considering reality instead of abstractions.  The remainder of their presentation was in the form of a workshop, an endeavour to define a set of work practices that would constitute an open science workflow, and tools that would support it.

The scale of their ambition for the workshop, and the amount of preparation they had done, astounded me. There were three stages to this exercise.


The final result of Bianca Kramer and Jeroen Bosman’s workshop

Each seat had a small paper button with the name of a research tool on it.  We were first invited to review the tool on our own seat, consider its usage and application in open research, and then to discuss this with our neighbour. Around the periphery of the room a number of research practices had been arranged, grouped according to the different stages of the research cycle. We were next asked to affix our paper button to an appropriate research practice. Finally, we were each asked to move one of the research practices that we considered to be part of an open science workflow and place it in a new area at the back of the room, together with any tool we thought would help with the practice.  This back wall filled up with open research practices and tools.

As a participant it was quite challenging to grasp all of this, and a bit mind-boggling to assimilate everything that was on the back wall. I suspect that Bianca and Jeroen, who live and breathe this stuff, had a clearer picture than anyone else in the room. It certainly stimulated us to focus on the role that tools can play in open research practices, and on what an open research practice is. I also liked the way that the whole process of the workshop was open and transparent. Everyone in the room had a sense of the task we'd been set and of how the solution to it was emerging – we could see it on the walls. This really was a great model for open research.

Bianca and Jeroen’s mixture of openness and outreach is a great combination.

You can read their own account of the workshop on their blog.

And that was Open Research London, Oct 2016. We’re planning the next one already – expect it to be Jan or Feb 2017. Hope to see you there! Find out more about Open Research London on the website or follow it on Twitter.


Rapid or vapid?

Someone recently asked me what I thought about the open access journal Molecular Metabolism. I had just delivered a short talk to a group of researchers as a reminder about our open access policy and what my team could do to help them make sure their research was published open access.

Well, I didn’t think anything about Molecular Metabolism as I had never come across it before. In case you wondered, it’s been going since 2012 and has been published monthly for the past 2 years and more, so it seems to be reasonably well-established. Over that time it’s published about 350 articles, some of them having modest citation impact but nothing earth-shattering.  It’s not exciting in any way, but it seems an entirely worthy publishing venue. It is supported by the German Center for Diabetes Research (DZD) and the Helmholtz Diabetes Centre so it has roots in the diabetes and metabolism research community. And it’s published by Elsevier as a fully OA journal.

On the journal’s website it describes itself as

a platform reporting breakthroughs from all stages of the discovery and development of novel and improved personalized medicines for obesity, diabetes and associated diseases.

As well as being an OA journal it has adopted a rapid publication model.  Its peer review instructions require reviews to be delivered in 72 hours. Reviewers are asked only to accept, reject, or suggest minor revisions.

My enquirer outlined this rapid process and asked what I thought. It seemed a good model to me. It’s not a high grade journal, so long rounds of manuscript revision would be a waste of everyone’s time. The option to reject papers is still there.

My enquirer had been invited to review an article submitted to the journal. His response was that the journal must be a scam. He viewed their rapid review policy as an invitation to reviewers to accept substandard papers. My sense was that he was also implying a Bohannonian subtext of “and all OA journals must be rubbish, innit?”.  I suspect he has the same view about PLOS ONE.

This little exchange gave me more information about the person asking the question than about the journal he enquired of. It seems a shame that in some quarters OA, and the idea of 'sound science' peer review, are still regarded with such suspicion.


Librarygeddon

The Library, the collection

When it’s done right it is a wonderful thing. The collection dedicated to meeting a specific need: carefully selected, sensibly arranged, appropriately indexed, comprehensive in its coverage and range of formats. It is precisely calibrated to meet a need. On its own it is a collection – worthy of celebration. But put the collection together with a community of users and a team of knowledgeable library staff and it turns into a Library – a boon to civilisation and scholarship.

To my way of thinking it is the community of users that comes first.  Perhaps this is librarianship’s chicken and egg situation – “Which comes first, the reader or the book?”.  Ranganathan answers the question by saying, in effect, “both”. His famous five laws of library science include these two:

Every reader their book
Every book its reader

Neil Gaiman has written thoughtfully about the value of public libraries as “cultural seed corn” and he hails the way that libraries encourage knowledge discovery:

there’s nothing quite like the glorious serendipity of finding a book you didn’t know you wanted to read. Anybody online can find a book they know they want to read, but it’s so much harder to find a book you didn’t know you want to read.

Public libraries are under great threat, even though they have many vocal defenders. Other kinds of libraries are equally under threat but don’t attract the same passionate advocates.

My career has been in special libraries, specifically what are sometimes called workplace libraries, where the raison d'être for the library is the needs of a group of people in a workplace. In special libraries it is definitely the reader who comes before the collection.

Information needs are expressed by users, and captured by the librarian, who must take actions to meet those needs. The user needs define the shape and contents of the collection, and its organisation. In an engineering library you may find a small section of books on ‘medicine’, probably all lumped together under one classification mark. In a medical library those books would need to be classified in far more detail (dividing them into finer shades of topics), but any books on ‘engineering’ would be lumped into a single section. Thus, indexing and classification in the library must be determined by the needs of the users. Classification is relative, not absolute. A creative librarian will also develop services on top of the collection to meet particular needs of the community of users.

“I have always imagined that Paradise will be a kind of library.” – Jorge Luis Borges

The library when NIMR was at Hampstead, ca 1930

Growth

The library’s collection policy is an expression of its approach to managing the collection. It defines what to acquire, what to keep, and what to discard. It may aim for homeostasis (a fixed size) or it may attempt to keep hold of everything it has ever acquired (continual expansion).  Often libraries will make use of secondary storage space, where less-frequently required items are sent. (A bit like sending your old possessions up to the loft, or into the garage). A mature library collection, with a long historical overhang in secondary storage, can be seen as a record of the organisation’s interests and development. It reflects some of the parent organisation’s history.

A library devoted to a single subject for a long period of time, e.g. a learned society library like the Linnean Society Library, becomes an unrivalled repository of knowledge about the subject.

“To build up a library is to create a life. It’s never just a random collection of books” – Carlos María Domínguez

Over the years the library’s users and their needs may change, and this will impact on the collection and services too. Of course library staff may also change, and while they will strive for continuity there will inevitably be changes in emphasis as new staff replace old staff and make their mark with their own particular quirks.

Another of Ranganathan's laws is:

The library is a growing organism

The library cannot stand still. Space limitations may mean that it cannot grow indefinitely large, but it still needs to live – to develop as user needs change and publishing trends change.  The library will discard (or relegate) no longer needed items as well as ingest new items. It must also exploit new developments in technology and the information industry, and adapt to changes in the broader environment.

“The only thing that you absolutely have to know, is the location of the library.” – Albert Einstein

The NIMR library at Mill Hill, 2011

Radical change

There should be a tight link between the collection, its users and the workplace, so that they change in harmony. But if an organisation undergoes a more radical change then the collection can come under threat. When organisations close, restructure, split, or suffer severe financial troubles, then there will be dire consequences for the library service and collection. This is the way of the world but, especially for longer-established collections, it can be a cause for some regret. Simon Barron has pointed out that sometimes there is a conflict between those who worship books regardless of use and the drive for efficient collection management. I saw a project advertised a couple of years back about the “non-economic benefits of culture“. I wonder whether it included a strand about the cultural value of libraries (of all kinds)?

A few years back I was the membership secretary for CHILL (a grouping of independent health-related libraries). These all count as special libraries, many of them relatively small. It was salutary to count up each year how many members had fallen by the wayside in the preceding 12 months. One member was a charity called Drugscope which had a fantastic information service on addiction, run by a dynamic information professional. But one year some of the charity’s key funding grants were not renewed, leaving a funding hole. There was no money to fund the information service. It just closed. Apparently it was not the only one. An editorial in Addiction in 2012 said: “Special libraries in the addiction field have been downsized or closed at an alarming rate during the past decade”.

The Royal College of Midwives also had an enviable collection and information services run by knowledgeable staff. Financial pressures led to closure. Happily the Library of the Royal College of Obstetricians and Gynaecologists stepped in to rescue the collection in this case, or it would have been lost. Sadly the knowledgeable staff were lost. When the Royal Pharmaceutical Society of Great Britain was split into a professional society and a regulatory body, this also occasioned a massive change for the library. Retaining a large historical collection was no longer economically feasible and a "lean and mean" approach was adopted instead. Large numbers of older items were disposed of.

Anecdotally I have heard that many Government department libraries have closed or drastically downsized in recent years, due to restructuring, merging or closure of departments.

Of course it’s important that libraries provide useful services in an efficient manner, but the inevitable consequence of rapid organisational change is the loss of many items of real historical interest.

This kind of loss is a global phenomenon. In 2014 it was reported that:

Scientists in Canada are up in arms over the recent closure of more than a dozen federal science libraries … Scientists fear that valuable archival information is being lost. Of most concern is the so-called grey literature — information such as conference proceedings and internal departmental reports.

We have a worldwide librarygeddon – loss or destruction of many specialist collections. Does it matter? Maybe, but probably not enough that we can do anything to change it.

Empty shelves in the NIMR Library gallery

They’re all empty now

NIMR Library heritage

My interest in this topic began in 2003, when I first learnt of plans for my Institute to be moved. The plans changed over time, but it has been blindingly evident to me right from 2003 that if the Institute moved from Mill Hill it would spell the end for our Library collections. I have got used to the idea and I can think of no valid argument why things should be any different, but I still regret that things have to be this way.

The NIMR library began in 1920 when the Institute moved into its Hampstead home. The aim of the library was to support scientific medical research at the Institute and to be a resource for the nation’s medical researchers. Work to assemble the collection had begun prior to 1920, collecting materials from around the world. Many items were received as donations, or through exchange programs. The MRC published Medical Science: abstracts and reviews and its highly-regarded Special Report Series, so it had valuable publications to offer in return for free journals and reports from other organisations. In those colonial days, reports (grey literature) were received from research organisations across the British Empire and the rest of the world.  We even used to receive The Lancet free of charge (though this changed when Elsevier bought the journal). As well as books, journals and reports, the Library has an extensive collection of pamphlets and reprints. The collection represents a unique period of British medical research, when Hampstead and later Mill Hill were at its centre.

The reprints were largely collected by individual scientists, I believe, and periodically deposited in the Library. They were all carefully catalogued and subject indexed – we have a vast rank of filing drawers filled with index cards. I think of the reprint collection as the intellectual encrustation of the Institute’s research, from the 1920s to the 1960s. Reprints have become unfashionable and few libraries are interested in taking large collections of reprints. But I think much of the value of the reprint collection is in the catalogue rather than the reprints themselves. It is a unique historical document of 20th century biomedical science.

It's a strange thing, but when scientific records/publications become old they often have less value to science, but more value to the humanities, especially history. Just by keeping hold of our science literature collections as they aged, we acquired a history collection. The trouble was, we were not funded for historical research but for scientific research. Therein lies the problem I faced. Looking back, I can see many ways that I could have handled it better, but I did the best I could at the time.

It’s a big project

The simplest thing to do would have been to just throw everything away, and for a while I was afraid that was exactly what I would have to do. Happily, I gained approval (and resources) from on high to undertake a few projects to help ensure that those parts of the collection with most value could be dispersed responsibly.

There were about 3km of shelving filled with printed materials that needed to be disposed of. Printed journal volumes accounted for much of the shelf space.  The pace of digitisation of journals is such that it is very hard to find homes for printed journals. We did find homes in other libraries for a small quantity of journals, but most of the journals were sent for recycling.

Preliminary enquiries made it very clear that few libraries had an interest in more than a small proportion of the other items. That is not surprising – libraries can only collect what matches their strategic needs, and their collection policies (vide supra). The lists that we prepared of the most desirable groups of items – the pamphlets, reports and older books – have helped us to find homes for quite a few items. The 5,000 pamphlets in particular have gone as a single block to a major library. I tweeted pictures of quite a few of the books that have been transferred to other libraries, under the hashtag #nimrlibrarybyebye. A few hundred have found homes in libraries, mostly in London but some further afield.

A few hundred books have been retained to form a historical collection in the new institute – more of this in a later post. Some others have been retained by some of the labs for their use. And a few hundred have been transferred to members of staff as mementoes of NIMR. But several thousand items will go to the secondhand trade – I hope they will eventually make their way to collectors’ shelves.

I should emphasise that most items in the NIMR Library collection are not unique – probably not even rare by the standards of rare book collectors. And not really old either. Most of the collection was 20th century.

It is not so much the items themselves that I am mourning here, but the collection of all the items. The whole is greater than the sum of its parts – the connections between all the different documents, and the context of the Institute. This notion (the value of the collection versus the value of individual items) is a bit intangible and unquantifiable; perhaps not a thing at all. That’s why, when it is all gone (quite soon now), probably no-one will notice or remember that it was there, apart from me. I will remember.

[Images: the empty stacks in the library store – about 3km of shelves now stand empty, and you can look right through them.]

Posted in Books, Collections, Libraries and librarians | 2 Comments

Paradigm – the sculpture

Recently I attended the first public event at the Francis Crick Institute’s new building next to St Pancras. Ironically, the event was not about science but was a conversation with an artist – the sculptor Conrad Shawcross, who created the enormous sculpture outside the new building. The sculpture is called Paradigm, and was inspired by the thought of the ground-breaking, paradigm-shifting science that the Crick was created for.

In conversation with Ken Arnold, Creative Director at the Wellcome Trust, Conrad revealed something about his inspirations and reasons for creating Paradigm.  By the end of the evening we had learnt something about the artist and something about the sculpture. And possibly something about science as well.

Introduction

Katie Matthews, the Crick’s Director of Engagement, introduced the event and reminded us that the Crick is all about collaboration. Hence featuring a collaboration between a science institute and a sculptor is par for the course for the Crick.

Ken Arnold continued by reminding us that we have now had 20 years of the sciart phenomenon. Wellcome’s Sciart funding programme was launched in 1996 and concluded in 2006, but lives on in spirit. Its original aim was “to fund visual arts projects which involved an artist and a scientist in collaboration to research, develop and produce work which explored contemporary biological and medical science”. Science and art are getting closer together again after a long separation.

Ken Arnold did a good job of leading the interview/conversation with Conrad Shawcross. He continually probed Conrad with questions, extracting interesting comments from the artist and generating new lines of thought. At intervals Ken opened up the floor to questions from the audience which ensured a varied pace and style of conversation and kept the flow of ideas going.

At the start Ken asked for a show of hands whether most people in the audience came to the event because they were primarily interested in science (a few) or primarily interested in art (a majority) or because they were equally interested in both (quite a few, including me). For completeness he also asked whether there was anyone not really interested in either art or science.  No-one put their hand up to that!


The sculpture

Conrad Shawcross has worked with tetrahedral shapes before – experimenting with many wooden tetrahedra and exploring how they fit together. Tetrahedra do not tessellate (i.e. they can’t stack neatly to fill space – the dihedral angle of a regular tetrahedron, about 70.5°, doesn’t divide evenly into 360°). Instead, if you join them together they become a bit chaotic, or can form a three-sided helix. He talks about this in a recent New Scientist article about another of his works.

This is the basis of Paradigm – a series of tetrahedra in which each succeeding one is 10% bigger than the one below, so that although the base is only 80 centimetres across, the top spans 5 metres. It is a twisting helix that looks balanced, but only just. This precariousness is intended as a metaphor for a scientific paradigm. An idea in science may be accepted, but it’s quite possible that a new idea will come along one day and topple the old one. Hopefully the Crick will upset a few paradigms in the future (though I think that Paradigm the sculpture will remain standing).
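As an aside, those numbers are enough for a back-of-envelope estimate of how many tetrahedra the stack must contain. Here is a minimal sketch in Python – the 10% figure and the two widths are from the talk, but the resulting count is my own rough estimate, not a figure quoted on the night:

import math

# Figures mentioned in the talk: 80 cm across at the base, 5 m at the top,
# each tetrahedron 10% bigger than the one below.
base, top, growth = 0.8, 5.0, 1.1

# Number of 10% enlargements needed to grow from the base width to the top width.
steps = math.log(top / base) / math.log(growth)
print(f"{steps:.1f} enlargements, i.e. roughly {math.ceil(steps) + 1} tetrahedra")
# prints: 19.2 enlargements, i.e. roughly 21 tetrahedra

If the actual count differs, the effective growth factor simply differs a little from 10%; the point is how quickly a modest ratio compounds over the height of the stack.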

Conrad revealed that his original plan was for the sculpture to be just 8 metres high, and in stainless steel. But Paul Nurse suggested that maybe it should be bigger, and Conrad agreed. The only trouble was that it wasn’t possible to make it that high in stainless steel (for cost reasons, I think). The solution was to use rusty iron instead, which allowed him to take the sculpture up to 14 metres. Conrad also talked about the complexity of meeting the brief of a commission such as this while also producing something artistically valuable that would work on a site in a very public space. He said that working within such constraints helps him to address problems that he wouldn’t otherwise have thought about.

The artist

Conrad Shawcross has been a sculptor for 15 years, since leaving art school. He noted that he had attended the Ruskin and Slade art schools, both situated within broad-based universities rather than specialist art institutions. He had appreciated the mixing with medics, scientists and others, and he preferred to place himself within the history of ideas rather than just in an art-history milieu.

He has always been interested in machines, and had enjoyed taking apart and fixing his old Leyland van. He was in awe of the complexity of the motor, and also enjoyed the terminology: pinion, crank and cam were all words that he found interesting.

He noted that one similarity between art and science was the way they help people to visualise things. Science lets you visualise, or create a representation of, things that are smaller than the wavelength of light – things you cannot see. Artists also visualise things that can’t be seen – things that are invisible or inconceivable.

And finally

My favourite quote from Conrad Shawcross came towards the end of the event. He said:

With any ‘good’ piece of art, the artist has to surrender control of its meaning

I like the humility of that statement, and the acknowledgement that the viewer of an artwork is an active player in the process. It also struck me as resonating with science: in a similar vein, a piece of scientific research is sent out into the world to be understood and used in ways that the original researcher cannot control. Both artists and scientists have to relinquish control of their intellectual offspring and hope that they are not toppled by a shift of taste or paradigm.

I walk past Paradigm nearly every day now. I’m not sure what meaning I put on it, but as I left the building after the event I gazed up at the sculpture lit up against the night sky and saw a beauty and elegance that I hadn’t noticed before. I’ll keep my eye on it in future, searching out meanings.

[Image: Paradigm at night. Photo by Frank Norman]

Posted in Art | Comments Off on Paradigm – the sculpture

The Lasker book prize

Well, not really.

The 2016 Lasker~Koshland Special Achievement Award in Medical Science has been given to Bruce M. Alberts for “Discoveries in DNA replication, and leadership in science and education”.

[Image: Bruce Alberts]

The citation on the Lasker Foundation website says:

In his research, Bruce M. Alberts (University of California, San Francisco) devised powerful experimental tools that helped him understand the mechanism by which cells copy DNA, thereby establishing a new paradigm of molecular machines that perform crucial physiological functions.

There is a lovely interview with Alberts in PLOS Genetics, mentioning his early ambition to solve the genetic code, and a meeting with Francis Crick.

MBoC

But I’m a librarian, and when I see the name ‘Bruce Alberts’ I immediately visualise one of the most popular books on the biology shelves: Molecular Biology of the Cell (MBoC). At Mill Hill it was one of the few books that we bought in multiple copies for the library, and you can be sure that many of the labs there also had a copy on their shelves. Despite that, it was rare to find a copy of the latest edition on the library shelves: our copies were usually on loan – officially or unofficially.

The Award citation acknowledges the importance of this book:

Aiming to share not only what he knew about biochemistry, but to teach students how to think like scientists, he teamed up with a small group of colleagues to write an innovative cell biology textbook, now in its 6th edition, that has inspired countless individuals worldwide to find joy in experimentation, discovery, and logical reasoning.

The book’s fame and influence have secured it an entry in Wikipedia:

Molecular Biology of the Cell is a cellular and molecular biology textbook … The book was first published in 1983 and is now in its sixth edition. Molecular Biology of the Cell has been described as “the most influential cell biology textbook of its time”.

I looked back at what reviewers said when the book first came out in 1983. Writing in Cell, John Cairns observed presciently:

This is a marvelous book and is going to attract a lot of attention… it is enormous and covers a vast array of subjects.

He went on to say:

The pictures are excellent, the text is straightforward and readable, and about once every ten pages we are given a summary. Facts are laid out before us most lucidly…

Perhaps the sign of the coming of age for any subject, even procaryotic molecular biology, is that the successive waves of teachers and students should no longer have to hear about the details. Perhaps this book will be seen to have signalled the end of an era and, in its second half, to have given us a taste of what is to come.

Throughout its length, the book is written in an unobtrusively lucid style, which is the mark of much tender loving care.

The fourth edition came out in 2002, and was the first to include an accompanying CDROM. Writing in Nature, Angus Lamond said of this edition:

A generation of students have learned the basics of molecular cell biology thanks in no small part to courses based on the pioneering textbook Molecular Biology of the Cell…Through three editions it has established an enviable record for high-quality presentation, with the authors showing a remarkable ability to make both basic concepts and cutting-edge research topics accessible to readers.

The new edition is even larger than its predecessors, reflecting the vigorous activity of the field and the inexorable expansion of detailed information regarding cellular processes and molecular structures and interactions. …the punctilious attention to detail and effort devoted by the authors to covering this huge field in a lucid and easy-to-read style shines through on every page.

More recently the science historian Norberto Serpente wrote an affectionate article to mark 30 years of MBoC, in which he cites a number of other reviews of the book.

He notes:

The pedagogical value of MBoC, as most reviewers agreed, was to be found in the design and quality of the illustrations, which condensed complex ideas into simple schematics, and in the clarity, consistency and emphasis on explanation achieved in its writing.

The Goodreads website page for MBoC is a rich source of ‘reviews’ of, or comments on, the book. Many of these are pithier than the quotes above, but still overwhelmingly positive. Some of my favourite comments there were:

One of the most comprehensive cell biology books that served as a great reference for the start of my biology career.

Why read the bible if you could read this instead?

I learned a lot from this book. I give it a 5 because it is a great paper weight.

My biggest problem with this text is that it is really heavy. I actually dropped it and broke two toes.

Honouring great science books

The Lasker prize has form when it comes to celebrating great books. In 2012 it was awarded to Tom Maniatis. The citation included this:

Maniatis created the quintessential Molecular Cloning manual—based on his own pioneering work—and thus spread revolutionary technologies into a multitude of laboratories across the world.

I wonder who should be next – what other scientists have combined great achievements in the lab with genuinely groundbreaking book publishing?

It’s interesting that the books by Maniatis and Alberts are both in the field of molecular biology. This field inspired a revolution in the way we approach biological problems and both books played their part in facilitating the spread of the revolution.

I’d like to nominate David Lipman for his work in developing the NCBI services, including PubMed. I’m not familiar with his work outside NCBI, but Wikipedia tells me that:

He is most well known for his work on a series of sequence similarity algorithms, starting from the Wilbur–Lipman algorithm in 1983, FASTA search in 1985, BLAST in 1990, and Gapped BLAST and PSI-BLAST in 1997

Who would you nominate?

Posted in Books | Tagged | Comments Off on The Lasker book prize

Changing horizons

Much years. So change. Sniff. 

I have left the place where I worked for the past (almost) 27 years and I have started in a new place of work. It’s the same employer that I had before, but I’m in a new environment and a new building, with many new colleagues. I will return to the old place a few more times over the next three months but then that’s it.

I’ve never been in this position before, leaving a familiar place after so long. When I left home to go to university it was exciting, with a little trepidation thrown in, but home was still going to be there to return to if I needed it. I left university with fond memories of the place and the life, but I looked forward to entering the world of work. I’ve moved jobs a couple more times since, changing county and country, with the same mixture of regrets and excitement. But none of those previous moves felt as big a deal as this week’s uprooting from one place to another.

In the end there were no tears, just an orderly frenzy of packing, tidying up, and making sure the ex-library space no longer looked like it was a library service. More sweat than tears.

[Images: our crates being taken away; my old office, emptied.]

I will lift mine eyes unto Mill Hill

I started at Mill Hill on 27 November 1989. My first ID card featured this old Polaroid photo, taken on my first day at Mill Hill.

[Image: my photo ID in 1989]

I remember that the Institute seemed full of strangeness and unfamiliar smells (particularly the pungent materials used to clean the cork floors). The Library environment was more familiar, and I quickly found my feet. On my very first day someone came along wanting an online literature search carried out (in 1989 this was a big part of my job). I was happy to find that the systems used in this Library were the same as those I’d used in my previous job, so the mechanics of getting online and carrying out a search were straightforward. Less familiar was the subject matter – I’d been used to running searches for clinicians in a hospital, and now here were researchers wanting molecular biology and immunology searches. It stretched me, but I found I could be quite elastic.

There was no internet. Email was a rarity (not to say an eccentricity). Journals were 100% printed. The online searching that I was responsible for was the most advanced service available, but CDROMs were soon to open the way to end-user searching. This led to me becoming a trainer, running search-skills sessions. I quickly found my way to JANET, and to new ways of connecting to people and information. A flood of new information was coming from the internet. This was a fascinating new challenge, and it kept me busy for a while.

The Library, on the fourth floor of the building, commanded fine views. On one side were green fields; on the other, suburbs and hills that just obscured most of central London. When I looked out of the window to the southeast I could see, peeping over the hill, the Post Office Tower and the NatWest building. To the southwest I could see the old Wembley stadium building. In between was the ridge of the North Downs, and on fine days you could see the Mole gap.

Today the new Wembley stadium building, with its big arch, is visible to the southwest, while several new towers are visible on the horizon to the southeast, evidence of London’s high-rise boom. The Mole gap is still there.

[Images: the Mole gap, where the River Mole cuts through the North Downs; the NatWest Tower, Gherkin, Walkie Talkie and others; the Shard; the Canary Wharf towers.]

But now, Lord, what do I look for? My hope is in Crick

My first day in the new building was 30 August 2016. There are no funny smells – everything is clean and the building is air-conditioned. My new photo ID leaves something to be desired, but at least there is no moustache this time.

[Image: my new photo – I look like I’ve been in the sun or had a few drinks.]

After a welcome talk and the obligatory safety and security talks, I settled in to unpacking, putting stuff away, and sorting out IT and comms. The latter are more advanced than I’d been accustomed to, and offer more flexibility.

It didn’t take long to feel at home. My immediate colleagues have moved with me from Mill Hill, but we are now embedded within a much larger group of people. I’ve met many of these new colleagues several times already, and have visited the building several times too (in various stages of completion). There’s no sudden change in my duties, so I have just picked up where I left off the previous Friday. The open-plan office design is different from what I’m used to, but I’m able to switch off from the surroundings and focus on what I need to do.

Over lunch on the first day I met two computational biologists – one I knew from Mill Hill; the other, whom I’d not met before, was from the Lincoln’s Inn Fields site. We had an interesting chat about open access, ORCID, PubMed Central and pseudo-repositories (e.g. ResearchGate). I look forward to more such conversations.

It will be a few more months before the Institute is complete – with all the research groups in the building and everything arranged as it should be. About two-thirds of my role is clear and defined; my challenge now is to define the remaining third. I need to reach up and see over the horizon again, to find the right direction for Information Services at the Crick.


Posted in Libraries and librarians | Comments Off on Changing horizons