Moon Boy

After splashdown at 4:51 pm on 24th July 1969, the Apollo 11 astronauts returning from the first moon landing had to don full-body Biological Isolation Garments before they could leave the conical command module that was bobbing in the Pacific Ocean. Having transferred to the dinghy that had come to meet them, Neil Armstrong, Buzz Aldrin and Michael Collins then had to scrub one another’s suits with diluted bleach.

From the dinghy the astronauts were helicoptered to the aircraft carrier USS Hornet and immediately sealed inside a windowed metal box, the Mobile Quarantine Facility, in which they were cocooned for the voyage back to Hawaii and the flight to the Manned Spacecraft Center in Houston. There they could finally move into the much roomier quarters of the Lunar Receiving Laboratory but were still sealed off from the outside world until August 10th, fully 21 days after Armstrong and Aldrin had quit the lunar surface. It was an ignominious end to an amazing journey.

All of these precautions were to protect the Earth’s population from any infection that might have returned with them from the moon. But NASA’s quarantine measures were only partially successful. The planet may not have succumbed to some unknown lunar disease but I certainly got infected with something, and I suspect many others of my age did too.

I have been living with the condition for over forty years now. The symptoms are largely benign — a low-level compulsiveness that means I have to take at least a passing interest in every rocket-powered foray into space. That I can cope with. But every so often there is a flare-up and I find myself yielding to a full-blown obsession. This time it was triggered by reading Carrying the Fire, Michael Collins’ account of his time as an astronaut with NASA. Once I started I couldn’t stop. It’s the oldest cliché in publishing but, just like an object in zero gravity*, the book was unputdownable.

Carrying the Fire

Of the three Apollo 11 astronauts Collins comes across as the most approachable. Armstrong was famously reclusive after his return to Earth while Aldrin suffered bouts of depression and battled with a drink problem. Though he wrote a candid account of his voyage to the moon and his struggles to re-adjust to life after the Apollo program, I found Aldrin’s book a bit of a mess. Collins is more level-headed. You get a good sense of the man from his contributions to the 2007 documentary, In the Shadow of the Moon, and the same friendly acuity comes across from the pages of his book.

Carrying the Fire gives a straightforward account of Collins’ graduation from Air Force test pilot to NASA’s astronaut program in 1963 (having failed on his first attempt). He details the preparations and experiences of his first flight into space on Gemini 10, a program that aimed — among other things — to work out docking procedures in Earth orbit. The second half of the book is devoted to the historic Apollo 11 mission. It’s not a book for the casual reader looking for a story engineered to be dramatic. There is no shortage of tension and excitement but it emerges unadorned from the narrative. Collins combines a keen eye for detail with a disarming and sometimes brutal frankness when assessing himself and his crew-mates.

For me it was access to the sheer complexity of the planning and execution, and the many unexpected twists and turns, of the Gemini and Apollo missions that gripped the most. The story has a compelling blend of unimaginable risk-taking in a hostile environment with the camaraderie and jealousies that pushed and pulled at NASA’s first spacemen. In my mind it forged a new connection to the astonishing ambition of the moonshot, an event that hugely expanded humankind’s sense of itself. Collins is at his best when recalling the strange nitty-gritty of spaceflight; he is only partially successful in conveying what the trip really meant for himself, Aldrin and Armstrong. He observes wryly that it may have been odd to assign humankind’s greatest voyage of exploration to a bunch of macho and taciturn test pilots.

But he does at least try and I am grateful for that. I wonder if my generation connects more intensely with the moon landings because we lived through them. Even though my memories are faint and immature — I was only five years old in the summer of 1969 — it is an event that still resonates with inspiration. Within not so many years all the astronauts will be dead and, not long after, those who witnessed it on TV will be gone too. When that time comes, I hope that Collins’ book might still keep the flame alive.

*Strictly, ‘zero-gravity’ is the wrong term and I should say ‘free-fall’ but frankly zero-gravity is a better descriptor and I’m sticking with it.


The horror is in the detail

I recently came across a film on YouTube called ‘Unedited footage of the bombing of Nagasaki (silent)’. It is one of the dullest and most horrific things I have ever seen. The film shows US servicemen on Tinian island performing the banal tasks needed to prepare the Fat Man bomb before it was dropped on Japan. We see men, stripped to their waists in the summer heat, desultorily spraying the bomb with a water-proofing agent to keep the innards dry during its plunge through the atmosphere; we see them load the bomb onto a trailer, cover it with a tarpaulin (to hide the casing design from prying eyes) and follow it in slow procession to the airfield; we see it lowered gently into a pit and the pilot of the B-29 Superfortress reverse his plane so that the bomb can be hoisted into the belly of the aircraft.

The film then cuts to footage of the plane gleaming in the sun on its final approach to Nagasaki. And then quickly cuts to a view of the ground beneath, where a mushroom cloud is already blooming and rising slowly into the morning sky. No details are visible on the ground so there is no sense of scale, and no notion of the hell that has just been unleashed on the unsuspecting population. Forty thousand souls obliterated in an instant and thousands more who will soon be dead through blast injuries and fire.

We know of the horror, because we have seen films — I find them almost unwatchable — of the devastation of the city and the blank, uncomprehending faces of the injured, their burnt skin detaching from their bodies like so much tissue paper.

But there is no sense of this in the film of the preparation. The men are almost casual as they go about their business. Some stand around while others work. The bomb has been signed by many of them. One wag has added the sardonic acronym ‘JANCFU’ (Joint Army and Navy Command Fuck Up), not realising, I imagine, quite how badly Fat Man was going to fuck things up. But occasionally, in the course of an ordinary day, we change the world.


Impressions of Turner

I may not know much about art but I know what I like and I like the work of Joseph Mallord William Turner — all the more so now that I have seen the Turner and the Sea exhibition at the National Maritime Museum. The artist painted a variety of landscapes over the course of a long life but is probably best known for his seascapes (though I can’t be entirely sure since I don’t know much about art). In any case those are the works that most appeal to me, an aficionado of Patrick O’Brian’s Aubrey-Maturin novels. O’Brian’s writing brings the sea vividly to life in the imagination but the Turner exhibition is a chance to revel in it splashed over canvas.

‘Splashed’ is the wrong word. Turner’s application of paint was of course supremely artful and the real delight of the works on show in Greenwich is to be able to see how his artistic vision developed.

Born in 1775, Turner showed his talent early, in his teenage years; he exhibited his first oil painting at the Royal Academy in London in 1796. Fishermen at Sea displays considerable skill at rendering the play of light on water, though I didn’t care for this early picture so very much — the colour and composition seemed too forced.

I preferred the darker mood of The Shipwreck, painted nearly ten years later. Strangely, however, if you look closely, the evident drama of the scene is somewhat undermined by the way the people holding on for dear life are depicted — the figures have the simplicity of a child’s story book. Perhaps it’s unfair to look too closely but in some ways the engravings from his Liber Studiorum seem to use his skill to better effect — they have an almost photographic immediacy.


The Shipwreck (1805) — from the Tate Collection (click link for a larger view)


That said, the exhibition is dominated by the paintings — and rightly so. Turner’s The Battle of Trafalgar, 21 October 1805 is particularly imposing; at 3.6 m x 2.6 m it is by far the largest painting in the show. Nelson’s flagship, HMS Victory, commands the scene but is surrounded by and succumbing to the bloody and smoky chaos of battle.


The Battle of Trafalgar, 21 October 1805 (painted 1824) — National Maritime Museum


It’s a complex composition; there is action in every corner and the painting repays prolonged viewing, though, truth be told, I found Stanfield’s HMS ‘Victory’ Towed into Gibraltar more moving and Pocock’s earlier and simpler painting, Battle of the First of June, more dramatic and arresting.


Battle of the First of June (1794) — National Maritime Museum


Ultimately it was Turner’s later works that impressed the most, as he strove less to render the precise form of the sea and more to capture the sense or feeling of it. One of my favourite paintings in the exhibition is Fishing Boats Bringing a Disabled Ship into Port Ruysdael, which was first shown in 1844. There is a melding of land, sea, sky and boats that is a bold attempt to convey the whole experience of being at sea.


Fishing Boats Bringing a Disabled Ship into Port Ruysdael (1844) — Tate Collection


The blurring of boundaries foreshadows the emergence of impressionism and is seen also in a work from the same period, Waves breaking against the wind. As the caption put it, the picture was an attempt to portray the endless repetition but infinite forms of the waves beating against the shore. It is a painting of mesmerising, almost puzzling beauty. It baffles me that the image breaks down completely as you move in to examine the application of paint. And what is it about that wash of yellow at the upper right that is so appealing? I’ve never seen a sky that colour and yet it still has me convinced.


Waves breaking against the wind (1840) — Tate Collection


And finally (in this mini-tour), there is Snow Storm Steam Boat off a Harbour, which is even more impressionistic. There is a clear artifice in the way that Turner’s bolder strokes bind the sea and sky together as the elements swirl around the struggling steamboat; and yet, again, it works.


Snow storm steam boat off a harbour (1842) — Tate Collection


But please don’t take my word for it, or rely too heavily on the miniature renditions included in this post for an impression of Turner’s art. Go and see the exhibition for yourself. You have until April 21st.


Unfinished Business

I’ve reached that age where my eye is drawn to the obituary column every time I open the newspaper. It hasn’t been a conscious move but, having arrived at my fiftieth year, I am increasingly aware of the hopes of youth shedding and floating away, like leaves from a tree, and find myself more often looking back over the road now travelled than peering into the future at the way ahead. I have a vague sense that there has to be an accommodation — some kind of reckoning — but I’m not sure how to achieve that. I’ve not been this way before, and I suppose that might explain my interest in the report cards of those who have.

Thinking on the beach

Youth is such a beguiling fantasy, provided of course you are lucky enough to be born into the relative comfort and safety of a developed existence and a supportive family. As you grow up and learn more of the possibilities that life has to offer, a world of potential fulfilment unfurls before you: places to visit, people to meet, books to read, ideas to wrestle with, disciplines to master, the chance to become someone, to make something of yourself.

I have travelled that road and worked hard along the way. School, university and a PhD, followed by stints in labs in France, England and America that took me to my early thirties. From there I gained a foothold as a junior lecturer at a university where, a dozen years later, I became a professor of structural biology, a position I hold today. I have published over eighty scientific papers, and lectured to my peers on three continents. I would appear to have attained my academic acme; to outsiders it must seem like the completion of a magnificent destiny.

The only trouble is, it is nothing of the sort. I have little sense of being the finished article. What I am and what I know is incomplete.

To some it may seem odd for a fully-fledged professor to make such an admission but there it is. I am not insensible of my scientific achievements, such as they are, but when I look at the whole, it is the holes that stand out. I was probably not helped by having relocated to biochemistry early in my research career after first training in physics; a certain inter-disciplinarity is no bad thing but the move has also left me feeling not quite at home in either field. I suspect in any case the sense of mastery would not have been much improved by sticking to one area of research; most scientists enclose only small plots of expertise and are wary of straying outside.

As a consequence, impostor syndrome looms large and brings with it moments of disabling dejection. Most days it’s OK. It helps that the phenomenon has a name and is widely shared. That knowledge, and the experience that comes with over a quarter of a century of scientific work and learning — failure punctuated by enough triumphs to keep the show on the road — go some way to providing the resilience to live with the never-ending feeling of never ending.

This should come as no great surprise since we cannot perceive the world except incompletely, though we may sometimes be taken in by the astonishing rate of scientific progress. As a scientist myself, I might be equipped with one of the most powerful modes of inquiry developed by humankind but am nevertheless all too aware of its limitations. Consider as an example the work involved in X-ray crystallography, the main technique that I use in my research to reveal the structure of protein molecules at a level of detail that is normally hidden from sight. It is the work of weeks or months or sometimes years to conjure crystals from our samples so that we might illuminate their innards with X-rays. The crystal splits the beam into myriad rays that are captured and recombined in the sophisticated, knowing embrace of mathematics to reveal the protein structure within. These long projects ultimately yield an object of dazzling complexity, a rich, three-dimensional representation of a molecule on a minuscule scale. Such power: we see essentially every part. The atomic construction is revealed and that enables us to understand the chemistry of its function.
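(For readers curious about that ‘embrace of mathematics’: the diffracted rays are, in essence, Fourier components of the electron density, and the structure is recovered by summing them back together in a Fourier synthesis. The one-dimensional numpy sketch below is a toy illustration only, not a real structure determination: an actual experiment records just the amplitudes of the rays, so their phases must be recovered separately, the famous ‘phase problem’, before any such synthesis is possible.)

```python
import numpy as np

# Toy 1-D 'crystal': two Gaussian 'atoms' on a periodic grid.
x = np.linspace(0.0, 1.0, 256, endpoint=False)
density = (np.exp(-((x - 0.3) ** 2) / (2 * 0.01 ** 2))
           + 0.5 * np.exp(-((x - 0.7) ** 2) / (2 * 0.02 ** 2)))

# 'Diffraction': each Fourier coefficient plays the role of one
# diffracted ray (a structure factor, with amplitude and phase).
structure_factors = np.fft.fft(density)

# Fourier synthesis: recombining all the rays recovers the density.
recovered = np.fft.ifft(structure_factors).real

assert np.allclose(recovered, density)
```

If the phases are discarded and only the amplitudes kept, the reconstruction fails completely, which is why so much of the effort in a real structure determination goes into phasing.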

And yet, seeing everything is not the whole story — one eventually learns not to be taken in by the flush of excitement that accompanies a new result. The view remains partial, illusory, problematic in so many ways.

In spite of the power of the technique, we cannot see all of the structure. The image is an average of information summed over the billions of molecules in the crystal so that parts that are moving or that differ slightly in conformation between individual molecules remain blurred and invisible, like hands waving in a dimly-lit photograph.

Moreover, the method only reveals a single structure, the particular conformation locked into place by the inter-molecular forces that hold the crystal together. This captured state, an experimental convenience, is just one that the protein might adopt as its polypeptide chain flexes and writhes in its natural state in a living organism.

And that is not all. Crystallography may be powerful but it is also an exacting technique. Typically, only samples that are extremely pure are likely to yield crystals suitable for X-ray diffraction experiments, which means that you can often only examine one molecule or fragment at a time (though occasionally you might be able to crystallise two or three that cluster together naturally to see how they interact). The purifying isolation that is the secret of success in crystallography is therefore also one of its severest shortcomings, since it is rare to be able to examine all the interactions that proteins make while playing their part in the cellular dance of life. There is a kind of uncertainty principle in operation, above and beyond the quantum scale puzzled over by Heisenberg: the harder you try to pin something down for examination, the more the world in which it operates is expelled from the experiment.

And so, though replete with information, we are forced to realise that the big picture is mostly empty. We might have coloured in one isolated patch with our analysis but spent so long working out the shading, focusing on each atom as the glorious structure was gradually revealed, that only when we step back do we realise that the molecule that had loomed so large in our mind’s eye is but a speck on a gigantic canvas of white. Disappointment hacks into the sense of achievement. Sometimes the easiest escape is to move on, to distract yourself with the next experiment, the next structure.

Of course sometimes a story emerges, piece by piece, which is more satisfying; one experiment leads to a new insight, to another experiment or line of inquiry that connects parts of the world and makes sense of it in ways that could not be seen before.

But there are only so many stories, or in my case, molecules, that can be worked on in a lifetime. Like most scientists, I don’t have a grand achievement I can call my own, though I have had smaller moments of satisfaction. I think of work in the 1990s that uncovered a role for the arcanely named 135S particle in the pathway of poliovirus infection; of a decade of experiments to probe how the serum albumin in our blood plays catch and release to shepherd all manner of molecules, from drugs to fats, around the body; or of our ongoing examination of the protease from foot-and-mouth disease virus that cuts and shapes viral proteins in each round of infection, and might yet be exploited in the fight against this alarming pathogen. Following the threads of these investigations from one clue to the next has certainly been gratifying but I have never had that sense of reaching a conclusion. Some of these stories have ended for me but only because I moved labs or ran out of funds or arrived at a question that was beyond the reach of current techniques, not because the investigation was complete.

The scientist’s lot is partial, incomplete. The answer is never the end and the puzzle at hand has to be enough. There is consolation in the notion that our published work confers some level of recognition and even perhaps a deceptive hint of immortality. My papers are doing reasonable business at the moment — people are reading and citing them — but most (all?) will eventually fade as the tides of time wash in new observations and ideas to divert attention elsewhere. Who knows when the stories that I joined for a time will be finished?

Cloudy sky and shining sea 2 (BW)

Of course, none of this is exclusive to my particular branch of science or even to scientists. A sense of incompleteness surely affects us all, to greater or lesser extents, though the noisy colour of life blinds us for much of the time to the empty spaces in our comprehension of this world.

How can we truly know where we are when our knowledge of the world is so incomplete? And what are we supposed to make of our lives? In the midst of so many unknowns, it is easy to lose your bearings; or rather, it can be difficult to find them in the first place. These are obvious problems of course — they are the very stuff of philosophy and religion and have been tackled by many finer minds throughout human history. I think of Socrates (’The unexamined life is not worth living.’), of Montaigne (‘Rejoice in the things that are present; all else is beyond thee.’), of Camus (‘One must imagine Sisyphus happy.’) and even of Pirsig’s musings on motorcycle maintenance. My own meandering reflections here were prompted by a review of Oppenheimer’s biography, which suggested that, for all his extraordinary learning and accomplishments, he ‘never figured himself out’. Has anyone?

Questions of meaning and purpose are obscured early in life by the certainties and norms passed on to us by parents and teachers and other cultural authorities. Through our early years we are immersed in ‘the ways of the world’ and will accept them until we learn to think for ourselves. Even then, the problem of incompleteness can easily be deferred by the optimism (or is it the arrogance?) of youth, which imagines itself to be only starting out on the learning curve of life and to have all the time in the world.

But I now know that I won’t meet all the people, read all the books or discover all the ways of thinking that I thought I would get around to in the span of a normal life. (That word ’normal’ — such a charlatan.) And I sense that even close examination will not succeed in all its hopes, for no experiment tells the whole story, however hard you try. How, therefore, to live in a world that we understand but little? Through force of habit and in the companionship of family and friends, I do well enough most days but then I open the paper and another obituary calls me to account.


The Schekman Manoeuvre

This is the original version (with the original title) of an article that has been published at The Conversation

Having climbed all the way to the Nobel prize on a ladder made of Nature, Science and Cell papers, biologist Randy Schekman has turned around and declared that he is going to boycott these ‘luxury’ journals in future because of the way that they damage science.

When asked for an opinion about Schekman’s announcement a few hours before it was published online, I replied that it would be easy to carp about the position he has adopted. And so it has proved to be.

Schekman has been branded a hypocrite since he owes much of his success to the journals he has denounced and because, in his tenure as editor at PNAS (Proceedings of the National Academy of Sciences), another relatively prestigious journal, he exhibited many of the foibles that he finds in the editors at Nature, Science and Cell. His clear conflict of interest as editor-in-chief at eLife, a new online journal that has deliberately set out to become a direct competitor of the top-tier journals, has not gone unnoticed. There have been hard questions too about the knock-on effects for the junior members of his lab, who have yet to make their way in a scientific culture that remains firmly in thrall to publication in ‘luxury’ journals. He has been criticised for conflating too simplistically the issues of filtering (or selectivity) and the push for open access, which aims to make the research literature freely available to all. And some have pointed out that a boycott, even if it resulted in the closure of Nature, Science and Cell, would simply shift the problem somewhere else because of the competition inherent in a world peopled by egotistical scientists striving for finite resources.

Well, yes, yes, yes, yes and yes. All of these accusations have merit and demand good answers.

But the issue is not quite as simple as many of Schekman’s critics would have it, nor, to be fair, as Schekman laid it out in his brief piece in the Guardian.

Too few commenters — with a few notable exceptions such as Michael Eisen — have been willing to see Schekman’s announcement for what it is: a brilliantly orchestrated publicity stunt, timed for the week of his Nobel award when the full glare of the world’s media would be upon him. Nor have they been willing to give Schekman much credit for having thought the issue through beforehand. This isn’t the first time he has written about the problems due to journal prestige; and he has been putting his money where his mouth is, both by committing time to eLife (which, despite aspiring to be a top tier journal, has a declared policy of not promoting the impact factor metric) and by being an early signatory of the San Francisco Declaration on Research Assessment (DORA), which is aiming to neutralise the poisonous effect of impact factors on science and scientific careers.

Instead there has been the repeated assertion that it’s easy for a Nobelist to disregard the top tier journals. But if it’s so easy, why is Schekman the first to make such a stand? My guess is he anticipated all the brickbats that have been flung his way in the past week but had the nerve to go ahead anyway.

Nevertheless, he could do more and I expect that Schekman himself sees his recent declaration as simply a milestone on what has to be a long journey, given the abiding nature of the problem (which I have discussed in more detail elsewhere).

He could certainly have written a much better article than the one that was published in the Guardian, so I hope he will take time to respond to critics and to lay out his argument more carefully. In particular, I would like to know how he talked through the move with his group and how he aims to mitigate the risks to their careers.

Philip Campbell, Editor of Nature, issued a dignified response to the boycott announcement, pointing out his journal’s long-standing relationship with the scientific community and, rightly, that the obsession with luxury journals is largely a problem created and sustained by that community. Campbell was also correct to remind people that Nature has done a good job of drawing attention to the issue of impact factors in its editorials and reporting.

But I think journals such as Nature can do more. It is not sufficient to lay the problem at the feet of the research community when journals are part of that community. Or to shrug off advertisements of their impact factors when pitching for authors or readers as the work of the marketing department. I would like to see ‘luxury’ journals take a leaf out of the book of another Nature-branded title, Nature Materials, which has in the past revealed the detail of the citation data behind the single-figure averages that are trumpeted each year when the new impact factors are published by Thomson Reuters. That practice should become standard as it would help to demystify the allure of this quality proxy.

I would also like to understand the reasons why Nature, unlike Schekman, has not signed up to DORA, which represents a serious and determined move to bring in the culture change that Nature says it supports. Campbell has only said of DORA that “the draft statement contained many specific elements, some of which were too sweeping for me or my colleagues to sign up to”. It would be helpful to know more about Nature’s objections, to explore ways around them.

Of course, ultimately, it is the research community that has to act. Schekman has made a bold manoeuvre that has stirred up the issue afresh, and for that I tip my hat to him. But I hope he’s not now done with this.



The Conversation


Why Elsevier is completely in the right… and totally wrong

The internet was all aflutter last week because Elsevier has sent thousands of take-down notices to, a social networking site where many researchers post and share their published papers. This marks a significant change of tack for Elsevier. Previously the publisher had been sending only a handful of DMCA notices a week to (the notices are named after the US Digital Millennium Copyright Act), but now it appears they have decided to get tough.

There was the predictable outrage at the manoeuvre though, as several commentators acknowledged, Elsevier is acting entirely legally. It is simply enforcing rights that were handed to it — for no compensation — by the authors who have now been affected by the takedown demands.

The company is behaving rationally. Why wouldn’t they take all reasonable steps to protect their business?

The problem, and it is a fundamental one for legacy publishers as a whole, is that what seems reasonable in this market is changing. Elsevier and other companies who cleave to the subscription model of academic publishing are slowly being overwhelmed by the tide of events. They may have won a temporary victory in asserting their rights but the almost wholly negative reaction to the move suggests that they have scored another PR own goal. One of the affected authors, Guy Leonard, has made it clear that he will do what he can to avoid Elsevier in future. And although I actually have some sympathy with many who work for the company because, despite its size, it is struggling to cope with forces that are even bigger, I too have resolved — perhaps belatedly in some eyes — to sign the Elsevier boycott, putting them on public notice that I will refrain from refereeing and editorial work*.

There is a sense that the company knows the ground is shifting beneath its feet. Its response to the adverse reaction among scholars lacked conviction. Tom Reller, Elsevier’s VP for global corporate relations, explained the move in terms of protecting the discoverability of the papers they publish and the credit that should accrue to authors. He told the Chronicle of Higher Education in an email:

“We aim to ensure that the final published version of an article is readily discoverable and citable via the journal itself in order to maximize the usage metrics and credit for our authors, and to protect the quality and integrity of the scientific record. The formal publications on our platforms also give researchers better tools and links, for example to data.”

None of these claims stands much scrutiny. Usage statistics could easily be accumulated for views of a paper on; separate hosting of the paper might even increase visibility and so, eventually, citation credit for authors; there is no threat to the integrity of the scientific record (especially if the final published version is uploaded) and, if authors or publishers are producing papers that don’t contain clear links to the underlying data, then neither is doing their job properly.

At the end of the day, the motive seems to me to be one of profit protection — a completely understandable one for a commercial concern but not one that is in the wider public interest. I know of no hard evidence that the sharing of papers on has led to the cancellation of journal subscriptions, though I imagine that is the long-term fear of publishers: if repositories get too good, Elsevier and co. will lose income. Hence their ongoing demands for long embargoes for papers uploaded by authors to institutional repositories.

There will in future be good money for those companies that can provide a quality publishing service but the pickings are not likely to be as rich as at present. That may be a disturbing prospect for some publishing companies but it is good news for the largely publicly-funded research sector.

Change is coming. And just how far and wide those changes are likely to be was made clear to me when I attended the Berlin Open Access meeting in late November. I was not able to go to the Satellite conference for early-career researchers the day before the main meeting — the existence of which is testament to the appeal of open access to a whole new generation — but was nonetheless impressed and inspired by the commitment and innovation on show from students, scholars, librarians, publishers and even politicians.

Brandenburg Gate, Berlin

The meeting kicked off with the politicos. David Willetts, the UK minister for universities and science, was delayed (see below for his contribution) but we had presentations from Georg Schütte, from the German ministry for education and research, and Roger Genet, Director General for Research and Innovation at the French Ministry of Higher Education and Research, who both spoke of the importance of making progress on open access (OA), while acknowledging that there is still some way to travel. Schütte was hopeful that there would be a commitment to further policy developments in OA in the coalition treaty presently being hammered out following the elections earlier in the autumn. One has to be careful about speeches made by politicians — they are often too ready to tell audiences what they want to hear — but it was surely significant that the governments of France, Germany and the UK sent representatives to the meeting.

A broader international perspective was given by Carl-Christian Buhr of the European Commission, relaying the words of Vice-President Neelie Kroes (who was indisposed by a foot injury). He emphasised the inherently open nature of science and the need to do justice for the taxpayer, a sentiment that has also motivated UK policy. Buhr was also at pains to point out the necessary internationalism of open access. He argued that this makes the project more efficient and drew a useful parallel with the cooperation needed between nations to share the costs of expensive research facilities, such as CERN’s Large Hadron Collider.

Taxpayers were also mentioned by Heather Joseph of SPARC, who presented a useful review of policy developments in the US, from the rapid mutation of an initially voluntary OA policy at the National Institutes of Health to a mandatory one, which has greatly improved compliance. She was gratified by the language used by John Holdren, President Obama’s senior scientific adviser, at the time of the announcement of the White House directive to extend the NIH policy to other federal research agencies: “citizens deserve easy access to the results of research their tax dollars have paid for.” The battle in the US is by no means over; embargoes are still in place and the directive leaves room for some agencies to argue the case for delays to open access publication that are even longer than the 12 months permitted by the NIH. The 12-month period was seen initially as a temporary compromise needed to get the policy in place but moves to shorten it have yet to bear any significant fruit.

Technology writer Glyn Moody then took the stage to give a rousing and informative presentation titled “Half a Revolution”. He has summarised it himself (and uploaded his slides) and I urge you to take a look. Glyn presented a pithy overview of various strands of the open source and open access movements but made telling points about the worldwide success of Linux — an operating system that was developed for free and now runs most of the world’s supercomputers and mobile phones, and has led to the creation of numerous profitable companies. A particularly instructive example for any publisher wishing to take note was his mention of Red Hat, a billion-dollar company that packages and sells software that is available on the internet for free. That’s not the only thing that Red Hat does, of course — it’s a bit more complicated than that, to quote Ben Goldacre — but it is clearly possible to do good business selling stuff that is free; the trick is to think through the service offering.

The next speaker, Ulrich Pöschl, a chemist at the Max Planck Institute, had clearly thought about the service offering when setting up a new open access journal, Atmospheric Chemistry and Physics (ACP). Dismissing the noise generated by Bohannon’s investigation into peer review at predatory open access journals as ‘a side issue’, Pöschl outlined how his journal has established a reputation for rigour by pioneering a model of multi-stage open peer review that just works. You can read the details for yourself in his recent review article in Frontiers but the key is that the openness and integrity of the process have created a virtuous circle that has enhanced the quality of the submitted manuscripts and the reviews. ACP is an open access journal with a rejection rate of only 15% but is one of the top-ranked journals in its field. Its article processing charges (APCs) are reasonably competitive at around 1,000 euros but the journal still makes a surplus healthy enough to allow it to offer waivers to authors without funding for OA publication. The real value in this innovation is that it demonstrates how open access can be made to work; in particular, it lights a way that learned societies might want to consider seriously.

Innovation of another sort from the university sector was in evidence in the talk from Bernard Rentier, rector of the University of Liège. On assuming the role he was surprised to find that the university had no way of recording its outputs. He was also determined to increase the visibility and use made of the research done in Liège.

So he set about creating a repository in which all staff — with no exceptions — must deposit their published works. He got around the problem of weasel words from publishers such as Elsevier, who permit authors to freely upload to repositories as long as they are not mandated to do so (!) (a provision that his legal team tells him has no basis in law), by not having a mandate. Instead, staff have been told that only publications in the repository will be considered when they apply for promotion. The success of this approach has been remarkable, at least on the local level. Rentier has won over his staff, not just with the stick of a threat to promotion opportunities, but also by making sure the repository provided plenty of technical support for staff and by assiduous communication with them about the benefits of the policy. His staff can see that for themselves now, and are often pleasantly surprised to discover just how many times their work is being accessed from the Liège repository (where, naturally, you can also access Rentier’s presentation).


Curiously, Rentier presented data to show that deposited work that was subject to an embargo was accessed far less often — by a factor of 20 to 30 — even after the embargo had been lifted, when compared to papers subject to no delay in access. This was a striking demonstration of the harmful effects of embargoes and will no doubt be weighing on the minds of researchers at Liège. Disappointingly and somewhat bafflingly, Rentier’s bold vision has yet to take hold at other Belgian universities, which have implemented only watered-down versions of his scheme.

To round off the first day, Mike Taylor took the stage to deliver an ideological cri de coeur for open access. I’ll leave it to Mike to give you the details but the take home message is that open access is not just about the money, though one shouldn’t forget about the opportunity costs of toll access. Most memorably for me, he quoted Cameron Neylon’s key insight that “the internet doesn’t just change how well we can do things, it qualitatively changes what we can do.”

On the second day of the meeting the perspectives broadened out to encompass viewpoints from all over the world. We heard from Sely Costa about scholarly publishing in Brazil, which has a much stronger tradition of university-based journals and was the originator of the successful Scientific Electronic Library Online (SciELO); from Marin Dacos at OpenEdition, a publishing innovation that aims to promote open access publishing of books; from Robert Darnton of Harvard, who told us about the Digital Public Library of America, which is making the collections stored in America’s libraries, archives and museums freely available to the world; from medical student Daniel Mutonga, who is taking the open access initiative in Kenya, where lack of access is as much a problem for education as for research; and from Xiaolin Zhang of the Chinese Academy of Sciences, who told us that even Premier Wen Jiabao gets open access, declaring in 2012 that (to paraphrase) publicly-funded science should be openly accessible to the whole of society.

This last presentation was perhaps the most significant, given the rapid growth of Chinese science, and particularly since Zhang went out of his way to emphasise the extent of Chinese engagement in open access, telling us that China aims “to contribute to the open sharing of research results worldwide, not taking a free ride”. Of course, actions speak louder than words but Zhang, who freely acknowledged that there were still some in his government and research community who had to be won over, was impressively candid.


At the end of that morning session David Willetts finally arrived and began by graciously thanking his hosts in their own language. Switching to English, he talked up the UK’s gold-preferring policy and raised not a few eyebrows in the room by recommending the Publishers Association decision tree as a useful tool for guiding researchers through the complexity of open access publishing choices. However, he did also acknowledge, albeit implicitly rather than explicitly, that the Finch review (Finch II) had conceded the point on green OA being a useful alternative to gold publishing routes. His most interesting comments, as far as I was concerned, related to the role of hybrid OA (payment of APCs to make articles OA in journals that still charge a subscription). Willetts went so far as to say that users could “decline to pay” APCs for hybrid if they felt that double-dipping was occurring. Could it be that he has taken on board some of the criticisms made in the House of Commons select committee report on open access, which echoed the view that hybrid OA is increasingly seen as a failed experiment? If publishers want to convince the community that hybrid works, they really have to show us the benefit — specifically, how it might function as a mechanism to fund the conversion of journals from the subscription model to being fully OA; so far, they have singularly failed to do so. This is an issue that I think needs to be a focal point of the RCUK policy review in 2014. In any case, I am sure the import of Willetts’ words was not lost on the librarians in the room.

In questions, Willetts earned himself further credit with the audience by declaring Bernard Rentier’s innovations at Liège to be “ingenious”.

And there I will have to leave you. I am afraid I am skipping over the final few presentations, from the ERC’s Nicholas Canny, Manfred Laubichler, Nick Shockey (and his OA Button friends, David Carroll and Joseph McArthur, whom I have written about in the Guardian) and John Willinsky; not to mention the final discussion of all things open between Robert Schlögl, PLOS’s Cameron Neylon and the Max Planck’s Jürgen Renn. By this stage I was reaching saturation. I shouldn’t complain; it’s just that the fare had been so rich up to that point that I struggled to absorb any more.

It was an exciting meeting, in many ways inspirational. Ideas and ideals coming from all corners of the globe united in the common goal of changing the way we share our research so that all might benefit. It is this changing world that some old-school publishers are struggling to adapt to, not realising that take-down notices are turning into relics, like toll-booths left derelict when the new highway came through.

Yes, the meeting might have been something of a gathering of the faithful but if we don’t believe in open access then it will never happen and it was invigorating to hear of so many developments. Of course, the journey is far from over. But if you think that the changes in train for scholarly publishing are too ambitious, or too careless of the practicalities and problems, or that they over-estimate the capacity of humans to cooperate for a good that is beyond their immediate self-interest, then please reflect on this: the evening reception at the end of the first day took place at the Bode Museum in the heart of Berlin and featured an invited lecture by Haim Gertner about Yad Vashem, an Israeli project to digitise and make available the photographs and mementos of all those killed by the Nazis in the Holocaust, to remind us of the horror and to honour their memory by ensuring, as he put it in the title of the talk, that ‘Their Place in History is in the Future’. The things that people can do when they work together are deeply, deeply impressive.


*I cannot commit to refraining from publishing in Elsevier journals since that decision has to be made in consultation with group members and collaborators, though I can always argue the case for alternatives. 


Posted in Open Access | 3 Comments

Things to know about policy, science and the public

There has been a flurry of articles of late listing important things that scientists, politicians and the public should know about each other. I am logging them here because I enjoyed each of the pieces and think it likely that I will want to consult them in future.

First to appear was the piece by William Sutherland, David Spiegelhalter and Mark Burgman that was published in Nature on 20th November — Policy: Twenty tips for interpreting scientific claims. Their list mutated mysteriously into the Top 20 things politicians need to know about science when reported in the Guardian.

In reply a couple of weeks later Chris Tyler, the director of the Parliamentary Office of Science and Technology and someone deeply involved with policy makers, listed his Top 20 things scientists need to know about policy-making.

That wasn’t the end of it. Just two days after Tyler, Roland Jackson of Sciencewise, a body devoted to fostering public discourse about science policy, sought to remind both scientists and policy makers about the general public by itemising 12 things policy-makers and scientists should know about the public.

And that is the end for now. I did think momentarily of summarising the contents of these articles using a carefully constructed Venn diagram but then recovered my senses. Nevertheless I recommend each of them as a worthwhile read for anyone interested in the intersection of science, people and policy.

Posted in Science & Politics | 1 Comment

The very interesting web of connections

The Royal Institution has made a rather lovely film about William and Lawrence Bragg, the father and son Nobel laureates who came up with the method of structural analysis by X-ray crystallography around 100 years ago. The film is constructed around an interview with Lawrence Bragg’s daughter Patience, a delightful lady who has very fond memories of her father and some wonderful stories about him.

I’m in there too, talking about the nuts and bolts of crystallography. You will see that I am sitting beside a model of lysozyme, one of the first protein molecules to be analysed by crystallography, a feat performed under the watchful eye of Lawrence Bragg at the Royal Institution in the latter years of his life. It’s nice to be sharing the screen with Patience, who has such a deep personal connection to the founders of my field. Unfortunately Patience and I didn’t have any scenes together but I was very glad to meet her and make a connection when she attended my lecture on crystallography at the RI last month.

It is odd to think that I would never have been involved in that film or given that lecture if I hadn’t started blogging about science just over five years ago. Through blogging I have made many interesting and unexpected connections with people and ideas. Meeting Lawrence Bragg’s daughter in October was just the latest in a long line.

Patience Thomson. Still from the RI film.

Until this week, that is, because another fascinating connection snapped together on Tuesday with the publication of a paper on a new method in crystallography. I’ve already written about this paper and was keen to do so because it reports such a clever technical development, but I have realised that my account missed a trick.

Standard X-ray crystallography, as pioneered by the Braggs, relies on the ability of crystals to scatter or diffract X-rays. Diffraction is a quintessentially wave phenomenon; as a beam of X-rays passes through the crystal, every atom within is stimulated to radiate waves of X-rays in all directions. But re-radiated X-ray waves only emerge from the crystals in very distinct directions, giving rise to a pattern of diffraction spots that can be interpreted to figure out the molecular structure within the crystal. The spotted form of diffraction is due to the particular ways that the waves of X-rays radiated from each atom in the structure interfere — or add up — with one another as their crests and troughs criss-cross through space and time.
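The condition that picks out those distinct directions is the Braggs’ own law, nλ = 2d sin θ: waves reflected from parallel planes of atoms a distance d apart reinforce only at angles where their path difference is a whole number of wavelengths. As a minimal sketch — the wavelength and plane spacing below are typical textbook values, not figures from the paper — a few lines of Python recover the diffraction angle:

```python
import math

def bragg_angle_deg(wavelength_nm: float, d_spacing_nm: float, order: int = 1) -> float:
    """Return the Bragg angle theta (in degrees) satisfying n*lambda = 2*d*sin(theta)."""
    s = order * wavelength_nm / (2 * d_spacing_nm)
    if s > 1:
        # n*lambda > 2d: no angle can satisfy the condition, so no diffraction
        raise ValueError("No diffraction possible for this order and spacing")
    return math.degrees(math.asin(s))

# Copper K-alpha X-rays (0.154 nm) scattering from planes spaced 0.5 nm apart
theta = bragg_angle_deg(0.154, 0.5)
print(round(theta, 1))
```

For these numbers the first-order angle comes out at roughly nine degrees; wider plane spacings diffract at shallower angles, which is how the positions of the spots encode the dimensions of the lattice inside the crystal.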

The new paper by Gonen and colleagues uses beams of electrons instead of X-rays to do crystallography, which at first sight seems an odd switch to make since electrons are particles, not waves. Their particulate nature was shown by their discoverer, JJ Thomson, an achievement for which he was awarded the Nobel prize for physics in 1906. But in a nice twist of family history, Thomson’s son George subsequently found that beams of electrons can also behave as waves and showed this elegantly by passing a beam of electrons through gold crystals (in gold foil) and recording the diffraction pattern. This was one of the powerful demonstrations of wave-particle duality that is at the heart of quantum mechanics. In his turn, George was awarded the Nobel prize for physics in 1937.
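The wave behaviour that George Thomson exploited follows from de Broglie’s relation λ = h/p: an electron’s wavelength is Planck’s constant divided by its momentum. A minimal sketch — using the non-relativistic formula, with a 100 V accelerating voltage chosen purely for illustration — shows why crystals diffract electrons at all: their wavelengths are comparable to the spacing between atoms.

```python
import math

# Physical constants in SI units
H = 6.626e-34         # Planck's constant, J*s
M_E = 9.109e-31       # electron rest mass, kg
E_CHARGE = 1.602e-19  # elementary charge, C

def electron_wavelength_nm(accelerating_voltage: float) -> float:
    """Non-relativistic de Broglie wavelength, lambda = h / sqrt(2*m*e*V), in nm."""
    momentum = math.sqrt(2 * M_E * E_CHARGE * accelerating_voltage)
    return H / momentum * 1e9

# Electrons accelerated through ~100 V have wavelengths near atomic spacings
print(round(electron_wavelength_nm(100), 3))
```

The result is on the order of a tenth of a nanometre, right in the range of interatomic distances in a gold crystal; higher voltages give shorter wavelengths, which is what makes electron beams so useful for probing fine structure.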

The marriage of the work of the Thomsons and the Braggs comes to new fruition in the work reported this week, which has shown that electron diffraction can now also be used to determine the structures of protein molecules from the tiniest of crystals. Rather nicely, Gonen’s team determined the structure of lysozyme again to show off the power of the method. I am sure that this new result will delight Bragg’s daughter, Patience; doubly so because she is married to David, the son of George Thomson.


Posted in History of Science, Protein Crystallography | Comments Off

Open Access Headaches

Tense, nervous headache? Feelings of confusion? Mood swings from warm optimism to a gnawing sense of futility? You’ve been reading about open access again, haven’t you? I know because I have and I recognise the symptoms. 

Open access week came and went in the latter part of October and brought with it a plethora of events, publications and blogposts. The worldwide verbiage on this topic increased once again. I hope that some new people might have gained an introduction to open access — that seemed to be the case at the event I attended at Queen’s University in Belfast — but the burst of activity also carries risks because more words on the topic don’t necessarily lead to greater understanding or engagement. 

I came across this myself in the past week when my attention was directed to a post at the blog hosted by the journal Cell Reports. The blogpost sought to clarify the open access choices for Cell Reports’ authors, in particular their licensing options. There are two — the less restrictive CC-BY, which allows others to “distribute, remix, tweak, and build upon your work, even commercially, as long as they credit you for the original creation”, and the more restrictive CC-BY-NC-ND, which prevents commercial re-use (NC = non-commercial) and any modification of the published work (ND = no derivatives).

The post goes on to note that, given these choices, most authors opt for the more restrictive licence. Nature Publishing Group has reported similar findings, suggesting that open access still has to win some hearts and minds. The comment thread is illuminating on this point and worth reading. 

But what particularly confused me in the Cell Reports blog post was the claim that “even that most restrictive license grants full open access”. This struck me as odd. It is decidedly out of kilter with the definition of open access enunciated in the 2003 Berlin declaration:

“a free, irrevocable, worldwide, right of access to, and a license to copy, use, distribute, transmit and display the work publicly and to make and distribute derivative works, in any digital medium for any responsible purpose, subject to proper attribution of authorship.”

That said, the phrasing of the Berlin declaration is clearer and more robust than the original Budapest statement from 2002:

By “open access” to this literature, we mean its free availability on the public internet, permitting any users to read, download, copy, distribute, print, search, or link to the full texts of these articles, crawl them for indexing, pass them as data to software, or use them for any other lawful purpose, without financial, legal, or technical barriers other than those inseparable from gaining access to the internet itself. The only constraint on reproduction and distribution, and the only role for copyright in this domain, should be to give authors control over the integrity of their work and the right to be properly acknowledged and cited.

The idea of authors retaining control “over the integrity of their work” could be mis-construed as preventing the creation of derivative works but I understand (from Mark Patterson at eLife) that it is more to do with upholding the original meaning or intent of the work. That interpretation becomes clearer in the text of the BOAI 10 recommendations published in 2012 on the 10th anniversary of the Budapest declaration. With regard to licensing and re-use, section 2.1 states:

We recommend CC-BY or an equivalent license as the optimal license for the publication, distribution, use, and reuse of scholarly work.

The BOAI 10 text explicitly recognises that some variants of open access are more open than others but recommends a pragmatic, can-do approach:

In developing strategy and setting priorities, we recognize that gratis access is better than priced access, libre access is better than gratis access, and libre under CC-BY or the equivalent is better than libre under more restrictive open licenses. We should achieve what we can when we can. We should not delay achieving gratis in order to achieve libre, and we should not stop with gratis when we can achieve libre.

In the light of these published statements, Cell Reports’ assertion that a CC-BY-NC-ND license corresponds with ‘full open access’ is, unfortunately, likely only to confuse authors and to add to the feeling that open access is just too complicated to bother with. I can’t quite bring myself to censure the journal for this confusion since it took me some effort to clarify the matter in my own mind. You would think after all I have written on this topic that I would already be familiar with the details of OA and CC licenses. But as I keep saying, it’s complicated. 

Fortunately, it doesn’t have to be this way. In my quest for clarification on the licensing issue, Mark Patterson kindly pointed me to a website created by Peter Suber that provides an authoritative overview of all the key issues surrounding open access. This will be my remedy in future if I ever suffer further bouts of confusion and I recommend it to you also.



Posted in Open Access | 6 Comments

Impact factors are clouding our judgement

Nature has an interesting news feature this week on impact factors. Eugenie Samuel Reich’s article — part of a special supplement covering various aspects of the rather ill-defined notion of impact — explores whether publication in journals such as Nature or Science is a game-changer for scientific careers.

The widely-held assumption is that it is. And the stories from the young scientists interviewed by Reich*, who had almost all published in Nature or Science or Cell back in 2010, would appear to confirm it. Their papers in prestige journals won jobs or grants or opened doors to clinical trials that had previously been shut. Or at least that’s what they assume or believe or feel; no-one can quite be sure because the rules of the game are unspoken and unwritten.

Nature Cover - Oct 17, 2013


But the trouble is that these unofficial rules appear nigh on universal. I was certainly mindful of them when I embarked on my research career more than twenty years ago (as I mentioned in my contribution to this week’s Nature Podcast). Regular visitors to this blog will be aware that since then I have modified my views and now see the excessive influence of impact factors as a kind of addiction for which the scientific community needs to find a cure.

It can be a hard argument to pitch because the culture of dependence is so embedded. The lure of high-impact journals is strong and underpinned by some rational motivations. As noted by Finnish scientist Annele Virtanen, one of Reich’s interviewees, the competition for publication in Nature or Science acts as a spur for scientists to be ambitious in their research. No bad thing of course, but the trouble sets in when rewards are tied too closely to the particular achievement of a Nature or Science paper.

While it is certainly true that many of the papers published in these journals are of very high quality — and that on average they garner more citations than papers in journals focused on particular disciplines — it is too often forgotten that the impact factor not only disguises the very real variation in the performance of papers within any one title but also flatters them, because its dubious method of calculation skews the measure of average performance significantly towards the higher end of the distribution.
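The flattery is easy to demonstrate. Citation counts are heavily skewed, so an arithmetic mean — which is essentially what an impact factor is — sits well above what the typical paper achieves. A minimal sketch with an invented, but realistically long-tailed, set of per-paper citation counts makes the point:

```python
import statistics

# A hypothetical journal's per-paper citation counts: most papers are cited
# modestly, a handful are cited heavily -- the long-tailed shape typical of
# real citation distributions (these numbers are invented for illustration).
citations = [0, 1, 1, 2, 2, 3, 3, 4, 5, 6, 8, 12, 40, 95, 250]

mean = statistics.mean(citations)      # what an impact-factor-style average reports
median = statistics.median(citations)  # what the typical paper actually receives

print(mean, median)
```

Here the mean is 28.8 while the median paper collected just 4 citations: quoting the average alone flatters most of the papers in the list, which is why publishing the full citation distribution, rather than one number, gives a truer picture.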

The scientific community too often overlooks the granularity of the data and thinks only in terms of impact factors. Thus everyone who gains entry to Nature or Science wins an impact factor prize — irrespective of the actual significance of their work; and of course, those who narrowly fail to make the grade (in an assessment process that is highly stochastic) lose out. Virtanen, the recent beneficiary of this system, says “I can’t see so many bad sides” — a common enough view. But would she prefer to trust her future prospects to the uncertainty of getting her next paper through the narrow doors of the very top journals or would it be better to be able to rely on a system that does a more rigorous and fairer job of assessing what she has actually done?

There are positive moves in this direction. Sandra Schmid, head of cell biology at the University of Texas Southwestern Medical Center, has recently taken steps to move away from the over-reliance on impact factors in her hiring procedures. I am pushing for similar revisions at my own institution. These moves chime with the recent San Francisco Declaration on Research Assessment, which hopes to encourage all stakeholders in the scientific process — universities, funders, publishers and learned societies — to revise and improve their methods of assessment, specifically to eliminate the unhealthy lure of the impact factor. In the UK, the Wellcome Trust has long had a stated policy of not considering where applicants’ work is published when judging grant proposals, a policy now adopted by the UK Research Councils. But it is one thing to formulate a policy; quite another to make it work.

In the editorial accompanying this week’s supplement on impact, Nature restates its long-standing opposition to the mis-use of impact factors in judging individuals or individual papers (a position that has been usefully repeated in other Nature-branded titles such as Nature Chemistry and Nature Materials). This is laudable but insufficient. It sits uneasily with the full-page adverts that appear annually with the announcement of yet another incremental rise in impact factor. This recent one, trumpeting Nature’s 2011 IF of 36, is accompanied by the strap-line ‘Energize your scientific career…’. What are researchers to make of that if not a repetition of the unspoken mantra that publication in top-tier journals is vital to real success in science?

The editorial also repeats the line that the obsession with impact factors is a problem for the scientific community to address. And that is true — it is a largely self-inflicted problem and it is primarily our responsibility to sort it out. But I find it odd that Nature appears to see itself as apart from that community, especially when its editors and reviewers are drawn from within it. I don’t think the dividing line is so easy to discern — especially given the supportive commentary of some Nature journal editors.

Where I do agree strongly with the editorial line is in the declared need “for research evaluators to be explicit about the methods they use to measure impact.” In this, Nature, and indeed all scholarly journals, can help — and at negligible cost. I call on them to publish all the data on which their impact factor calculations are based. Every year, when the new impact factor is released and advertised, please also publish the citation numbers and distributions on which it is based. This transparency will help to demystify the magical lure of that one number by revealing a truer picture of the performance of all the papers that contribute to it. It will reveal the variation in granular detail — the big hitters and the damp squibs. Comparison between journals would be enriched because the real overlap in the citation distributions — too often forgotten in the obsession with just one number — would be made evident.

PLOS is already leading the way in making this type of information available. Nature could do a tangible and valuable service to the scientific community by a simple act of transparency. It could blow away some of the clouds that are presently obscuring our judgement.

Now that I come to think of this proposal, I can’t see any bad sides.

Update (20-10-2013, 15:50) — If I’d had the time I would have read all of the articles in Nature‘s supplement before publishing this post and made a point of including a link to the piece by David Shotton on efforts to make citation data open.

*Update (22-10-2013, 15:33) — The original version of this post referred to the Nature author as Eugenie Samuel Reich, which is her full name; but she kindly informs me that her surname is simply Reich. The text has been modified to reflect this. Apologies for any confusion.


Posted in Open Access, Scientific Life | 27 Comments

Parliamentary committee slams UK policy on open access

The UK House of Commons has its dander up. Having bloodied the prime minister over Syria in the past fortnight, the select committee of MPs that oversees the work of the Department of Business, Innovation and Skills (BIS) has issued a report that is heavily critical of the government’s policy on open access (OA).

The report was published early this morning so I have had time only to skim through the conclusions and recommendations, but it makes for pretty stunning reading. Although the committee pauses briefly at the beginning to laud the government’s proactivity on open access, it proceeds to take issue with almost every plank of the policy put in place by RCUK over the past 12 months (following publication of the Finch Report) and calls for radical revisions.

The committee traces the central problem to the fact that the Finch Report downplayed the importance of subject and institutional repositories as avenues for open access — so-called green OA. This led RCUK to put in place a policy that favoured gold OA, even though it was acknowledged to incur excess costs to research budgets in the transition away from subscription-based scholarly publishing. The committee recognises that a publication system based on purely open access journals — operating in a functional and transparent market — is a broadly agreed ultimate goal of policy in the UK and elsewhere, but criticises the government for plotting a route to that future that is excessively expensive and out of line with developments in most other parts of the world.

Accordingly the committee’s report makes the following recommendations on repositories:

  • That the government build on existing investment in the UK repository infrastructure, specifically “to promote standardisation and compliance across subject and institutional repositories” to enhance their utility as outlets for open access
  • That HEFCE should convert its current proposals to require immediate deposit in institutional repositories as a requirement for future REF eligibility into firm instructions
  • That RCUK should follow HEFCE’s lead by “reinstating and strengthening the immediate deposit mandate in its original policy”

‘Immediate deposit’ is not necessarily the same thing as immediate open access via a repository — the committee allows for embargo periods. However, it is critical of RCUK for permitting embargo periods to lengthen to 12 months and 24 months respectively for sciences and humanities research. Strangely, this was an alteration made in the aftermath of the investigation into open access by the House of Lords back in January — a concession, I guess, to the complaints of publishers and concerns expressed by some humanities scholars. But the House of Commons committee rightly notes “the absence of evidence that short embargo periods harm subscription publishers” and that the RCUK’s move has perversely degraded, rather than enhanced, access to the research literature.

In a further boost to green OA, the committee also asks RCUK to clarify its policy guidance once again. Although RCUK has already refined its original formulation to indicate that, while gold OA is preferred, authors and institutions can choose green OA, the committee is critical of the Publishers Association decision tree that was incorporated into the RCUK guidelines published back in March. It rightly notes that this tree de-emphasises the green OA option and is liable to confuse authors. I would not be sorry to see it disappear.

The shift in policy focus to green OA is partly driven by the committee’s concerns with excess costs to research and university budgets at a time of fiscal strain. I get the sense that the MPs have paid close attention to the 2012 analysis published by Swan and Houghton, which made it clear that although gold OA should give better value for money in the long run, the cheaper route to a fully gold OA scholarly publishing system was by mandating green OA.

The committee articulates concerns on several points where it feels that insufficient care has been taken to ensure that the taxpayer is getting value for money for its investment in research and scholarly publication. It is worried that RCUK policy was built on rather generous assumptions about the necessary level of Article Processing Charges (APCs) that are often paid to journals for immediate open access. But that is not all. Several other recommendations address concerns about value for money. These include:

  • Amending the policy so that “APCs are only paid to publishers of pure Gold rather than hybrid journals”; the aim here is to “eliminate the risk of double dipping by journals, and encourage innovation in the scholarly publishing market.” This reflects an increasingly widespread view — one that I share — that hybrid open access is simply not working (see the analyses voiced by Richard Poynder’s recent interviewees).
  • Ensuring that authors are made sensitive to price when choosing a journal and whether or not to pay an APC. Universities are currently allocated block grants from RCUK to cover these expenses, but there are concerns that mechanisms will not be put in place to ensure that researchers are directly exposed to APC costs. The committee therefore recommends that funds for APCs are instead paid as part of research grants, effectively reverting to the pre-2012 system. (The intention here is admirable and necessary but the remedy is not workable in my view. The committee’s recommendation overlooks the problem with the previous system, which is that grants are time-limited and so would not cover APC costs for the common occurrence of publications arising after the end of the grant.)
  • Urging that the government work to persuade its partners in the EU to reduce the rate of VAT paid on e-journals. This is needed to equalise competition with printed journals, which do not attract VAT, and so foster the development of new open access publications that aim to exploit online-only publication to reduce costs.
  • Urging the government to secure the elimination of non-disclosure clauses in publishing contracts between publishers and university libraries. The committee suggests that if agreement on this cannot be achieved voluntarily with publishers, the matter should be referred to the Competition Commission!

Towards the end the committee’s report tackles the issue of licensing. Current RCUK policy demands liberal CC-BY licences for research articles if an APC is paid and the equivalent of a CC-BY-NC licence* if not. The committee would like to see more author choice in this matter, taking the view that “the use of a particular licence should not be prioritised over immediate online access to findings of publicly funded research”. This probably plays to some authors’ nervousness about the more open Creative Commons licences, but the implications for text mining of research articles are not explored (at least in the report’s conclusions).

And there you have it. Or a rapid digest at any rate; I haven’t covered every point but recommend that you scan through the principal recommendations themselves — they are succinct and accessible.

The report doesn’t have real legislative teeth — the committee has made recommendations, not rules. Nevertheless, it is an important document, one that changes the mood music and that the major UK stakeholders in open access cannot ignore. It represents a reassertion of the rights of the citizen and the taxpayer. Perhaps most significantly of all it is an attempt to get the UK back into step with open access policy developments in the US and the EU, which is certainly to be welcomed. The bold dash for gold that the government thought might inspire other nations and accelerate the transition to an open access system of publishing has stalled (a point I made in my submission to the committee) and it is time to recognise that our interests are best served by working together on a green route to the gold future.

The report will no doubt make for sobering reading for publishers, BIS, the Finch Working Group and RCUK. It will be interesting to see how they respond. The latter two groups are charged with incorporating the report and its evidence in their upcoming reviews — the Finch group is scheduled to meet later this month while the first stage of the RCUK’s review of its open access policy is due to take place next year.

So off we go again on the open access merry-go-round. Power to the people.


*Update 10-Sept-2013, 14:12: Correction to the original text, which asserted incorrectly that a CC-BY-NC licence was required by RCUK for those papers made available via a repository. Thanks to Dan S (see comment below) for pointing this out. The relevant wording of the RCUK guidelines (PDF) is (with my emphasis):

Where Open Access is achieved through deposit of the final Accepted Manuscript in a repository (the ‘Green’ route)… the Research Councils would like research papers to be made available using the most liberal and enabling licences, ideally CC BY. However, the RCUK policy requires only that the manuscript is made available without restriction on non-commercial re-use. The policy does not specify a particular licence, and the requirement can be met by use of the Creative Commons Attribution-non-commercial licence (CC BY NC).


Posted in Open Access | 20 Comments


This post has nothing to do with science. Seamus Heaney is dead.

I am only beginning to process what that means to me. I claim no deep knowledge of his poetry but it has been with me for a long time. I studied his work at school in the late 1970s; I have a few of his books of poems and prose on my shelves; I saw The Cure at Troy at the Tricycle theatre in 1991; I heard him speak once — when I was a postdoc in Boston in 1994.

Several of his poems remain with me. Follower is one I particularly remember. You can hear me read it below (though I do not have Yeats’s talent for declamation). Now that I am a son and a father, the kick at the end is all the more haunting.


Posted in Science & Art | 1 Comment