As part of its celebrations to mark the 350th anniversary of the publication of Philosophical Transactions, the world’s longest-running scientific journal, the Royal Society has organised a conference to examine ‘The Future of Scholarly Scientific Communication’. The first half of the meeting, held over two days last week, sought to identify the key issues in the current landscape of scholarly communication and then focused on ways to improve peer review. In the second half, which will be held on 5-6th May, the participants will examine the problem of reproducibility before turning attention to the future of the journal article and the economics of publishing.
The luminaries who assembled in the Royal Society’s commodious headquarters in Carlton House Terrace included publishers, academics, representatives from funding agencies and learned societies, several journalists, and a smattering of FRSs and Nobel laureates – all well versed in the matter at hand. A report of the deliberations will be published in due course but I wanted to work through a central problem that surfaced repeatedly in the meeting last week.
The Royal Society’s portrait of Tim Berners-Lee, without whom we wouldn’t be having this conversation.
The central problem
The discussion on the first day – vividly live-blogged by Mike Taylor – was an attempt to define the challenges facing scholarly publishing in the 21st century and covered territory that will be familiar to anyone who has read up on open access. The debate kept circling back to the same basic point: the overweening influence of impact factors and prestige journals, which have academics and publishers locked in an unhealthy relationship that seems increasingly unable to harness the opportunities presented by a digital world. It turns out that pretty much everyone can articulate the present difficulties – the hard bit is finding workable solutions.
As I sat at the back of the room absorbing the proceedings, it struck me that the crux of the problem could be distilled into two laws of publishing. During the meeting I put them out on Twitter in a somewhat light-hearted way but there’s a serious point here.
First law: To get maximum credit for your work, publish in a journal with a high impact factor.
Second law: For maximum speed of publication, choose a well-run mega-journal with ‘objective’ peer review* or, better yet, a preprint repository.
The problem with these laws is that they are incompatible and have given rise to unintended consequences, the central one being the creation of a system in which incentives for researchers are lashed to a slow and unwieldy process of publication. As presently configured, academic publication is a competitive business in which journals vie with one another for the best-quality science, or at least for the papers they think will boost their impact factors, since that is one of the best ways to build a brand. As a result they have become the locus of the competition between researchers, who are now obsessed with the prestige points awarded by journals as the means to win jobs, promotion or funding. The ensuing chase for publication in ‘top journals’ retards publication because researchers are willing to submit to multiple rounds of submission, rejection and re-submission as they work their way down the hierarchy of titles.
There are other adverse effects (which will be considered below) but the solution to the problem has to be to weaken the coupling between the location of publication and the assessment of research and researchers.
That is easier said than done. The coupling cannot be broken in a free market because of the competition between journals, which is of course fuelled by the competition between researchers. The competitiveness inherent in the system is not necessarily a bad thing since it can act as a spur for scientists to do their best work. The problem is that publication in a particular journal is too easily seen as the final judgement on any given paper. If it’s in Nature (to choose a prominent example), it must be world-class – such is the argument that I have heard repeatedly in common rooms, grant panels and hiring committees. This line of reasoning is seductive because there is some truth in it – Nature publishes a lot of great science. Unfortunately for too many people the argument contains enough truth for them to ignore the fine detail and from there it is a short step to take the journal impact factor as a proxy for the quality of a given piece of research. But it is vital to tease out and neutralise all the problematic aspects of our over-reliance on journal prestige. There are big wins to be had for scholarly publication and for researcher evaluation if we succeed.
The simplistic but widespread view that good journals publish good papers is some distance from being the whole truth of the matter. Some papers get into Nature or Science because they catch a trendy wave but then fail the test of time – arsenic life, anyone? That’s not to shout down journal predilections for trendy topics, which seem to be inevitable amid the ebb and flow of scientific tides but we should be mindful that they occur and calibrate our expectations accordingly. Some papers are published in top journals because they come from a lab at a prestigious university or one that has a good track record of publication and so may benefit from a type of hindsight bias among editors and reviewers**. Some get in because exaggerated claims or errors or fraud are not picked up in review. Again, that’s not necessarily the fault of editors or reviewers – especially in cases of fraud – but we need to acknowledge that a system that determines rewards on the basis of publication in certain journals will foster poor practice and fraudulent behaviour. This explains why retraction rates correlate so strongly with impact factor.
Just as important, many papers are rejected that, with a different set of reviewers or a different editor, might well have made the cut. Clearly the number of reviewers has to be restricted to keep the task of pre-publication review manageable*** but the stochastic nature of the decision-making process that results from relying on a small set of judgements is too often overlooked.
The selectivity problem is exacerbated in top journals, which restrict entry for logistical reasons – the number of printed pages – and for brand protection. Nature, for example, only has room for 8% of the manuscripts it receives. Such practices presently act as an arbitrary and unfair filter on the careers of scientists because the reification of journal prestige or impact factor shapes presumptions and behaviour. Those who fail the cut on a particular day are likely to be disadvantaged in any downstream judgements. The mark of achievement is no longer necessarily ‘a great piece of science’ but ‘a paper in Nature, Science, Cell etc’. I’ll say it again: these can often be the same thing. But not in every case, and best practice has got to be to always, always, always evaluate the paper and not the journal it is published in.
No-one designed scholarly publication to have all these in-built flaws but the evident fact of the matter, widely agreed at the meeting, is that it has become a slow and expensive filtering service that distorts researcher evaluation and is no longer doing an optimal job of disseminating scientific results.
As you see, I too can articulate the problem. But what about the solutions?
Possible ways forward
For the first morning of the meeting I felt myself slipping into a funk of despair as the discussion orbited the black hole of the impact factor. It has become so familiar in the cosmology of scholarly publication that people are resigned to its incessant pull.
However, that dark mood was slowly dispersed as occasional voices spoke insistently to press for answers. For me the bright spot of the conference was realising that the best way forward is likely to be in small pragmatic steps. Every so often one of these may turn out to be revolutionary. PLOS’s invention of the mega-journal based on ‘objective peer review’ was one such case. So here are three more steps that I think are feasible, if all the key players in scholarly communication can be persuaded to really take on board the wrongness of journal impact factors as a measure of quality.
ONE: Academics and institutions (universities, learned societies, funders and publishers) should sign up to the principles laid down in the San Francisco Declaration on Research Assessment (DORA). These seek to shift attention away from impact factors and to foster schemes of research assessment that are more broadly based and recognise the full spectrum of researcher achievements. Ideas for how to put such schemes into practice are available on the DORA website.
It is a real shame – and somewhat telling of the powerful grip of impact factors on scientific culture – that only a handful of UK universities have so far signed up to DORA (Sussex, UCL, Edinburgh and Manchester, at the time of writing). I hope more will step up to their responsibilities after reading the warnings about the misuse of the impact factor in the Leiden Manifesto on research metrics that was published in Nature just last week. I would like to think it will become a landmark document.
TWO: All journals with impact factors should publish the citation distributions that they are based on. The particular (and questionable) method by which the impact factor is calculated – it is an arithmetic mean of a highly skewed distribution, dominated in all journals by the small minority of papers that attract large numbers of citations – is not widely appreciated by academics. In the interests of transparency, and to take some of the shine off the impact factor, it makes sense for journals to show the data on which they are based, and to allow them to be discussed and re-analysed. Nature Materials has shown that this can be done. I invite journals that might object to a move that would improve authors’ understanding and help to promote fairer methods of researcher assessment to explain their reasoning.
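The point about skew is easy to demonstrate. Here is a minimal sketch, using entirely made-up citation counts for a hypothetical journal, of how the arithmetic mean behind the impact factor is dragged upwards by a handful of highly cited papers while the median describes the typical paper:

```python
# Illustrative only: synthetic citation counts, not real journal data.

def impact_style_mean(citations):
    """Arithmetic mean of citations per paper, as the impact factor uses."""
    return sum(citations) / len(citations)

def median(citations):
    """Middle value: a more robust summary of a skewed distribution."""
    s = sorted(citations)
    mid = len(s) // 2
    return s[mid] if len(s) % 2 else (s[mid - 1] + s[mid]) / 2

# A hypothetical journal: most papers are cited a handful of times,
# while two attract heavy citation.
citations = [0, 1, 1, 2, 2, 3, 3, 4, 5, 120, 250]

print(round(impact_style_mean(citations), 1))  # 35.5 – driven by two outliers
print(median(citations))                       # 3 – the typical paper
```

Publishing the full distribution, rather than the single averaged figure, would make this disparity plain to authors and evaluators alike.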
The difficulty with this proposal, as pointed out at the meeting, is that the data are amassed by Thomson-Reuters who have so far refused to release them. This practice can now be declared contrary to the 4th principle of the Leiden manifesto, which holds that data collection and analytical processes should be open, transparent and simple. If Thomson-Reuters are not interested in being part of the solution to the problem of impact factors, they have to be side-lined. Perhaps there are ways to crowd-source the task of assembling journal citation distributions? I would welcome suggestions from tech-savvy readers as to how this might be achieved in practice.
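By way of a starting point for such a crowd-sourcing effort, here is a minimal sketch of the aggregation step. The record format (journal, DOI, citation count) is entirely hypothetical, and in practice the counts would have to come from contributors or open sources; the sketch simply shows how duplicate reports for the same paper could be reconciled and rolled up into a per-journal citation distribution:

```python
# Sketch of pooling crowd-sourced citation reports into per-journal
# distributions. Data format and journal names are hypothetical.
from collections import Counter, defaultdict

def build_distributions(records):
    """records: iterable of (journal, doi, citations) tuples, possibly
    containing duplicate DOIs from different contributors.
    Returns a dict: journal -> Counter mapping citation count to the
    number of papers with that count."""
    per_paper = {}  # doi -> (journal, citations); keep the highest count seen
    for journal, doi, cites in records:
        if doi not in per_paper or cites > per_paper[doi][1]:
            per_paper[doi] = (journal, cites)
    dists = defaultdict(Counter)
    for journal, cites in per_paper.values():
        dists[journal][cites] += 1
    return dists

records = [
    ("J. Hypothetical", "10.1000/a", 3),
    ("J. Hypothetical", "10.1000/a", 4),   # duplicate report, newer count
    ("J. Hypothetical", "10.1000/b", 0),
    ("J. Hypothetical", "10.1000/c", 150),
]
print(dict(build_distributions(records)["J. Hypothetical"]))
# {4: 1, 0: 1, 150: 1}
```

The awkward part, of course, is not the aggregation but sourcing trustworthy counts in the first place; that is exactly where suggestions would be welcome.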
THREE: The use of open access pre-print servers should be encouraged. These act as repositories for manuscripts that have yet to be submitted to a journal for peer review and have the potential to short-circuit the delays in publication caused by the chase for journal impact factors.
I’m not familiar with the situation in chemistry but know that preprint servers have come rather late to the life sciences, inspired by the arXiv that has been operational in many sub-disciplines in physics and maths since 1991. bioRxiv (pronounced ‘bio-archive’) has 1300 preprints, while the PeerJ Preprints tally has just passed 1000. These are low numbers – the arXiv has over a million depositions – but they are growing and there was such widespread support for preprints at the Royal Society meeting that I began to get a sense that they could be transformative.
Preprints speed up the dissemination of research results (albeit in an early form) – and that is likely to be healthy for the progress of science. Uptake will increase as more researchers discover that fewer and fewer journals disallow submissions that have been posted as preprints. Naysayers might contend that there are risks in disseminating results that have not been tested or refined in peer review but such risks can be mitigated by open commentary. As noted in the meeting by bioRxiv’s Richard Sever, preprints tend to attract more comments than published papers because commenters feel that they can make a positive contribution to shaping a paper before it is published in its final form. Such open review practices have been harnessed successfully by the innovative open-access journal Atmospheric Chemistry and Physics.
Preprints may not solve the problem of impact factors, since the expectation is that most or all will eventually be submitted for competitive publication in a journal; however, they could mitigate some of the worst effects of impact factors, especially if authors were to be recognised and rewarded for the speed at which they make their results available. That’s something for funders and institutions to think about. Moreover, because preprints have digital object identifiers and are citable objects, they provide a rapid route for establishing the priority of a discovery and a cost-effective way to publish negative results.
There is more to say and more dimensions of the arguments made here to tease out but I will leave things there for now. All of the above may be no more than the optimistic after-glow of the meeting but I submit my three proposals for serious consideration. I hasten to add that none of them are original, and I know that I’ve written before about pretty much everything discussed above – but we have to keep these issues alive. To paraphrase Beckett, it’s our only hope of finding ways to fail better.
*’Objective peer review’ was the term adopted at the meeting to describe the type of peer review pioneered by PLOS ONE, which will accept papers that report an original piece of science that has been performed competently. The impact or future significance of the work is not considered. I am not comfortable with the term since this type of review is clearly not ‘objective’ but it will have to do for now.
**Nature has recently introduced a double-blind review to combat this. However, it is optional for authors and I find it hard to imagine a group that has a history of publication in Nature opting for it.
***Richard Smith (formerly editor of the BMJ), who spoke powerfully at the meeting to criticise peer review, and proponents of post-publication peer review will no doubt take issue here.