Can we amend the laws of scholarly publication?

As part of its celebrations to mark the 350th anniversary of the publication of Philosophical Transactions, the world’s longest-running scientific journal, the Royal Society has organised a conference to examine ‘The Future of Scholarly Scientific Communication’. The first half of the meeting, held over two days last week, sought to identify the key issues in the current landscape of scholarly communication and then focused on ways to improve peer review. In the second half, which will be held on 5-6 May, the participants will examine the problem of reproducibility before turning attention to the future of the journal article and the economics of publishing.

The luminaries who assembled in the Royal Society’s commodious headquarters in Carlton House Terrace included publishers, academics, representatives from funding agencies and learned societies, several journalists, and a smattering of FRSs and Nobel laureates – all well versed in the matter at hand. A report of the deliberations will be published in due course, but I wanted to work through a central problem that surfaced repeatedly in the meeting last week.

The Royal Society’s portrait of Tim Berners-Lee, without whom we wouldn’t be having this conversation.

 

The central problem

The discussion on the first day – vividly live-blogged by Mike Taylor – was an attempt to define the challenges facing scholarly publishing in the 21st century and covered territory that will be familiar to anyone who has read up on open access. The debate kept circling back to the same basic point: the overweening influence of impact factors and prestige journals, which have academics and publishers locked in an unhealthy relationship that seems increasingly unable to harness the opportunities presented by a digital world. It turns out that pretty much everyone can articulate the present difficulties – the hard bit is finding workable solutions.

As I sat at the back of the room absorbing the proceedings, it struck me that the crux of the problem could be distilled into two laws of publishing. During the meeting I put them out on Twitter in a somewhat light-hearted way but there’s a serious point here.

First law: To get maximum credit for your work, publish in a journal with a high impact factor.

Second law: For maximum speed of publication, choose a well-run mega-journal with ‘objective’ peer review* or, better yet, a preprint repository.

The problem with these laws is that they are incompatible and have given rise to unintended consequences, the central one being the creation of a system in which incentives for researchers are lashed to a slow and unwieldy process of publication. As presently configured, academic publication is a competitive business in which journals vie with one another for the best-quality science, or at least for the papers they think will boost their impact factors, since that is one of the best ways to build a brand. As a result, journals have become the locus of the competition between researchers, who are now obsessed with the prestige points awarded by journals as the means to win jobs, promotion or funding. The ensuing chase for publication in ‘top journals’ retards publication because researchers are willing to endure multiple rounds of submission, rejection and re-submission as they work their way down the hierarchy of titles.

There are other adverse effects (which will be considered below) but the solution to the problem has to be to weaken the coupling between the location of publication and the assessment of research and researchers.

That is easier said than done. The coupling cannot be broken in a free market because of the competition between journals, which is of course fuelled by the competition between researchers. The competitiveness inherent in the system is not necessarily a bad thing, since it can act as a spur for scientists to do their best work. The problem is that publication in a particular journal is too easily seen as the final judgement on any given paper. If it’s in Nature (to choose a prominent example), it must be world-class – such is the argument that I have heard repeatedly in common rooms, grant panels and hiring committees. This line of reasoning is seductive because there is some truth in it – Nature publishes a lot of great science. Unfortunately, for too many people the argument contains enough truth for them to ignore the fine detail, and from there it is a short step to taking the journal impact factor as a proxy for the quality of a given piece of research. But it is vital to tease out and neutralise all the problematic aspects of our over-reliance on journal prestige. There are big wins to be had for scholarly publication and for researcher evaluation if we succeed.

The simplistic but widespread view that good journals publish good papers is some distance from being the whole truth of the matter. Some papers get into Nature or Science because they catch a trendy wave but then fail the test of time – arsenic life, anyone? That’s not to shout down journal predilections for trendy topics, which seem to be inevitable amid the ebb and flow of scientific tides, but we should be mindful that they occur and calibrate our expectations accordingly. Some papers are published in top journals because they come from a lab at a prestigious university or one that has a good track record of publication, and so may benefit from a type of hindsight bias among editors and reviewers**. Some get in because exaggerated claims or errors or fraud are not picked up in review. Again, that’s not necessarily the fault of editors or reviewers – especially in cases of fraud – but we need to acknowledge that a system that determines rewards on the basis of publication in certain journals will foster poor practice and fraudulent behaviour. This explains why retraction rates correlate so strongly with impact factor.

Just as important, many papers are rejected that, with a different set of reviewers or a different editor, might well have made the cut. Clearly the number of reviewers has to be restricted to keep the task of pre-publication review manageable*** but the stochastic nature of the decision-making process that results from relying on a small set of judgements is too often overlooked.

The selectivity problem is exacerbated in top journals, which restrict entry for logistical reasons – the number of printed pages – and for brand protection. Nature, for example, only has room for 8% of the manuscripts it receives. Such practices presently act as an arbitrary and unfair filter on the careers of scientists because the reification of journal prestige or impact factor shapes presumptions and behaviour. Those who fail the cut on a particular day are likely to be disadvantaged in any downstream judgements. The mark of achievement is no longer necessarily ‘a great piece of science’ but ‘a paper in Nature, Science, Cell etc.’. I’ll say it again: these can often be the same thing. But not in every case, and best practice has got to be to always, always, always evaluate the paper and not the journal it is published in.

No-one designed scholarly publication to have all these in-built flaws, but the evident fact of the matter, widely agreed at the meeting, is that it has become a slow and expensive filtering service that distorts researcher evaluation and is no longer doing an optimal job of disseminating scientific results.

As you see, I too can articulate the problem. But what about the solutions?

 

Possible ways forward

For the first morning of the meeting I felt myself slipping into a funk of despair as the discussion orbited the black hole of the impact factor. It has become so familiar in the cosmology of scholarly publication that people are resigned to its incessant pull.

However, that dark mood was slowly dispersed as occasional voices spoke insistently to press for answers. For me the bright spot of the conference was realising that the best way forward is likely to be in small pragmatic steps. Every so often one of these may turn out to be revolutionary. PLOS’s invention of the mega-journal based on ‘objective peer review’ was one such case. So here are three more steps that I think are feasible, if all the key players in scholarly communication can be persuaded to really take on board the wrongness of journal impact factors as a measure of quality.

ONE: Academics and institutions (universities, learned societies, funders and publishers) should sign up to the principles laid down in the San Francisco Declaration on Research Assessment (DORA). These seek to shift attention away from impact factors and to foster schemes of research assessment that are more broadly based and recognise the full spectrum of researcher achievements. Ideas for how to put such schemes into practice are available on the DORA website.

It is a real shame – and somewhat telling of the powerful grip of impact factors on scientific culture – that only a handful of UK universities have so far signed up to DORA (Sussex, UCL, Edinburgh and Manchester, at the time of writing). I hope more will step up to their responsibilities after reading the warnings about the misuse of the impact factor in the Leiden Manifesto on research metrics that was published in Nature just last week. I would like to think it will become a landmark document.

TWO: All journals with impact factors should publish the citation distributions that they are based on. The particular (and questionable) method by which the impact factor is calculated – it is an arithmetic mean of a highly skewed distribution, dominated in all journals by the small minority of papers that attract large numbers of citations – is not widely appreciated by academics. In the interests of transparency, and to take some of the shine off the impact factor, it makes sense for journals to show the data on which their impact factors are based, and to allow them to be discussed and re-analysed. Nature Materials has shown that this can be done. I invite journals that might object to a move that would improve authors’ understanding and help to promote fairer methods of researcher assessment to explain their reasoning.
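To see just how misleading an arithmetic mean can be for data of this shape, here is a toy illustration in Python. The citation counts below are invented, but the skew is typical of what journals report:

```python
# Toy illustration, with invented citation counts, of why an arithmetic mean
# misrepresents a skewed distribution such as a journal's citations.
from statistics import mean, median

# Hypothetical citations received this year by 20 papers in one journal:
# most attract a handful, a small minority attract very many.
citations = [0, 1, 1, 2, 2, 2, 3, 3, 4, 4, 5, 5, 6, 7, 8, 9, 12, 30, 80, 150]

m = mean(citations)
print(f"Impact-factor-style mean: {m:.1f}")                # 16.7
print(f"Median paper:             {median(citations)}")     # 4.5
print(f"Papers below the mean:    {sum(c < m for c in citations)} of {len(citations)}")  # 17 of 20
```

The mean is dragged far above the experience of the typical paper, which is precisely why the full distribution is worth publishing.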

The difficulty with this proposal, as pointed out at the meeting, is that the data are amassed by Thomson Reuters, who have so far refused to release them. This practice can now be declared contrary to the fourth principle of the Leiden Manifesto, which holds that data collection and analytical processes should be open, transparent and simple. If Thomson Reuters are not interested in being part of the solution to the problem of impact factors, they have to be side-lined. Perhaps there are ways to crowd-source the task of assembling journal citation distributions? I would welcome suggestions from tech-savvy readers as to how this might be achieved in practice.
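One possible starting point – offered purely as a sketch – is the open CrossRef REST API, which can return per-article citation counts for a journal. The ISSN below is a placeholder, and CrossRef’s ‘is-referenced-by-count’ is a cumulative tally rather than a count for a specific year, so this approximates rather than reproduces the official two-year window:

```python
# Sketch: gather a per-article citation distribution for one journal from the
# open CrossRef REST API. The ISSN is a placeholder; 'is-referenced-by-count'
# is CrossRef's cumulative citation tally, so this approximates rather than
# reproduces the official two-year impact-factor window.
import requests

ISSN = "0000-0000"  # placeholder: substitute the ISSN of the journal of interest
url = f"https://api.crossref.org/journals/{ISSN}/works"
params = {
    "filter": "from-pub-date:2011-01-01,until-pub-date:2012-12-31",
    "rows": 1000,  # rows per request are capped; use CrossRef's cursor-based paging for bigger journals
    "select": "DOI,is-referenced-by-count",
}

resp = requests.get(url, params=params, timeout=30)
resp.raise_for_status()
items = resp.json()["message"]["items"]

counts = sorted((item.get("is-referenced-by-count", 0) for item in items), reverse=True)
print(f"{len(counts)} articles; five most-cited counts: {counts[:5]}")
```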

THREE: The use of open-access preprint servers should be encouraged. These act as repositories for manuscripts that have yet to be submitted to a journal for peer review and have the potential to short-circuit the delays in publication caused by the chase for journal impact factors.

I’m not familiar with the situation in chemistry but know that preprint servers have come rather late to the life sciences, inspired by the arXiv, which has been operational in many sub-disciplines of physics and maths since 1991. bioRxiv (pronounced ‘bio-archive’) has 1300 preprints, while the PeerJ Preprints tally has just passed 1000. These are low numbers – the arXiv has over a million deposits – but they are growing and there was such widespread support for preprints at the Royal Society meeting that I began to get a sense that they could be transformative.

Preprints speed up the dissemination of research results (albeit in an early form) – and that is likely to be healthy for the progress of science. Uptake will increase as more researchers discover that fewer and fewer journals disallow submissions that have been posted as preprints. Naysayers might contend that there are risks in disseminating results that have not been tested or refined in peer review, but such risks can be mitigated by open commentary. As noted in the meeting by bioRxiv’s Richard Sever, preprints tend to attract more comments than published papers because commenters feel that they can make a positive contribution to shaping a paper before it is published in its final form. Such open review practices have been harnessed successfully by the innovative open-access journal Atmospheric Chemistry and Physics.

Preprints may not solve the problem of impact factors, since the expectation is that most or all will eventually be submitted for competitive publication in a journal; however, they could mitigate some of the worst effects, especially if authors were to be recognised and rewarded for the speed at which they make their results available. That’s something for funders and institutions to think about. Moreover, because preprints have digital object identifiers and are citable objects, they provide a rapid route for establishing the priority of a discovery and a cost-effective way to publish negative results.

 

Finally…

There is more to say and more dimensions of the arguments made here to tease out, but I will leave things there for now. All of the above may be no more than the optimistic after-glow of the meeting, but I submit my three proposals for serious consideration. I hasten to add that none of them are original, and I know that I’ve written before about pretty much everything discussed above – but we have to keep these issues alive. To paraphrase Beckett, it’s our only hope of finding ways to fail better.

 

Notes

*’Objective peer review’ was the term adopted at the meeting to describe the type of peer review pioneered by PLOS ONE, which will accept papers that report an original piece of science that has been performed competently. The impact or future significance of the work is not considered. I am not comfortable with the term since this type of review is clearly not ‘objective’, but it will have to do for now.

**Nature has recently introduced a double-blind review to combat this. However, it is optional for authors and I find it hard to imagine a group that has a history of publication in Nature opting for it.

***Richard Smith (formerly editor of the BMJ), who spoke powerfully at the meeting to criticise peer review, and proponents of post-publication peer review will no doubt take issue here. 

 


16 Responses to Can we amend the laws of scholarly publication?

  1. Interesting thoughts on the meeting, Stephen, and I completely agree with you on the unhealthy influence of impact factors and the need for change.

    I don’t think the two laws you put forward are incompatible though (however much we lament the first law…). There are plenty of papers that have appeared on the bioRxiv preprint server and then in so-called high-impact journals. This at least removes the unnecessary delays in communicating results – hopefully a step in the right direction and, as Ewan Birney mentioned, one a number of academics are taking.

    • Stephen says:

      On the incompatibility of the laws – this reflects the reality as perceived by most researchers, even if it is now shifting thanks to the advent of preprint servers. There is still some way to travel before usage becomes mainstream but we can all do our bit to spread the word.

  2. Great summary, and it’s a relief for us mid-career folks that such problems are being truly tackled head-on, regularly, and by esteemed colleagues. Thanks for reporting on it.

    Additional damage caused by our collectively having adopted the current publishing model spills over into the assessment of ideas for funding. If an idea has been funded, the mere fact of having once sold it well and managed the funds is often taken as a proxy for the actual scientific output, and certainly influences one’s chance of getting further grants. This aberration resembles the worth accorded to papers published under certain journal titles. Such output should be measured instead in published communications – with true weight accorded to the dissemination of the inevitable negative results and unsupported, abandoned but once reasonable and funded hypotheses, as well as to the occasional positive ones. That is, future funding should hinge less on whether you’ve already received funds for the idea you propose, thereby confirming that it’s no longer original, and more on whether you’ve told the community about all the results of your previously funded projects, and why you’ve changed tack, if you have.

    “Just as important, many papers are rejected that, with a different set of reviewers or a different editor, might well have made the cut. […] The stochastic nature of the decision-making process that results from relying on a small set of judgements is too often overlooked.”

    It’s cold comfort when you’ve been regularly refused based on “lack of experience” in your original ideas, for which the relevant experience would need to be acquired along the way. But this idea of outcomes having a stochastic element at least allows professional scientists to maintain some motivation after rejection – be it for publications or for funding. In both situations, tenacity, humility and conviction do pay off if you can afford to see them through. Our job is to ensure that current and future scientists can still afford to invest this time, because the alternative will be impoverished scientific creativity across the board.

  3. I completely agree that posting preprints is a very important first step in making scientific publication better. In order to try to help make this more common I post any interesting preprint I find on Twitter and I usually manage to find at least one per working day. I hope that if people perceive posting preprints as common, they are more likely to do it themselves.

    I also occasionally ask journals that invite me to review whether they allow preprint deposition. And if they don’t, I refuse to review (and publicize that on Twitter). If nothing else, this reduces the review burden somewhat.

    Since I follow preprints I sometimes see papers posted on, e.g., arXiv that appear in journals that don’t officially allow preprint deposition. So at least some journals don’t appear to check.

  4. I just asked this question on Twitter, too: how can I convince my co-authors, as we await the second round of reviews at a journal that’s not the one accompanying the preprint server I had in mind, to let me go ahead and upload our submission as a preprint? I am not having an easy time selling the advantages (aside from the fact that I’d do it myself, as opposed to extra work for the corresponding author).

    Their take seems to be that we could get scooped on some or all of the results as far as “real” publication is concerned; that it wouldn’t be worth trying to demonstrate primacy after the fact with the preprint, because similar data would already be out there; and that they have nothing to gain. At least the journal where we’re awaiting this second round allows such deposition.

    Perhaps it means they’re not confident that we’ll actually get it in without further revisions, to this or any other OA journal allowing preprints, but I’m a bit at wits’ end. And it’s a regular conflict, one that has never been important enough to me to push back on. Still, I did push back about OA publishing, and that seems to be finally gaining wider acceptance among my colleagues.

    • Stephen says:

      That’s the nub of the problem, Heather. A year or so ago I would probably have had the same reservations as your co-authors. The problem we have in the life sciences is that we’re not used to preprints and there isn’t a culture yet of checking the servers to see if someone has beaten us to the punch. Physicists I’ve talked to – or at least those from disciplines that use the arXiv (not all do, for some reason) – are happy to accept that publication in the arXiv establishes priority.

      I think it will simply be a matter of time and habituation. I’ve submitted my first 2 preprints in the past year and it was something of a nerve-wracking experience. It makes you realise how much you rely on peer review to catch silly mistakes before you go public. I can understand that people will need a little time to get used to such practices. But it’s a growth area and I hope that will continue. I wasn’t the only one who was struck by the level of support for preprints at the meeting.

  5. Dave Fernig says:

    The second law is missing two important elements.
    1. Poor scholarship. A large proportion of citations are incorrect and lazy – we’ll cite paper X, though we haven’t read it, because everyone else is. Here we are at fault, for not training students and postdocs to cite the facts. Likely some of the problem lies in the concentration of resource in mega labs, where training (as opposed to ticking funders’ boxes) is poor.
    2. Exposure – go to meetings where there is real discussion, that is, small, focussed meetings, and the visibility of your work will rise.

    • Stephen says:

      Hard to capture everything in a pithy law, Dave, but you make two good points. I’ve certainly seen some sloppy citation (though one tends to notice it more when one’s own work is omitted). This is reinforced when journals put limits on the number of citations due to page constraints, something that should disappear with purely digital publishing.

      The point about meetings is also important – conferences are great both for getting the word out about your own work and for hearing the latest from others. I agree that this works best in meetings with a tight disciplinary focus, since these tend to generate the kind of real discussion you describe. Also worth noting that at such meetings you can usually identify the quality of the work from the content of the talk or poster – you don’t need to see the name of the journal it ends up being published in.

      One other lesson I learned last year about communicating work in a way that helps to ensure it is cited is worth sharing. In 2013 we reported the structures of the VPg proteins from two caliciviruses; VPg is covalently attached to the 5´ end of the RNA genome and, as well as playing a role in RNA replication, is critical for the initiation of translation of viral proteins. Our structure spoke more to the former function and we published in J. Virol. Last year we cited this paper in follow-up work with collaborators at Cambridge that focused on the translational role of VPg and was published in J. Biol. Chem. This resulted in a spike of downloads of our J. Virol. paper and I guess that was because people interested in viral translation had only belatedly cottoned on to the existence of our earlier work. The lesson I learned was to pay more attention to what goes in the title and abstract of a paper!

  6. Hi Stephen
    It’s not that difficult to get the necessary data if you have access to Web of Science, though I think you may hit problems with journals that publish a very large number of articles per year.

    Suppose I want to know the impact factor, and distribution of data it’s based on, for Research in Autism Spectrum Disorders for 2013.

    Here are the steps to go through:

    1. Search for Research in Autism Spectrum Disorders in PUBLICATION NAME, with dates specified as 2011-2013, i.e. the year of interest plus the two previous years.

    2. Select the option on the right-hand side to create a citation report.

    3. Scroll down to below the graphs to see information for individual articles. Sort these by Publication Date.

    4. Scroll to the bottom of the list and you will see options for saving. I find saving to Excel works well.

    5. What we want for the Impact Factor calculation is the citation data from 2013 for articles published in 2011-2012.

    There are two ways to proceed. You can just save all the information and then remove the 2013 papers once you have the data in Excel – though the saving may get problematic, I suspect, if there is a large number of items.

    An alternative is to note the number of the last paper from 2012 and when given the option to save a range, just opt to save the range corresponding to 2011-2012 papers.

    Don’t, however, be fooled into specifying the 2011-2012 range; that looks as if it will work, but it then fails to give you the key information about citations in 2013.

    6. Once you have saved the information, you can open it in Excel. The impact factor for 2013 will correspond to the total count of citations in the 2013 column divided by the number of papers published in 2011-2012 (see the sketch below for one way to script this step).
    You can check this against the impact factor information provided by Thomson Reuters – right at the top of the main WOS search page under Journal Citation Reports.

    You now have the article-level citation information and can check the distributions of citations however you wish.
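    If you would rather script step 6 than do the division by hand, a minimal sketch in Python/pandas along the following lines should work on the saved spreadsheet. The filename and column headers here are assumptions – WoS export formats vary – so adjust them to match your own file.

```python
# Sketch of step 6 in pandas. The filename and column headers ("Publication
# Year", "2013") are assumptions about the WoS citation-report export;
# adjust them to match your own file (the citing-year column may be labelled
# "2013" or 2013 depending on how Excel reads the export).
import pandas as pd

df = pd.read_excel("RASD_citation_report.xlsx")  # hypothetical filename

# Keep only articles published in the two years preceding the IF year.
window = df[df["Publication Year"].isin([2011, 2012])]

citations_2013 = window["2013"].sum()  # citations received in 2013
n_articles = len(window)               # items published in 2011-2012

print(f"2013 impact factor = {citations_2013 / n_articles:.2f}")

# And the full distribution, not just the mean:
print(window["2013"].describe())
```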

    • Stephen says:

      Thanks Dorothy – I may have a go at that to explore the data for a few journals.

      However, I am hoping that the practice of publishing distributions might become industry-wide. Perhaps journals can be persuaded to do it on an annual basis. Given that scientists are increasingly required to provide access to the data underlying any published result (which is a good thing), it seems reasonable to expect journals to do the same for the one data point that they are often happy to brag about each year. If not, I’d like to find some other way to force the data out into the open but ideally this would be done for every published journal and there are quite a few of them…

  7. Stephen, your two laws really highlight the conflict we now face. To combine rapid publication and deep as well as broad evaluation, we need to perform the evaluation post-publication. There’s no point guilting people into ignoring the IF. The IF will prevail until we’ve built a system for open evaluation (OE). We collected 18 visions for how to do this, which are summarised here: http://journal.frontiersin.org/article/10.3389/fncom.2012.00094/full Niko

  8. Andrew Miller says:

    Having Thomson Reuters make the Impact Factor calculation more transparent is a good idea. Seems a bit of a dark art otherwise.

  9. Bo Drasar says:

    These discussions seem to me to be focused on the problem of developed country research. The difficulties are greatest for people working in the less developed world, in local institutions without international grants.
    They need to make their work internationally accessible, and to solve many of the world’s problems we need access to their findings.
    The obsession with open access and high impact with all its inherent costs means that we are in danger of having a scientific literature where only the rich have a voice.
    It may be said that open access journals often have a mechanism for waiving review and publication costs, but for some the implied position of supplication smacks of colonialism.
    We need serious consideration of the problems of equity.

    • Stephen says:

      You raise an important point – one that came up briefly from the floor at the meeting, where much of the discussion was implicitly focused on the developed world.

      Scientific inequality (in access to subscription journals or to funds to pay for OA) reflects the broader inequalities extant in the world. I think many agree that ways have to be found to narrow the inequality gap by opening the door to access but it’s difficult to see how that is going to be done without some explicit effort to transfer resources from the developed to the developing world. I appreciate that this might smack of colonialism. Not sure how to tackle that. By developed countries acknowledging the advantages gained from their colonial past (where there is one)? By framing science as a global endeavour?
