A vision for a better future – using new tools of openness and transparency to improve the scientific process

This is a guest post by Pete Binfield and Jason Hoyt, co-founders of the open access journal PeerJ. I don’t make a habit of running posts from private companies here at Reciprocal Space, but I have been impressed by the innovative model of open access publishing that PeerJ represents and was glad to be able to provide them with a forum to expound on their publishing philosophy. No payment was made or requested. The views presented below are entirely their own.

The academic community tends to view peer-reviewed journal articles as the most important thing to consider when evaluating a contribution, or an individual. But is this actually the best we can manage? Or can we apply modern tools and more enlightened thinking to come up with new and improved ways to measure a contribution to the scientific enterprise?

Journal articles are regarded as the ‘minutes of science’ – a supposedly perfect (and permanent) formulation of ‘the final answer’. And once an article is published, it is typically measured by a simple count of the citations it receives (or, more disturbingly, its worth is judged by the citations that other articles in the same journal happened to receive).

And yet, the journal article is only a single point in time in the lifecycle of a piece of work. When we only judge an article, we ignore the individuals behind it. And when we judge individuals based only on the articles they have written, then we do not take account of the processes (both good and bad) that have led up to that point, or beyond. And when we judge those articles based only on scholarly citations, then we do not take account of the myriad other ways that the publication has contributed to the scientific enterprise. In all of these ways, we believe that the process can be improved thanks to the development of new tools and new ways of thinking.

The typical lifecycle of any new finding is to research it; to discuss and develop it; to formally publish it; to influence others by the act of publication; to accrue recognition for having done the work; and to learn from the experience when starting the next research project. Let’s deconstruct each step.

The research: Any research article starts with original research, and typically this research is kept private – confidential to the researcher and their lab. And yet, there is an increasing belief that if you shine a light upon a process, the process itself is improved. As a result, there has been a slow but growing movement towards ‘open notebook science’ – an attempt to persuade researchers that by openly sharing their original research, in real time, they can improve their work. Although this degree of transparency may be too much for most researchers, the movement is slowly gaining traction, and it is clearly an attempt to document the origin and development of a scientific thought at the earliest possible stage.

Discussion and Development: Once the research is conducted, the next logical stage in documenting a piece of work is to develop a draft, often in the form of a preprint (or, in some fields, an abstract or a poster at a conference), and to refine that draft in light of feedback. A preprint is simply an early, typically not yet peer-reviewed, version of something which will later evolve into a formal publication. Many fields have long had a preprint culture (physics, mathematics, the social sciences, and economics all have active preprint servers); the biological and medical fields, however, are notable for their absence. But if a preprint culture were to flourish in the biological and medical sciences, it could prove to be the missing link in the story that this post explores.

In a recent Scientific American guest blog post, we explored some of the reasons for the lack of a preprint culture in the biological and medical sciences. In that post we explained how we at PeerJ have put our money where our mouth is by launching a new preprint server for these fields (PeerJ PrePrints), and why we think it can now succeed. If we can’t persuade the biological and medical fields to adopt a culture of open notebook science, then perhaps we can at least persuade them to adopt a preprint culture. Certainly these disciplines have been at the forefront of the Open Access movement, and so it is not too great a conceptual leap to go from openly sharing a final published article to openly sharing the earlier drafts that led up to it.

The Formal Publication – Not the End of the Story: We mentioned that the journal article of today is a fossilized manifestation of the ‘minutes of science’. If we accept that we can show the evolution of a finding through discovery (in an open notebook) and early formulations (in a preprint), then why should we accept that the evolution of the work stops at the moment it is published in a journal? Surely a journal article can be in error and require revision, or new results might strengthen or weaken the case? Instead of publishing an entirely new article (simply to put another ‘counted’ publication onto your resume), why not revise and extend the existing publication? Again, this might be asking too much of today’s scholarly culture, but once again the ability to publish drafts (aka preprints), and to measure their unique contributions, could be applied to this use case as well.

Measuring the Contribution: Many people are now building what they hope will be better tools to evaluate academic contributions. Most visibly, this manifests itself in the ‘altmetrics’ movement (also known as ‘article level metrics’ when applied to a single article). Altmetrics attempt to measure every way a piece of work might influence the wider world – be that scholarly citations; mentions in Wikipedia; media coverage; tweets; online usage; blog posts; or changes to government policy. This approach holds great promise as a way past the impasse caused by counting a single metric (the scholarly citation), and it is increasingly used by publishers such as PLOS, BMC, PeerJ, Frontiers and eLife to provide richer metrics on their published articles. But the potential is far greater than this limited scope might suggest – the beauty of the approach is that it can be applied to any digital object: to open notebooks, to data sets, to software code, and to preprints too (all of which typically receive zero scholarly citations).
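To make the idea concrete, here is a minimal sketch in Python of how altmetrics for any digital object might be collated into a single profile. The DOI, the source names, and the counts are entirely illustrative, and no real altmetrics service is queried:

```python
from dataclasses import dataclass, field

@dataclass
class AltmetricProfile:
    """Collated altmetrics for a single digital object, keyed by DOI.

    Source names and counts here are purely illustrative; a real
    service would gather them from live APIs.
    """
    doi: str
    counts: dict = field(default_factory=dict)

    def record(self, source: str, n: int = 1) -> None:
        """Add n events (tweets, citations, blog mentions, ...) from a source."""
        self.counts[source] = self.counts.get(source, 0) + n

    def summary(self) -> str:
        """Render the profile as a one-line report."""
        parts = ", ".join(f"{s}={n}" for s, n in sorted(self.counts.items()))
        return f"{self.doi}: {parts}"

# Illustrative usage, with a made-up DOI and made-up numbers:
profile = AltmetricProfile(doi="10.0000/example.1")
profile.record("scholarly_citations", 3)
profile.record("tweets", 42)
profile.record("wikipedia_mentions", 1)
profile.record("blog_posts", 5)
print(profile.summary())
```

Because the profile is keyed on a DOI (or any persistent identifier), exactly the same structure works for a preprint, a data set, or a software release – which is precisely the point.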

Gaining Feedback, Discussing, Debating and Learning: It is important to talk about the ‘discussion’ that goes on around a piece of work, or the questions and answers that are posed to experts in the field. A finding could be the most impressive in history, but if an author refuses to respond to criticisms, or does not engage with their fellow academics in developing the work further, then an opportunity has been lost.

One solution to this problem is to open up the peer review process itself to the broader public, in a practice known as ‘open peer review’. With open peer review, we can encourage (or require) reviewers to provide their names, and publish the reviews alongside the published article. This opens up a previously secretive process to maximum transparency; it means that reviewer contributions are not lost to posterity; and it means that reviewers can get credit for having performed their review. At the same time, authors can show the evolution of their work and how it has been improved by appropriate peer review. At PeerJ we operate a form of Open Peer Review (as do several other publishers, such as BMC, eLife, and the BMJ) and the reception has been overwhelmingly positive.

Another way to track the discussion, of course, is with altmetrics. Even if a publisher doesn’t provide the ability to comment on an article, people will comment in spaces of their choosing – on Twitter, on Facebook, or in a blog post. With altmetrics, those conversations too can be collated and evaluated.

Where Does This Leave Us: Imagine, for a moment, the tenure or promotion committee of the future. When evaluating a candidate, instead of scanning down a list of where (or alongside whom) they have published, the committee scans down a list of their actual contributions and specific findings (things which today are only available in formal publications). They can see which findings were well received by the community; which generated the most debate; which were most rigorously defended by the author. They can drill backwards in time to see where the idea originally came from; they can see whether an author is openly sharing their work, putting it into the world for early feedback, and trying to better it through well-informed revisions. They can look at a suite of metrics to see whether the work was picked up and reused by others: did it inform government policy? Was it read and commented upon by top academics? Did it have a true impact? And from start to finish they can see and track the entire lifecycle of each contribution – from original idea, through early drafts, formal publication, and subsequent revisions and iterations.

This future scenario may actually be a nightmare for the people who serve on those committees, and it is certainly far more complex than evaluation as practiced today. But if it can be done correctly, then surely it would provide a more holistic and nuanced evaluation of the contribution of an individual, or of their work. Evaluation ‘in the round’, not ‘at a point’.

Researchers would no longer be thought of simply as ‘authors’ (authorship being only the end point of the research process) – instead they could demonstrate their entire contribution and position within their community. They could show how their data sets or software commits contributed, for example. They could show that they are good actors within the community – commenting on, and helping to improve, the work of others. And they could finally get credit for work that goes unnoticed in today’s environment, which cares only about published output.

At PeerJ, we consciously avoid any measurement of significance, impact, novelty, or degree of advance before publication. Instead we are building tools which will allow the community to determine these things for themselves by looking at the work ‘in the wild’ (and increasingly ‘in the raw’). With our new preprint server (PeerJ PrePrints), people are able to ‘show their workings’ and place an early draft in front of their community for broader feedback (using feedback functionality which has just been launched). The peer review process of the PeerJ journal then asks reviewers to comment only on the validity, or soundness, of the science presented, and at the same time promotes transparency by encouraging reviewers to name themselves. Authors are encouraged to publish their peer review and revision history alongside their article. And finally, the published article (as well as any preprint versions) is made available with a suite of article-level metrics, including usage data, referral stats, social media mentions, scholarly citations and so on. Throughout every interaction with the PeerJ system, people can accrue credit for their activities as authors, reviewers, editors, or commenters, using a new system of ‘user contribution’ points. For all the reasons above, we believe that by encouraging this level of transparency, and by exposing the lifecycle of a scholarly manuscript, the work of academics can be more fairly evaluated and more effectively built upon.
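As a toy illustration of a ‘user contribution’ points ledger of the kind just described – the activity names and point values below are entirely hypothetical, not PeerJ’s actual scoring scheme – credit might be tallied across roles like so:

```python
# A hypothetical contribution-points ledger. The activity names and
# point values are illustrative only, not PeerJ's actual scoring scheme.
POINTS = {
    "publish_article": 100,
    "post_preprint": 50,
    "sign_review": 35,
    "comment": 5,
}

def total_points(activities: list[str]) -> int:
    """Sum the credit earned for a list of recorded activities."""
    return sum(POINTS.get(activity, 0) for activity in activities)

# One researcher's (made-up) activity over a year:
history = ["post_preprint", "publish_article", "sign_review", "comment", "comment"]
print(total_points(history))  # -> 195
```

The design point is simply that reviewing, commenting, and posting preprints become countable alongside authorship, rather than remaining invisible.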

Of course, we are fully aware that change like this will not happen overnight, but we do believe that change of this nature is now underway. We see the growing popularity of preprint servers like the arXiv, SSRN, or RePEc (and our own PeerJ PrePrints); we see the development of new sharing tools such as Mendeley and FigShare; we see a string of experiments in the peer review space such as PeerJ, Rubriq, F1000 Research and eLife; we see successful companies being launched in the altmetrics space, such as Impact Story, Altmetric, and Plum Analytics; and we see the now unstoppable rise of the open access movement, which will place all content online in a maximally shareable format. It feels to us as though all the pieces are now in place. It simply needs the will of the academic community to embrace the change, and to move forwards.
