The ability of scientists to reproduce published experimental data from other laboratories is the foundation for all scientific advance. Indeed, the whole point of publishing is to educate other scientists (and the public in general) and to build a scaffold of information that will allow others, immediately or at some indeterminate time in the future, to build on the published findings and advance them to the next level. It is no wonder that recent concerns with the ability to reproduce published biomedical research have been taken very seriously by the scientific community.
How prevalent is the problem? A recent article by Monya Baker in the journal Nature sheds some light on this question; in effect, it depends on the field of science that one examines. Not surprisingly, perhaps, the more quantitative and exact sciences of physics and chemistry rank considerably higher on the reproducibility scale (according to a survey of 1,576 scientists). This, of course, is unlikely to be due to lower ethical standards in biomedical research than in, say, physics; most likely it stems from the inherent variability of biological samples. While I will discuss some of the apparent causes of irreproducibility below, it is clearly of great importance to ensure the highest degree of reproducibility possible in biomedical science. Time, money, the honor of the biomedical research community, and ultimately, lives are at stake.
While there is some disagreement over the scope of the problem, I think there is likely a strong consensus among scientists that we need to do as much as we can to limit irreproducibility in science. So what should and shouldn’t be done to address the problem?
Many issues and concerns in academia, as in other walks of life, are treated superficially by adding administrative burden merely as an outward demonstration that "something is being done," without taking practical steps to solve the problem. Responding in this way is a serious mistake that actually exacerbates the problem, because it diminishes the significance that should be attached to the issue. Unfortunately, the National Institutes of Health (NIH) has adopted such bureaucratic measures, compelling researchers who submit grant proposals to "jump through hoops" and provide a series of explanations of how they will ensure that their studies will be maximally reproducible. This, however, is unlikely to lead to any real change in reproducibility, because researchers will simply write whatever they are asked to write in order to be eligible for funding. Simply stating that "we will verify cell lines, or test antibodies in our studies" will not improve the situation. The litmus test for reproducibility needs to be applied to what we have done and intend to publish, rather than to vague speculation about what we will do in the future if we obtain funding.
So where can reproducibility be effectively implemented? The key place to enhance reproducibility is in peer review for publication: in ensuring that papers published in scientific journals are well vetted by expert reviewers, and that the experimental design and reagents used are thoroughly described. For example, the shunting of the Experimental Design section of papers to a hard-to-find supplement is unacceptable. Experimental Design (also known as Materials and Methods) should ideally follow the Introduction, in a prominent and visible place within the manuscript. Not only should all the experiments be described in considerable detail (enough detail that any graduate student could carry out the experiments described), but all of the reagents and antibodies used must be listed with their catalog numbers (and, where applicable, lot numbers). This is because some reagents, such as polyclonal antibodies, are derived from the serum of animals such as rabbits. When the rabbits eventually die and new antibodies are generated in new animals, the resulting immune response and the quality and specificity of the new antibodies may be quite different from those of the initial batch. This potential problem has considerable ramifications, and it has led to lively discussion about the use of antibodies in scientific research. While there are issues with the use of antibodies, I believe that they are too important to sideline, as some have suggested.
Scientific journals have an inherent responsibility to ensure that the peer review process includes a careful examination of factors relating to reproducibility. While some of these factors amount simply to properly cataloging the reagents used, others concern whether the authors carried out appropriate controls to determine that the reagents (within reason) are valid or validated. This is a key reason why so-called "post-publication review," in which readers rather than expert reviewers decide on the value and significance of published papers, is a dangerous idea, at least for biomedical research. Many of the validations and controls needed to ensure reproducibility are specific to the field of research, and an interested reader who is not directly acquainted with the field may not be able to judge such reproducibility criteria.
To make things even more challenging, many today view the internet as "the great equalizer," giving everyone an equal voice. By this logic, outstanding journalists who write for the New York Times are no better than any individual with a Twitter account or a blog. By the same token, for some strange reason, there are those in the realm of science who think that any website calling itself a "scientific journal" has a voice equal to that of bona fide scientific journals, which carefully select their editorial board members and reviewers. Every scientist in biomedicine today is familiar with the endless junk emails that arrive in our inboxes with requests to join editorial boards. Just yesterday, another of these popped into my mailbox:
Dear Dr. Caplan, Steve,
Greetings for the day
We are completely aware of your busy schedule; however, we are taking the liberty to remind you again regarding our proposal to you, to be an honourable Editor for Source Journal of Ophthalmology. Because we didn’t hear any reply from your side. Please accept our sincere apologies if this mail causes any inconvenience.
Awaiting for your positive reply
We are glad to introduce our new journal Source Journal of Ophthalmology to you. Source Journal of Ophthalmology is a new journal launched by the Source Journals. With an open access publication model of this journal, all interested readers around the world can freely access articles online at http://sourcejournals.com/journal/source-journal-of-ophthalmology-sjop/ without subscription.
We are soliciting scholars to form the editorial board. This is our immense pleasure to invite you as an Editor of our journal. We aware of your international reputation, moreover, your unmatched expertise and experience will help the journal. We would be fortunate if you accept our request. If you are interested, please send your resume along with your areas of interest to email@example.com.
- Articles submitted by Editors will be published at free of cost.
- Based on the quality of the manuscript suitable waiver can be provided for your recommended manuscripts (co-workers, students etc., referred by you).
Editor roles and responsibilities:
- Processing of articles by assigning reviewers for the manuscript and making decision by considering the reviewer’s comments.
- Active interaction with Source blog members.
- Monitoring and suggestions to improve the standards and quality of Source Journals.
Kindly let us know your willingness by sending us the following
- Complete CV
- Short Biography
- Research Interest
Looking forward to work with you
Aside from the embarrassing grammar, this very common type of scam request raises crucial ethical concerns about the publishing process: if I am invited to serve on an editorial board for ophthalmology, when I have never worked in this area and have no specialization in the field, then the editorial board and review process of such a journal is rendered useless. And this is but a single example of the hundreds of such requests that I receive (and I suspect most researchers receive) on a regular basis. The obvious concern with regard to reproducibility is that issues will crop up even in highly stringent, peer-reviewed journals that carefully select editors and reviewers; in journals such as these, with no standards at all, it would be a wonder if anything published were reproducible.
So while many researchers ignore such requests and avoid such journals like the plague, there must be some critical mass of researchers who publish in and read these journals; otherwise there would be no market for them. Along with the suggestions above for dealing with irreproducibility, I believe it is necessary for scientists to take on these scam journals and expose them for what they are, because they lower the standards and credibility of scientific research. We must take a stand in denouncing, and actively dissociating ourselves from, such damaging enterprises.
However, there is much more that can be done to promote greater reproducibility in biomedical research, because, as noted, there are concerns even in the most respected and prestigious journals. Some of the remedies may need to be determined field by field, as Dr. Daniel Klionsky has organized for the field of autophagy. Society journals, such as the Journal of Biological Chemistry (from the American Society for Biochemistry and Molecular Biology; ASBMB) or Molecular Biology of the Cell (from the American Society for Cell Biology; ASCB), have taken the initiative, either introducing new regulations for peer review or publishing important policy papers on reproducibility. Organized societies such as ASCB and ASBMB are crucial for providing the leadership and impetus for researchers to implement best practices, and I look forward to continued discussions about making biomedical science more reproducible.