We’ve all got troubles (including the Open Science Framework)

Surprisingly to some and not-so-surprisingly to others, we scientists have our own fair share of troubles in the way we do our day jobs – bias, fraud, irreproducibility, lost results, bad data management, difficulty in publishing inconclusive results. We also struggle to find research funding, and face pressure to publish in high-profile journals and to ‘make it’ (whatever that means) in this big, bad scientific world. The list could go on infinitely.

Philip Ball’s recently published article The Trouble with Scientists has a lot of interesting things to say about bias and reproducibility in professional science. As usual for Philip Ball, he presents a clear narrative and a balanced view of practising science as we know it. What is particularly good is that the article draws the distinction between bias and outright fraud, in a non-blamey kind of way. He doesn’t cast scientists as evil, careless fraudsters – something that seems implicit in many articles of this type about trouble in science. Of course, I am a scientist and a human, so I might just be guilty of feeling a wee bit defensive.

The Trouble with Scientists focuses, in part, on Brian Nosek, a man who has a vision of a scientific utopia. I agree with a number of his ideas, such as trying to make data more open. Personally I think openness is less of a problem than having the ability to sift through all the nitty-gritty details of someone else’s experiments, but nevertheless making data more open is not a bad idea in principle. Nosek’s vision is to correct some of the problems within professional science via an Open Science Framework. His idea is that researchers “write down in advance what their study is for and what they think will happen.” Then, when they do their experiments, they agree to be bound to analyzing the results strictly within the confines of that original plan.

As a scientist, I find this vision more than a bit naive and quite limiting. Active scientific research is not some sort of laboratory practical where you design an experiment and know the result beforehand. Nor is it binary: usually the answer is NOT a simplistic does-this-fit-my-hypothesis yes or no, but more like ‘maybe’ or even ‘WTF?’. Usually results lead you to modify your original hypothesis, which leads to further experiments. What Nosek’s vision seems to imply is that this is not the right way for scientists to think about their research; rather, we scientists must stick within strict bounds and DO NOT DEVIATE, no matter how much we might think deviating is the way forward. This stricture would perhaps prevent error, but I doubt it; I think it is more likely to stagnate scientific discovery, slowing science down to some kind of snail’s pace of tedium.

Recently my group published a paper in which we put forth the hypothesis that water helps mediate protein folding in the initial stages of the folding process. Our data indicate that this might be the key to understanding how proteins start to fold in nature. What our data and our analysis thereof DO NOT do is PROVE that this is the key to understanding how proteins fold in nature. We merely put forth this hypothesis, supported by our data. Based on this result, we are actively trying to prove or disprove (with the emphasis on disprove) our hypothesis with further experiments. We also published this paper NOW (well, OK, two years ago, but close to now) because we want to get that hypothesis out there in the literature so others can support or disprove it. We do not want to wait until we can design the perfect, definitive experiment. In science definitive experiments are pretty hard to come by anyway, until we have technological advances or can build the LHC – look how long Higgs had to wait, and that was relatively fast for science.

[Figure: a representation of water mediating a peptide (GPG) to help it fold in solution (Busch et al., Angew. Chem. 2013)]

Our original experimental design/hypothesis was something slightly different: we merely wanted to look at the hydration of different functional groups in proteins. If we had been restricted to strictly sticking to our original plan, we would have ignored/overlooked this hypothesis in our data. Nosek seems to argue that this is exactly what we should have done, and that if we came up with a new hypothesis we should log it into the book and go from there. To my mind it is better to get your hypothesis out there ASAP so that others can have a look and either support or refute your findings. This seems a long way from bias. In fact, I would argue that getting your hypotheses out there in public helps to eliminate bias. Of course we think we are right – they are OUR pet hypotheses – but the reality is that we may not be, and that is where (we hope) the wider scientific community can help: a community much less biased than we are about our own hypotheses.

About Sylvia McLain

Girl, Interrupting aka Dr. Sylvia McLain used to be an academic, but now is trying to figure out what's next. She is also a proto-science writer, armchair philosopher, amateur plumber and wanna-be film-critic. You can follow her on Twitter @DrSylviaMcLain and Instagram @sylviaellenmclain

2 Responses to We’ve all got troubles (including the Open Science Framework)

  1. David Mellor says:

    A colleague forwarded this post to me and I wanted to comment on some of your points. Let me begin by letting you know that I appreciate your response to our initiatives at the Center for Open Science (http://cos.io) and value the conversation within your community around fixing some of the unintended biases that are present in all of our fields. Part of our mission is to increase the openness and reproducibility of science across disciplines, and having this dialogue with biochemists is key to understanding the processes across different fields.

    I am working on encouraging the practice of pre-registering hypotheses and analysis plans, and I feel that the process addresses the legitimate concerns you highlight. You raise the point that pre-registration would force researchers to ignore findings as they emerge from the incoming results, thus limiting scientific inquiry. However, the intention of pre-registration is not to constrain scientific inquiry at all, but rather to introduce transparency into the reporting of outcomes by identifying whether investigations were confirmatory or exploratory in nature. That is, pre-registration makes explicit the distinction between confirming a pre-specified hypothesis through a predetermined analysis and generating post-hoc hypotheses through exploratory data analysis. Both are necessary, but distinguishing between the two is important for the clarity of the resulting manuscript.

    When reporting on the findings of a pre-registered study, any serendipitous findings would be noted and highlighted as exploratory. Separating the analyses that confirm pre-specified hypotheses from those that explore new ones highlights the relative evidence behind each. The reason for this distinction is that analyses chosen after looking at the emerging results are much more likely to lead down a path of decisions toward the subset of analyses that happen to be statistically significant (the aptly named Garden of Forking Paths: https://osf.io/3k5ap/).
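    To make the forking-paths problem concrete, here is a minimal Python sketch (a toy model with made-up numbers – twenty candidate analyses of pure-noise data and a 0.05 threshold – not anything taken from the paper linked above). If a researcher is free to report whichever analysis comes out significant, the effective false-positive rate balloons well past the nominal 5%:

        import random

        random.seed(1)

        # Under the null hypothesis (no real effect), p-values are
        # uniformly distributed on [0, 1].
        def fake_p_value():
            return random.random()

        n_trials = 10_000   # simulated studies
        n_analyses = 20     # forking paths explored per study
        alpha = 0.05        # nominal significance threshold

        false_positives = 0
        for _ in range(n_trials):
            p_values = [fake_p_value() for _ in range(n_analyses)]
            if min(p_values) < alpha:   # report the "best" analysis
                false_positives += 1

        print(f"Nominal false-positive rate: {alpha:.0%}")
        print(f"Rate when free to pick among {n_analyses} analyses: "
              f"{false_positives / n_trials:.0%}")  # roughly 64%, i.e. 1 - 0.95**20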

    Two additional points you raise are that pre-registration would slow the pace of discovery and that supporting or rejecting any particular hypothesis is rarely a clear “yes or no” proposition. With regard to the pace of discovery, I posit that pre-registration is just as likely to accelerate it: making critical analytical decisions early in an investigation clears the road ahead once data collection begins. Later readers will also have an easier time determining which effects are ripe for replication efforts, thus improving the overall pace of research. However, I admit that this reasoning is simply conjecture and is worthy of future investigation. On the final point about the messiness of interpreting results: a key reason for pre-registering our analytical plans is to set the criteria for those messy judgement calls up front, before seeing the data. This prevents us from letting those decisions be influenced by our unscientific incentives.

    More has been written about pre-registration than I can dive into here. Some additional resources on this topic: Nosek, Spies, & Motyl, 2012 (https://osf.io/f2dem/); Chambers et al., 2014 (https://osf.io/qpsa4/); Nosek & Lakens, 2014 (https://osf.io/e4nxu/).

    Thank you for blogging about these issues!

  2. Laurence Cox says:

    Nosek’s approach struck me as idealised. I can see its value in certain areas like pharmacology, and, in principle, it is a good idea to define how you are going to analyse the results of your experiment before you undertake it, because different analysis approaches might give significant or non-significant results for the same data. (Not that I imagine you would choose the analysis approach that gave the ‘best’ result, but with the pressure to publish positive results one can imagine that some would be tempted.)
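    As a rough illustration of that point (a toy simulation of my own devising, not anything Nosek proposes): take datasets with no real effect, analyse each one in two defensible ways – compare group means, or compare group means after trimming the extremes – and count how often the two analyses land on opposite sides of p = 0.05 for the same data:

        import random

        random.seed(2)

        def mean_diff(a, b):
            return sum(a) / len(a) - sum(b) / len(b)

        def trimmed_mean_diff(a, b):
            # Same comparison after dropping each group's min and max.
            return mean_diff(sorted(a)[1:-1], sorted(b)[1:-1])

        def perm_p(a, b, stat, n_perm=500):
            # Two-sided permutation p-value for the chosen statistic.
            observed = abs(stat(a, b))
            pooled = a + b
            hits = 0
            for _ in range(n_perm):
                random.shuffle(pooled)
                if abs(stat(pooled[:len(a)], pooled[len(a):])) >= observed:
                    hits += 1
            return hits / n_perm

        trials, disagreements = 200, 0
        for _ in range(trials):
            a = [random.gauss(0, 1) for _ in range(10)]  # no real effect
            b = [random.gauss(0, 1) for _ in range(10)]
            sig1 = perm_p(a, b, mean_diff) < 0.05
            sig2 = perm_p(a, b, trimmed_mean_diff) < 0.05
            disagreements += sig1 != sig2

        print(f"The two analyses disagree on 'significance' for "
              f"{disagreements} of {trials} identical datasets")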

    The question is: what do you do when your experiment gives an unexpected result? Ideally, you run the experiment again, having thoroughly checked your process, and see if it gives the same result. However, there will be times when this is not practical – for example, the BICEP2 experiment looking for gravitational waves – and the only way forward then is to re-analyse the data with different assumptions.
