Surprisingly to some and not-so-surprisingly to others, we scientists have our fair share of troubles in the way we perform our day jobs – bias, fraud, irreproducibility, lost results, bad data management, difficulty publishing inconclusive results. We also have trouble finding research funding, pressure to publish in high-profile journals, pressure to ‘make it’ (whatever that means) in this big, bad scientific world. The list could go on infinitely.
Philip Ball’s recently published article The Trouble with Scientists has a lot of interesting things to say about bias and reproducibility in professional science. As usual for Philip Ball, he presents a clear narrative and a balanced view of practising science as we know it. What is particularly good is that the article distinguishes between bias and outright fraud, in a non-blamey kind of way. He doesn’t cast scientists as evil, careless fraudsters – which seems to be implicit in many articles of this type about trouble in science. Of course, I am a scientist and a human, so I might just be guilty of feeling a wee bit defensive.
The Trouble with Scientists focuses, in part, on a man (Nosek) who has a vision of a scientific utopia. I agree with a number of his ideas, such as trying to make data more open. Personally I think openness is less of a problem than having the ability to sift through all the nitty-gritty details of someone else’s experiments, but nevertheless, more open data is not a bad idea in principle. Nosek’s vision is to correct some of the problems within professional science via an Open Science Framework. His idea is that researchers “write down in advance what their study is for and what they think will happen.” Then, when they do their experiments, they agree to be bound to analyzing the results strictly within the confines of that original plan.
As a scientist, I find this vision more than a bit naive and quite limiting. Active scientific research is not some sort of laboratory practical where you design an experiment and know the result beforehand. Nor is it binary: usually the answer is NOT a simplistic does-this-fit-my-hypothesis yes or no, but more like ‘maybe’ or even ‘WTF?’. Usually results lead you to modify your original hypothesis, which in turn leads to further experiments. What Nosek’s vision seems to imply is that this is not the right way for scientists to think about their research; rather, we scientists must stick within strict bounds and DO NOT DEVIATE, no matter how much we might think a new direction is the way forward. Perhaps this stricture would prevent error, but I doubt it. I think it is more likely that it would do much to stagnate scientific discovery – slowing science down to some kind of snail’s pace of tedium.
Recently my group published a paper in which we put forth the hypothesis that water helps mediate protein folding in the initial stages of the folding process. Our data indicate that this might be the key to understanding how proteins start to fold in nature. What our data and analysis thereof DO NOT do is PROVE that this is the key to understanding how proteins fold in nature. We merely put forth this hypothesis, supported by our data. Based on this result, we are actively trying to prove or disprove (with the emphasis on disprove) our hypothesis with further experiments. We also published this paper NOW (well, OK, 2 years ago, but close to now) because we want to get that hypothesis out there in the literature so others can support it or disprove it. We do not want to wait until we can design the perfect, definitive experiment. In science, definitive experiments are pretty hard to come by anyway, until we have technological advances or can build the LHC – look how long Higgs had to wait, and that was relatively fast for science.
A representation of water mediating a peptide to help it fold in solution (Busch, et al. Angew. Chem. 2013)
Our original experiment design/hypothesis was something slightly different. We merely wanted to look at the hydration of different functional groups in proteins. If we had been restricted to strictly sticking to our original plan, we would have ignored/overlooked this hypothesis in our data. Nosek seems to argue that this is exactly what we should have done, and that if we came up with a new hypothesis we should log it into the book and then go from there. In my mind it is better to get your hypothesis out there ASAP so that others can have a look and either support or refute your findings. This seems a long way from bias, in my mind. In fact, I would argue that getting your hypotheses out there in public helps TO eliminate bias. Of course we think we are right – they are OUR pet hypotheses – but the reality is that we may not be, and that is where (we hope) the wider scientific community can help: a scientific community that is much less biased than we are about our own hypotheses.