is what one of my collaborators told me this week.
She was talking about my science, not about my over-arching propensity to worry about everything (although I have that too). I am running a series of experiments, mostly focused on checking and double-checking my results at the moment. I am within a mere inch of writing a paper (hopefully it will be a big un!) and I want to make damn well sure I have tested my hypothesis to the limit of what I can do, in order to (realistically) support my conclusions.
As with much science, we don’t have a definitive no-way-anyone-would-argue-with-us-ever result. These kinds of obvious results are few and far between in real science. We have measured something with a variety of experimental techniques, have thrown in a few computational techniques too for good measure, and everything INDICATES we see something. INDICATES. We get consistent results from each technique, which all lead to a particular conclusion. In this case our work concerns the process of protein folding (the particulars of how proteins ‘coil up’ or fold into their functional forms remain largely unknown), but it could describe many bits of science, really.
And this is as good as it is going to get for the moment: we have a lot of data that indicates something. Science builds up a picture of the natural world – one experiment, one theory and one hypothesis at a time. It is fundamentally about having a theory which fits all of the observables (things we measure or calculate), and that is it. Science is essentially, as the skeptics would point out, about evidence. Building up real, measurable evidence for your hypothesis, conclusions and ideas about what the data means is the stuff of science.
So why do I worry? I worry I have missed something, but I think this is normal. I am a quadruple checker, by the way, so I do my best to eliminate at least the really obvious stuff. Then I get other people to check it. This is also why I like peer review, because then another set of people (who ostensibly don’t know me) check it as well. I used to worry (as a PhD student) that I might be wrong (shock, horror!). Until I learned that being wrong is perfectly OK. Months or even years after we publish our results, someone might come up with some new data that proves me wrong, or even right. It is all part of the process: nothing ventured, nothing gained.
What I have been struck by recently is the myriad reports on scientific fraud. Actual fraud: not ‘I accidentally missed something’, but rather ‘I am not getting the results I need, so why not create them?’. Alok Jha wrote a wonderful piece in the Guardian this week about fraud in psychology, provocatively entitled ‘False positives: fraud and misconduct are threatening scientific research’. If you look at the actual numbers he reports, they are mostly, thankfully, relatively low. For instance, a PLoS survey found that 1.97% of scientists admitted to falsifying data (but those are only the ones who admitted it).
Fraudster stories are newsworthy because they are, well, shocking. But scientists are human, and humans do sometimes make stuff up – for a news story (remember the Gay Girl in Damascus fraud?) or, in a scientist’s case, a Nature paper. Why would anyone expect a big group of humans to all be perfect? Scientists aren’t above the plane of normal human statistics.
What I worry about, with regard to fraudsters, is that these scientific fraud stories will be interpreted as ‘see, scientists just make stuff up’ – which is largely not the case in science – rather than the more realistic ‘sometimes people just make stuff up’ (like Johann Hari), which I think is a bit more to the point.