Mr and Mrs Duct, and their daughter, Miss Con

A bit of a kerfuffle in last week’s Nature. Turns out that we practising scientists are all making it up, or at least fiddling Western blots to make things look better than they are. According to some learnéd⁰ commentators (who is that Liu person anyway? Why does s/he get up my nose so?) we’re all so stressed by pressure to publish that we’ll make up any old shit and send it to Nature or JCB, who are so pleased to get anything that it gets published.

Sorry, I’m a little aggrieved by the implications here.

Only last week I took a good, hard look at my young apprentice’s data, compared them with the new data I had just generated, and announced in Churchillian tones,

“The hypothesis is dead. Long live the hypothesis!”

We had such a beautiful theory. It just worked. It made sense. It would have helped a freshly-minted PhD to a Nature paper. But I went into the boss’s office with the new data and said, “This is a dead hypothesis.” And we sat down and talked about it, and I went to talk with my young apprentice, and we were all ever so slightly disappointed for a while (because, let’s face it, we’re not always rational) and then I announced that the current data are consistent with not one but two new hypotheses.

And they are eminently testable.

Rejoice! This is science. And it works.

⁰ Sarcasm. It’s quite obvious most of the loudmouths haven’t been near a lab in their lives.


30 Responses to Mr and Mrs Duct, and their daughter, Miss Con

  1. Matt Brown says:

    And we’ve got a discussion thread about this here.

  2. Bob O'Hara says:

    So, you’re nice and honest (or at least honest), but is everyone? It seems reasonable that scientists, being humans, will have some bad eggs in their number. I think we have to accept that they do exist, and make sure we have the structures in place to deal with them.
    Most importantly, though: can you explain why anyone would want to send their paper to a mechanical digger? Whilst it may be appropriate in many cases, it does seem to be an extreme form of self-criticism.

  3. Richard P. Grant says:

    About the misconduct, yes. But not about that poor, helpless hypothesis, lying slain in the cold Australian snow.
    [wait a minute… —Ed]

  4. Richard P. Grant says:

    Actually Bob, it’s a real wrench to let go of these things. But it feels so much better when you do. Of course there are bad eggs, but the point is they get found out. Sooner or later.

  5. Maxine Clarke says:

    Ah, yes, Dr Liu. A longstanding correspondent.

  6. Richard P. Grant says:

    Diplomatically put, Maxine.

  7. Scott Keir says:

    bq. This is science. And it works.
    I have that t-shirt. But it is far too rude for me to wear anywhere, so I wear it in private.

  8. Maxine Clarke says:

    I’m talking about many, many years, Richard.
    Sigh.
    Online commenting facilities are meat and drink to a certain type of outlier, and I think this provides much food for thought in idealistic plans for “openness of discussion” (a process to which I enthusiastically subscribe in principle).

  9. Jennifer Rohn says:

    Richard, I think you’ve hit on the key point here: the truth will out. There’s a discussion on the LabLit forums right now that basically boils down to the following: why should anyone risk their career grassing up a colleague when any flawed work will ultimately fail to be reproduced and will go the way of all duff hypotheses? Fraud is a terrible thing, but in the long run, I do wonder how much harm it can do to a robust system. This does not mean that I don’t think it’s a good idea to address the pressures that make researchers desperate enough to do anything so stupid. But ultimately – the system self-corrects.

  10. Martin Fenner says:

    I’d like to disagree with the statement that the truth will come out eventually and that the system self-corrects. This is like saying we shouldn’t worry about doping in sports because it will be discovered sooner or later anyway. Seriously, fraud and dishonesty will always be part of every competitive environment, be it professional sports, performing arts or science.
    And we just recently had the case of a paper retraction after another research group had wasted many resources trying to reproduce the results. I think the issue here is rather whether scientific misconduct is so common that we should do more about it (as the article implies) or whether the current system works well enough that nothing needs to change.
    In my personal experience the pressure to publish creates a lot of behaviour that I would not call misconduct or fraud, but unsportsmanlike. Something that creates a bad feeling, but isn’t worth reporting.

  11. Henry Gee says:

    Ah, Martin, but the whole notion of sportsmanship comes from the milieu of the gentleman amateur. As soon as people turn professional and money is involved, notions of sportsmanship fly out of the window.
    I’m with Jenny and Richard – in the end, the truth will out, but one must be aware that the time when truth emerges might be decades or even centuries away, and in the meantime careers might be thwarted, fortunes made and lost – things that matter to us now, but which are but motes in the eye of Brahma.

  12. Richard P. Grant says:

    I think, to be honest, that comparing science with sport is totally meretricious. The point of one is the antithesis of the other.
    Sorry Martin.
    We’ve had this discussion elsewhere on NN, haven’t we? There is a ‘healthy’ amount of competition necessary to the scientific endeavour, but the behaviour we are observing tells us there is indeed something rotten in the state of Denmark.

  13. Jennifer Rohn says:

    And it’s not mainly fraudulent results that people might spend years fruitlessly trying to reproduce. Probably a lot of the legitimately acquired scientific record falls under that category.
    The truth will still out.

  14. Richard P. Grant says:

    I actually find myself in that position — of trying to figure out if the previous work is actually worth anything.
    If something is published, and you later find out the experiment showed the effect just once in 15 attempts, is that discovering fraud? If a fluorescence figure in a paper does not actually show what the authors claim (and that claim then passes into scientific lore as fact), then is that fraud?
    I don’t believe so. They are mistakes, through inexperience maybe, and although it’s bloody frustrating to have to make sure your own data are twice as secure before you can overturn the errors, that’s no reason to condemn the whole enterprise.

  15. Martin Fenner says:

    It’s always the same discussion, be it duplicate papers, ghost authorship or now fraud. And I believe that we need mechanisms to keep these behaviours in check.
    But I can hardly argue against someone who uses words like meretricious.

  16. Eva Amsen says:

    I was actually impressed when someone in my lab was asked to submit the original films from a Western blot, because she had cut two lanes out of the middle of the scanned image (the experiment was originally bigger, then part of it got left out of the final manuscript, no time to redo gels, etc.).
    She had to send in the films, and someone had a look at them to make sure the image in the paper really only had lanes cut out (and not half of the figure changed in exposure or whatever they thought she did). This was JCB.

  17. Richard P. Grant says:

    bq. And I believe that we need mechanisms to keep these behaviours in check.
    I think that’s addressing the symptoms, not the cause.
    Well, maybe. There’ll always be the dishonest, and unless you make it prohibitively difficult for everyone to do anything, there’ll always be those who cheat. But by encouraging the vulnerable to be honest, rather than rewarding dishonesty (which is what happens in this environment), we might do some good.
    Ultimately of course, there are too many scientists for the amount of money available.

  18. Boris Cvek says:

    Many fraudulent articles will be forgotten before they are ever exposed. Such “scientists” get money without doing the work, and enjoy that money quite safely. For that reason, it is optimal to publish the “results” in “stuffy” journals.

  19. Richard P. Grant says:

    I’m going to have to ask to see your data, Boris.

  20. Maxine Clarke says:

    I tend more to the Martin line, I have to say. We should not be complacent about these things. We would not be happy about other professions being complacent, if the newspapers, media et al. are to be believed.

  21. Richard P. Grant says:

    I don’t think anyone in this thread is actually being complacent, Maxine. I think Henry and Jennifer feel pretty strongly about scientific fraud. I know I do.

  22. Henry Gee says:

    I sure do. Scientific fraudsters need chuchin’ up. Otherwise they should be taken out and shot, before being run out of town on a rail and fed to the chickens. OK, I was joking. I’d never do that to a chicken.

  23. Richard P. Grant says:

    What would you do to a ch—
    no. Don’t answer that. I don’t think mere mortals were supposed to possess such knowledge.

  24. Peter Aleff says:

    As an outsider, I am touched by the many assertions that science is self-correcting and by the unproven and unprovable article of faith that “the truth will still out”. The promoters of this mantra may be right in the very long term, as when Claudius Ptolemy’s plagiarism and data faking got exposed two millennia later (“The Crime of Claudius Ptolemy”, Robert R. Newton, Johns Hopkins University Press, 1977), or when Galileo Galilei’s father showed that Pythagoras had lied 2000 years earlier about one of his core findings, claiming simple arithmetical ratios in the string lengths producing tones instead of the squares of those ratios (Jamie James, The Music of the Spheres, Copernicus, Springer Verlag, 1993, pages 92-93).
    For shorter time frames, the evidence shows that some branches of science, particularly medicine, exhibit much stronger wagon-circling mechanisms and biases that tend to enshrine errors and frauds because admitting them would embarrass the profession and expose some of its members to potential liabilities.
    For instance, fifty years ago, many medical doctors in America were still steeped in the by then already widely discredited pseudoscience of eugenics. When they were faced with a new epidemic of blinding among premature babies that began with the introduction of eye-damaging fluorescent lamps, they blamed “bad germ plasm” for the problem and decided to eliminate it by “not preserving” the “defective persons” who carried it.
    The most potent means of preserving preemies had long been to give them supplemental oxygen, so these highly respected “scientists” rigged a big show trial in which they withheld this breathing help from all preemies for their first two days to reliably kill off the most vulnerable among them who were at the greatest risk for the blindness.
    Then these doctors, trying to work for the “greater good” of eliminating the bad germ plasm, enrolled only the survivors in their study and proclaimed with great authority and fanfare the predictable result that the incidence of blindness had been drastically reduced among their study population. They glossed over the mass killing they had caused to intentionally bias their sample and presented oxygen withholding as an effective and harmless prevention.
    Despite the much-touted need for constant verification in science, their never-replicated and experience-contradicting bogus study instantly became the cornerstone of modern intensive care nursery procedures.
    This long exposed but still denied fraud continues today to harm many babies for those doctors’ idea of the “greater good”, even though their “bad germ plasm” theory and all their eugenics have long been debunked, and none of today’s doctors can refute the blinding role of the fluorescent nursery lights, although some tried with another transparently rigged but prominently published study. (See the never refuted documentation of this case plus further related frauds and their official cover-ups at retinopathyofprematurity.org/01summary.htm.)
    Yet, promoters of science like to close their eyes to the clear evidence for such frauds and continue to claim that an alleged “self-correcting” mechanism works in their field.
    As long as such blatant frauds continue to be ignored, denied, and covered up by an evidence-averse and self-defending scientific community, none of its members should lull themselves with unexamined pronouncements about “self-correction”, or parade their profession’s “scientific ethics” with pious lip service to an alleged ideal their community keeps trampling.
    Respectfully submitted,
    Peter Aleff

  25. Pedro Beltrao says:

    In the long run there must be some self-correction, since it is impossible to build upon incorrect ideas. Unfortunately there are far too many people suffering, either because of fraud or because of excessive competition. Underlying both is a very strong pressure to publish that is unfortunately not at all directly proportional to the work/effort put in. We do need a bit of competition to have an efficient distribution of resources, but the current selection procedures are not very good for the number of people that have to be evaluated.
    A few possible solutions would be:
    – more resources or less people going into science
    – less focus on the journal IF and more on papers citations or merit evaluated by peers
    – breaking down scientific contributions to even smaller size and allowing citations of these units
    – transparent research agendas that anyone could join

  26. Pedro Beltrao says:

    Update: I did not mean to strike those out.
    A few possible solutions would be:
    1) more resources or fewer people going into science
    2) less focus on the journal IF and more on paper citations or merit evaluated by peers
    3) breaking down scientific contributions into even smaller units and allowing citations of these units
    4) transparent research agendas that anyone could join

  27. Boris Cvek says:

    Richard: I’m going to have to ask to see your data, Boris.
    Boris: 🙂 My best work is not based on my own data, but on creating new frameworks for, and by, the published data of others… I think the “post-data” era in biology is coming.

  28. Boris Cvek says:

    There is a piece of my work on the Drug Discovery Today website. I think it is the first biologically-oriented article citing Thomas Kuhn’s work :-).

  29. Maxine Clarke says:

    By “we” I meant those various groups that make up “the scientific community”: researchers, regulators, journals, funders, employers, etc — I wasn’t meaning to imply you and Jenny by use of that word, Richard, of course you take these matters seriously. Apologies from me if I have given the wrong impression.
    What I meant to convey was that based on my experience of many years of receiving scientists’ letters about technical problems with each others’ published work, I do not think it can be right to assume that the system will self-correct, as Martin also writes.
    The News and Opinion forum has one stark example here, but there are other comments in that thread also. (The person I refer to did not mean to be anonymous, I think, ie he/she is not deliberately taking refuge in anonymity).
    Richard, you give an example of a “mistake”, the 1 out of 15 attempts. Surely this kind of thing can be avoided by full description of the methods: sample size and so on? Journals can help by providing detailed advice, and also scientists need, and should have, proper training in how to present data in a statistically robust way.
    Eva’s example is a very good one. J Cell Biol does indeed check every figure it publishes. The Nature journals operate a spot-check system. “Beautification” of images was becoming a serious issue, partly because younger researchers found it hard to know the limits of digital manipulation of images, and their older supervisors etc did not understand the technology. That gap has led to real problems over faked data. You can see here for more details, including several editorials about this practice, all free to access.

  30. Richard P. Grant says:

    No apology necessary Maxine.
    In my experience a lot of the ‘searching for the perfect image’ is not because that one shows something different, rather that it shows the same, but on a gel with the lanes in the right order, the western that is correctly exposed, etc. I think I was brought up properly in this regard: one does not cut and paste; one runs the gel again until the data are displayed conveniently.
    So yes, I show my ‘best’ gels, but in the producing of them I’m actually adding robustness because I’m repeating the experiment. If I get a different answer, well, then that’s a different matter.
    This one in 15 business. I’m a little leery of giving too many details, but part of me is thinking ‘sod it. I spent the best part of a year trying to reproduce it, so I don’t care who reads this’.
    There was an effect. The effect was demonstrated by a band on an agarose gel. The way I think it happened is that a theory was formulated under which this effect should be expected, so the experiment was repeated until the effect was seen. The authors claim a dose-response effect on this particular gel, but not only (I’ve just realized) is there no justification for claiming an increase in dose, there is also no requirement in journals that such experiments be repeated a number of times. So they tried the experiment, it seems to me, multiple times and stopped when it ‘worked’.
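    Just to put a rough number on why ‘repeated until it worked’ bothers me so much: suppose, purely for illustration (the one-in-twenty figure is my assumption, not anything from that paper), that each attempt has a 5% chance of throwing up a spurious band. The odds of seeing at least one ‘hit’ in 15 tries are then better than even, as this toy Python calculation shows:

        # Toy calculation: the chance of at least one spurious 'hit' when an
        # experiment is simply repeated until it 'works'. The per-attempt
        # false-positive rate p is an illustrative assumption, nothing more.
        p = 0.05                 # assumed chance of a spurious band per attempt
        n = 15                   # number of attempts
        print(1 - (1 - p) ** n)  # ~0.54: better-than-even odds of one 'hit'

    Repeat an experiment often enough and almost any effect will eventually put in an appearance.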
    Yes, I’m rather pissed off about it, because my first task in this lab was to perform the experiment and confirm the finding. How the hell can I publish that I’ve been unable to do so? The obvious criticism is that I’m crap at science. (Let’s forget that the mechanism that was proposed just Does Not Work, as far as I can tell).
    However, I am now, two years later, gathering strong, positive data that make mechanistic sense and are in opposition to that 1 in 15 result.
