In which I tire of the old paradigms

Successful moments in scientific research are famously rare, and people deal with them in various personal ways. Many treat a promising experimental result with suspicion bordering on paranoia, refusing to believe what is right before their eyes because an experiment couldn’t possibly have brought good tidings, could it have? Like a young swain disappointed in love one too many times, they harden their hearts against any glimmer of hope or joy.

But when something convincingly good happens to me in the lab, I’m the first one to jump up onto the lab bench and do a victory dance. There are far too many failures in my line of work not to celebrate a success, no matter how short-lived or misguided it may turn out to be. For me, the expression “don’t get your hopes up” is an imperative that goes against human nature. Will I really be less disappointed when something turns out not to be true just because I refused to celebrate at the beginning?

No, actually, it will suck either way, so I reckon I might as well enjoy it while it lasts.

Recently, I ended up with a nice result on a minor problem I’ve been chipping away at for a few months now – the icing on the cake. Because I am a particularly sceptical scientist, I’d devised several ways of looking at the same question, had reproduced the experiments a number of times under different conditions with a large number of controls, and had also performed a few experiments to rule out the key formal possibilities. Everything looked as solid as anything ever can in this business. Feeling that irrepressible urge to share the love, I opened up Twitter and told the world that I’d proved my hypothesis.

In amongst the shower of congratulatory and humorous quips that came back was one sour lemon: I can’t remember who sent it or what the exact wording was, but in essence the tweet chided me for saying I’d proved my hypothesis instead of that I’d disproved my null hypothesis.

Around the same time, I’d had a spectacular failure: a carefully nurtured multiwell plate of cells packed full of interesting questions flew off the microscope onto the floor and was ruined. When I tweeted that I’d had to bin an experiment because it hadn’t worked, someone tweeted back (again, I can’t recall who or exactly how it was phrased) that there was no such thing as a failed experiment – all experiments should be designed to give an answer no matter what the outcome, and it was wrong of me to have claimed it had failed. When I explained that the cells had ended up on the floor, this person replied that I had still learned something: as near as I could understand, his stance was that I’d falsified the hypothesis that I could perform an experiment to completion without screwing it up.

Right. Now, I think that Karl Popper had some really interesting and important ideas, but – like Thomas Kuhn and others – I don’t believe that there is one single “scientific method”. And in particular, I don’t think that the vast majority of scientists use falsificationist methodology in their everyday work, aside from its ghostly remnants in the way we calculate p values for statistical purposes. When was the last time, for example, that you saw the concluding line of an abstract stating, “Here, we disprove the notion that protein X is not involved in pathway Y”? What fascinates me is that I still encounter people who seem to think that science should be done, or conceptualized, this way. It might be a byproduct of education: after all, I clearly recall being taught Popperian methodology in high school biology, and for all I know it’s still being aired in classrooms.

I’d be interested to hear what others think about why this Popperian view remains so compelling to some scientists. Is it a talisman against the growing suspicion that our research methodology is hopelessly messy and subjective, and that we can never really discover the truth? Does it cast some illusion of control, some spell that might separate our personalities from our science? Would it, if rigorously applied, serve to stem our inappropriate hopes and desires for a favored hypothesis to be true?

About Jennifer Rohn

Scientist, novelist, rock chick
This entry was posted in Scientific method, The profession of science. Bookmark the permalink.

53 Responses to In which I tire of the old paradigms

  1. I feel your pain on this one 😉

    Popper and Kuhn made retrospective attempts to rationalise the way they thought scientists should or do work, but these are detached from the way scientists actually work. I don’t think we should ignore them, but neither should we be overwhelmed by them as we go off to prove our latest theory whilst trying not to screw anything up!

  2. Kav says:

    Although I acknowledge that Popper’s scientific method can be a good guide at times, very little of my work (which is observationally based) fits perfectly into the ‘falsification’ method. However, the cult of Popper is loud and vocal; adherence to Popper seems more important to some than actually trying to find out what is going on.

  3. Jenny says:

    I’d like to know why that cult persists amongst scientists, who should know better: the daily work we do – even those of us engaged in hypothesis-driven research – bears little resemblance to this framework.

  4. Jim says:

    Indeed, I am my own worst skeptic and editor 😉

    I always feel that attempts to stick staunchly to the Popperian null hypothesis just lead to confusion in science communication, where, despite being scientifically rigorous (to a public who wouldn’t appreciate the subtleties of scientific rigour anyway), pretty hard-won facts come across as decidedly tenuous.

    So enjoy your little bench top victory jigs 😉

  5. Jenny says:

    As long as Health and Safety don’t find out about them.

  6. @stephenemoss says:

    Do people really still get so hung up on Popper? To me his ideas lend an interesting philosophical aspect to the world of experimentation, but do not seriously undermine our belief in hypothetical proof or disproof (albeit probabilistically rather than absolutely). If your protein X is involved in pathway Y, and you’ve shown this to be the case in 10 different cell lines, as well as in yeast, flies and mice, there is no need to resort to strings of double negatives to state your conclusion.

    And in some instances, despite Popper’s admonitory wagging finger, your results may lead to the development of new treatments for human disease. For example, the Popperian view might have insisted that just because vascular endothelial growth factor appears to stimulate blood vessel growth in various cell culture systems, there is no reason to assume that it works in the same way in humans. Yet Avastin, a humanised monoclonal antibody against this protein, does just that, and is widely used to extend the lives of patients with colon cancer.

    There’s an interesting blog by Chemiotics II on Popper and protein structure at http://luysii.wordpress.com/2010/08/08/a-chemical-gedanken-experiment/

  7. rpg says:

    Just make sure you wear safety goggles when you do.

  8. Jenny says:

    Thanks for that link, Stephen – interesting stuff.

  9. Tom says:

    I can’t remember reading a paper in which the authors claimed they ‘proved’ their hypothesis – ‘shown’ or ‘demonstrated’ maybe, or ‘confirmed’ if they’re feeling sure of themselves, but to me all of those carry the unspoken rider ‘to a generally accepted level of certainty’. Most often the results are only admitted to have ‘indicated’ or even ‘suggested’ whatever-it-is. And this is in chemistry, mind, not your messy ol’ biological stuff.

    Actually I do recall citing a paper in my PhD on a compound’s ‘synthesis and structure proof’. But that was from, like, 1956 (the paper, not the PhD) when I understand we scientists were a lot cockier about the whole being-the-masters-of-creation thing. So much for ‘old paradigms’.

  10. Jenny says:

    You’re right – I never use the word ‘prove’ in my papers. The most certainty I’ve ever expressed is, “Taken together, our data strongly suggest that…”

    No wonder we wind up the journalists. 🙂

  11. Alice Bell says:

    On method, or lack of it – have you read Feyerabend? Though personally, I can’t recommend the introduction to this book enough on the topic. The author basically argues that Popper is a resource for thinking about science – one that describes some science, and that some scientists apply, but that will only ever be a partial capturing of the process. Instead, he takes a more empirical track of looking at the ways science does, in action, get defined, often very instrumentally (the book then goes on to outline some lovely historical case studies). I’ve similarly been interested to see a form of applied Kuhnianism – where people try to be scientific, or make an area scientific, by putting it through a revolution, even though Kuhn built this as a way of thinking about science, not a prescription for it (even less normative than Popper).

    Re: the truth conversation you seemed to be having with Sylvia on Twitter (sorry if I missed something, I only scanned it) – I can’t remember who it was who (somewhat cheekily) said Popper was the first postmodernist… but there is an argument to be made that the idea of falsification is based on the idea that although there is a real reality out there, we are never likely to find it. That Popperian science is a science of continual questioning. We might argue that this was, for Popper, the power of science over what he saw as the pseudo-science of political ideology, which claimed to have found the truth.

  12. Jenny says:

    I’ve heard of Feyerabend but have never read him. Thanks for the recommendation – though at first your ‘null hypothesis’-like construction (“I can’t recommend…”) had me confused about how you felt about the book. 🙂

    When Popper is interpreted in the manner you describe in your last paragraph, then I actually agree with this stance. What gets me is the pedantic semantics crowd – the people who claim it’s wrong not to use the double-negative stance when reporting scientific results. It could be, as I think Sylvia was saying, that this is a misrepresentation of Popperian ideology.

  13. Thanks for the referral. There’s lots more stuff along this line in the blog. Look at the categories “Philosophical Issues Raised”, “Theological Implications of Simple Chemistry” for more. I’ve been accused of snaring the unwary into religion, but that’s not the point. There is a lot about our existence which is downright miraculous, and it’s hard for me to accept that it just happened. Take a look at the current post for what’s ‘just happening’ inside our neurons as we sit here and blog.

  14. Grant says:

    Perhaps one way of looking at it is that science as a whole operates closer to the falsification notion, in a way that individual work does not. (Perhaps only review papers, with their tidy ordering of things after the worst of the ‘mucking around’ is done, present something closer to the tidy falsification ideal?)

    Some individual work, after all, “merely” presents data pretty much ‘as is’. Other work presents a hypothesis. Many (most) papers present ‘an argument for a case’, for which they offer support rather than formal falsification, with later work from others adding competing tests.

  15. ricardipus says:

    I work in an area that has embraced the much-ballyhooed concept of “hypothesis-free science” – usually invoked in the context of genome-wide scans (or sequencing). The idea goes something like this: we are intentionally making no hypotheses about what in the genome might contribute to condition X, and are examining it in its entirety instead.

    I confess I haven’t checked if anyone else thinks this is nonsense, but I call BS on it – there’s an implicit hypothesis built in there anyway: that there is at least one genetic variant that contributes susceptibility to condition X (and, by extension, that it can be detected using the genome-wide approach being used).

    Hypotheses are everywhere, whether or not you feel they need to be falsified. But in many cases they’re kind of irrelevant to the matter at hand (like your Twitter contact’s idea that you could have hypothesized that you could complete the experiment without catastrophic plate juggling).

    Possibly, I am far off topic by now, but my brain has been slightly overcooked in dealing with the hypothesis that I will get this grant written and delivered today.

  16. Jenny says:

    I am quite fond of the hypothesis-driven approach – but you’re right, even when I was doing my fishing-expedition screen, the implicit hypothesis was that I was going to discover cool new genes. I think as long as people take pains to be objective, or at least to minimize bias (by, say, blinding their samples for tests susceptible to bias), and do all the proper controls, good scientific method is being performed.

  17. Stephen says:

    I have mixed feelings about Popper’s philosophy (as far as I understand it). I think it’s nice to have it on in the background as a kind of mood music, ever reminding us of the contingent nature of the result of any experiment.

    But I tire easily of the hard Popperian view that hypotheses can only be falsified (as you did with your twitter interlocutor). On a day-to-day basis, it seems perfectly OK for me to test my little hypothesis and to be glad if it comes through. Even if one thinks of it as being “proved for now”, I think that is somehow preferable to considering it “not yet falsified”.

    There are some hypotheses that do seem to have solidified into a substance that is hard to distinguish from scientific fact. Does anyone seriously doubt that hydrogen is the simplest naturally occurring element, or that DNA is the stuff of heredity?

  18. Jenny says:

    This may be where Occam’s razor comes in. The simplest explanation is that hydrogen is the simplest element – there might be a simpler one (perhaps called something like flyingspaghettimonsterium) but if there is, we couldn’t know about it.

  19. I think the applicability of Popper depends, as others have said, on the field, and also on the hypothesis in question. There’s no doubt that the idea is an important one to be mindful of, but as a guiding principle it’s very negative and doesn’t inspire much creativity or drive.

    It seems to come down to semantics – most scientists would understand very well that their hypothesis isn’t ‘proven’, but from a functional perspective, it’s just too hard to outline the exact state of the knowledge when referring to an idea. “Proved that X is the case” is stronger than “Showed that X is the case” or “Suggested that X is the case”, and those terms substitute well for “X has been extensively corroborated by a rigorous regime of positive and negative attempts to verify it”.

  20. What Stephen said.

    Pace Alice (Bell), I think the problem is that some people (including, dare I say, some with a background in social sciences / “science studies”, but far more so people who “imbibe” these folks’ work, like politicians) do see in the theories of Popper / Kuhn / Feyerabend, and their later interpreters, a way to “design a better environment for discovery”.

    I might even suggest that some of the current obsession with “excellence” comes from a notion that “Kuhnian paradigm shifting” discoveries are the only ones that really matter – a view that clearly comes from the science studies arena, and a narrative of scientific discovery and progress as a process, at least as much as from the actual science practitioners.

  21. Stephen says:

    I take back my earlier comment because I have just discovered the element that precedes hydrogen in the periodic table. I have named it Notrogen: atomic number, zero; atomic mass, zero. Appears to be chemically inert.

  22. Steve Caplan says:

    Very relevant topic!

    I think it’s very difficult to strike the right balance between the “positive-optimistic-enthusiastic-proved-it” attitude and the “skeptical-negative-paranoid” approach. I feel that leaning too far to either side, at least on a consistent basis, leads to “burn-out”. I probably spend more time counseling students about how to strike the right middle ground than I do talking about actual science results.

  23. ricardipus says:

    Stephen has just falsified the Universe, methinks.

  24. Jenny says:

    Stephen, don’t tell the homeopaths about Notrogen or they might start looking for a better mechanism.

    Steve, I have mixed feelings about the notion of counseling students on their reactions. Sure, they need to be cheered up when things go awry, but do you really tell them not to be so happy in the opposite situation? In my view, reactions to science are almost always manic-depressive, and everyone has to learn his or her own way to deal with successes and failures. But not embracing the good times while they last seems too stern a lesson for a young scientist.

  25. Roban says:

    One response to Popper is the idea that by using Bayesian probability updating you can make your analysis inductive, in a probabilistic sense. That is, you can justifiably make inferences about universal truths given necessarily limited observational data. Obviously we do this in practice, so the idea that there’s a formal, mathematically justified procedure for doing so is appealing, but it also has its problems. This paper was thought-provoking for me:
    “Philosophy and the practice of Bayesian statistics,” Andrew Gelman and Cosma Rohilla Shalizi. The authors argue that even with Bayesian statistics, you still have to do something resembling classical hypothesis testing, even if only in an informal, graphical way, because your prior can never include every possible hypothesis. Similarly, new models/hypotheses/priors are constructed primarily in response to graphical exploration of the data, particularly where it isn’t well explained by existing models. So they’ve returned to a deductive, hypothesis-testing viewpoint, while rejecting Popper’s simplistic formulation, and acknowledging the importance of formulating new hypotheses through informal exploration of data.
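
    To make the “probability updating” concrete, here is a minimal beta-binomial sketch in Python – the prior, the numbers and the scipy calls are my own invention for illustration, not anything taken from the Gelman and Shalizi paper:

        from scipy.stats import beta

        # Flat Beta(1, 1) prior on an unknown success rate p
        # (say, "this perturbation produces the phenotype").
        prior_a, prior_b = 1, 1

        # Invented data: 18 "successes" in 24 trials.
        successes, trials = 18, 24

        # Conjugate update: posterior is Beta(a + successes, b + failures).
        posterior = beta(prior_a + successes, prior_b + (trials - successes))

        print(posterior.mean())               # posterior mean of p, about 0.73
        print(posterior.ppf([0.025, 0.975]))  # central 95% credible interval

    Gelman and Shalizi’s point, as I read them, is that no such tidy update can tell you whether the binomial model itself is any good – that check is the quasi-Popperian step.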

  26. Jenny says:

    That’s really interesting, Roban. In fact I was talking to a colleague recently about how we’ve been dealing with our (incredibly subjective) phenotypic data – basically, how our cells look under the microscope. If you do the initial analysis completely blind, it’s sometimes hard to see anything for all the noise. So we’ve been looking at samples with full knowledge of what the conditions are, identifying what visual trends seem to be associated with individual conditions, and then after establishing these criteria, analyzing subsequent experiments in a blind fashion. It’s completely not by the book, but it really seems to help, and in the end seems reasonably objective.

  27. ricardipus says:

    One issue with Bayesian statistics, of course, is that you need to come up with a sensible prior probability, which is impossible for many (most?) kinds of scientific experiment (Jenny’s cell phenotypes, for example).

    Perhaps I’m missing the point, though.

  28. cromercrox says:

    Back in the early days of transformed or pattern cladistics, in the early 1980s, palaeontology was a hothouse of Popperian falsification. Of course, mature disciplines like evolutionary biology can afford to dig deeply into such things, unlike molecular and cell biology, which is still basically stamp collecting and has no theory worth the candle.

    (runs away)

  29. rpg says:

    Come back, you coward.

    *Walks after Henry*

    *Catches up*

    No, I’m not arguing. I agree strongly that mol/cell biol have no theoretical underpinning, and that’s a big problem for scientists trying to make sense of anything today.

  30. cromercrox says:

    By the way, the title of your post evokes for me a poem by a bloke called Dowson, I think, which has repeating lines that approximate thus (working totally from memory)

    But I am sick and tired of an old passion
    I have loved thee Cynara! In my fashion.

    This poem contains one or two lines used elsewhere as book titles and such, e.g. ‘The Night Is Thine’.

  31. cromercrox says:

    You are not arguing? Come over here and say that.

  32. I’m not sure I should get involved here, since DC’s rule #2 is “never trust anyone who uses the word paradigm” (rule #1 is “never trust anyone who uses the word stakeholder”).

    Despite UCL’s statistics department having been a hotbed of Bayesians ever since Dennis Lindley’s time, I have never felt that there was any satisfactory solution to the problem of the unknown prior in experimental work (in some clinical problems there are valid priors). I much prefer Ronald Fisher’s solution to the problem, dating from the 1930s, which is to ignore the prior and maximise the likelihood. The snag, in general, is that you need to know the distribution of your variable. That isn’t a problem for single-molecule work, because your data come as distributions.

    In a wider context, I have never been able to understand why people still use t-tests and the like rather than randomisation tests, which are at least as powerful, with fewer assumptions (and have the virtue of making the assumptions quite clear).
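
    For anyone who hasn’t met one, a randomisation test simply asks how often randomly shuffled group labels reproduce an effect as big as the one observed. A minimal two-sample sketch in Python – the data are invented, and the choice of statistic (difference of means) is just one possibility:

        import numpy as np

        rng = np.random.default_rng(0)

        # Invented measurements for two conditions.
        a = np.array([4.1, 5.0, 4.8, 5.6, 4.9])
        b = np.array([5.9, 6.3, 5.7, 6.8, 6.1])

        observed = a.mean() - b.mean()
        pooled = np.concatenate([a, b])

        # Recompute the statistic under random relabellings of the data.
        n_perm = 100_000
        hits = 0
        for _ in range(n_perm):
            rng.shuffle(pooled)
            diff = pooled[:len(a)].mean() - pooled[len(a):].mean()
            if abs(diff) >= abs(observed):
                hits += 1

        # Two-sided p-value: the fraction of relabellings that match or
        # beat the observed difference by chance alone.
        print(hits / n_perm)

    The only assumption on display is that the labels are exchangeable under the null – which is the sense in which the assumptions are made quite clear.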

  33. Fascinating discussion, and I agree with both @Jenny and @Alice about the uses and limitations of Popper. It’s interesting to hear @Austin suggest that the emphasis on ‘excellence’ comes from ‘science studies’ people. I think if you were to trace the term, and the related concepts around concentration/selectivity and prioritisation, you would find it in ‘science studies’-type papers – but those written by scientists and engineers like Polanyi, Weinberg and Ziman (i.e. whilst ‘science policy’ was whatever eminent scientists thought, and before ‘science studies’ became a field of social science research).

  34. Jenny says:

    You can still trust me, David – my use of the dreaded p-word was tongue-in-cheek.

    I don’t understand the comments of rpg and cromercrox suggesting that cell/molecular biology has no theories/is just stamp collecting. Of course you start with an observation, but once you have a pathway or functional module to play with, there are many hypotheses that can be tested. Or are we talking about Theories and Hypotheses with a capital T and H?

  35. cromercrox says:

    Yes.

    Back in the day, you’d come across what looked like a perfectly innocuous paper describing a newly discovered species of fossil fish, together with a systematic revision of the genus, or family, or whatever. But dip more than a toe into it, and it would be a fulminating brew of invective on Hypotheses and Theory (in capitals), Popper, Kuhn and Falsification.

  36. Tideliar says:

    My take on teaching Popperian paradigms is that we need students to understand the basics of hypothesis building and falsification – the underlying tenets of “The Scientific Method”, as it were. What they then learn through ‘doing’ science is that there is actually no such damned thing. But without that grounding in “how” to do an experiment, it’s easy to go off base, miss the idea of controls, etc.

    Biochem Belle has something on this on her blog too.

  37. Jenny says:

    I don’t buy it, Tideliar. The concept of controls is completely separate from the concept of falsification. You can practice a serviceable scientific method without falsification at all: I propose that X is true. I set up well-controlled experiments. I take the supportive data together and conclude that my hypothesis is well supported. I have not “proven” it 100%, but I never claimed I could. End of story.

    Or am I missing something?

    Why bother teaching kids something that is seldom used and isn’t that helpful?

  38. Ian the EM guy says:

    Microscopy is always a tightrope-walking act in this respect, particularly electron microscopy, where your sample size in terms of cell numbers is naturally very small indeed. We always question whether we are biasing towards extreme examples: are we missing subtle phenotypes, are we just taking the eye-catching pictures? Rest assured the methodology you describe below is one that we frequently use here in the EM unit. We have the advantage, I suppose, that we always work collaboratively with the people running the project, so we can always get them to look at the resulting pictures, or even at samples on the scope themselves, in a blind way to see if they agree with our opinions.

    PS The back electron microscope room is an excellently private place to do your little victory dances. I do my own, albeit infrequent, victory dances there, and I can highly recommend it. Feel free to come down and use it any time you like!

    “So we’ve been looking at samples with full knowledge of what the conditions are, identifying what visual trends seem to be associated with individual conditions, and then after establishing these criteria, analyzing subsequent experiments in a blind fashion. It’s completely not by the book, but it really seems to help, and in the end seems reasonably objective.”

  39. Jenny says:

    Never mind the dancing, Ian – is there room to nap down there? The confocal room is too cold.

  40. Ian the EM guy says:

    Let’s put it this way, you wouldn’t be the first to go there for a nap! 😉

  41. Richfrommich says:

    My favourite experience with Popper was watching one speaker destroy the previous speaker at a symposium. The previous speaker defended himself by replying that, because the second speaker’s objections were so obviously correct, they could not possibly be falsified and therefore were unscientific.

  42. Jenny says:

    I hope he got a laugh at least!

  43. steve caplan says:

    Jenny,

    I should probably rephrase that – I don’t counsel them not to celebrate or be too happy – you are correct. But I do feel the need to counsel them to prepare for the fact that the next step may not continue to bring forth the hoped-for results. You are absolutely correct that this is a manic-depressive lifestyle, with one step forward and often two going backwards. At least for myself, I’ve always felt that giving in unrestrainedly to optimism is likely to leave me spent of adrenaline and “out for the count”. But this may well vary dramatically from person to person.

  44. ricardipus says:

    Thank you David for making me feel a little more confident in my comment about priors, since I am certainly no statistician.

    The biggest issue with randomisation/permutation is compute time and capacity, isn’t it? Large-scale permutation (say, of genome-wide microarray data) is very, very computationally intensive. I imagine people avoid this by using more straightforward t-tests and the like.

    That said, t-tests get used and abused for all kinds of inappropriate things – comparing distributions that are clearly non-normal in shape jumps to mind (or, I suppose, distributions where the observations are sparse enough that it’s either impossible to prove they’re from a normal-ish distribution, or not worth trying). Amazing how many people know about t-tests, but not about their non-parametric cousins (even to the “just enough to be dangerous” level of understanding I have) – see the toy sketch below.

    Probably way off topic now, sorry Jenny.
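
    A toy comparison of the two, with invented data – the lognormal skew is deliberately extreme, and scipy makes both tests a one-liner:

        import numpy as np
        from scipy.stats import ttest_ind, mannwhitneyu

        # One roughly normal sample, one heavily skewed one.
        x = np.random.default_rng(1).normal(5.0, 1.0, 30)
        y = np.random.default_rng(2).lognormal(1.6, 0.8, 30)

        # The t-test assumes normality; Mann-Whitney only compares ranks.
        print(ttest_ind(x, y))
        print(mannwhitneyu(x, y))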

  45. steve caplan says:

    @Jenny and @Roban
    Just adding to Jenny’s point – also regarding visual microscopy observations: at least for these phenotype-based experiments, I find that when someone from the lab goes to the scope with a given hypothesis (i.e. under condition X we expect to see Y), then regardless of what happens to Y, if there are changes in Z, A or B, people are unlikely to make that observation. Unless they are very experienced, or the effect is so dramatic that it really stands out. It seems to me that our brains are programmed to deal with looking for a single visual phenotype at a time.

    For this reason, if the experiment “doesn’t work” and X has no effect on Y, I am usually adamant about students looking for other effects, or at least having a peek myself so as not to miss something important. I think that physicians have a term for this – something about elephants or zebras – as in missing the obvious because of preconceived notions.

  46. Grant says:

    Don’t ask me why, but this side-line in the conversation reminds me that the library in my college from my Ph.D. days had those big (Queen Anne?) chairs with side head rests.

    It used to be a lark to come in and find half a dozen Ph.D. students in various stages of stupor, or completely zonked out. (Particularly after a round or two in the college bar…)

  47. Torbjörn Larsson, OM says:

    “I clearly recall being taught Popperian methodology in high school biology, and for all I know it’s still being aired in classrooms.”

    I should have been so lucky. As a physicist it took years before I even heard of testing outside of statistics. Granted, it may be a different age and culture.

    Anyway, I don’t see the fuss. Clearly falsification is only applicable as an ideal, but when it is available it is the most robust approach. It is mostly easier to see why and when something doesn’t work.

    So since we can be fairly sure of what is wrong, we can eventually know what is right. As Stephen says, when we whittle away enough contenders and a theory passes enough tests, it eventually passes into robust fact.

    In principle no other theory of science can do that without taking on board unquantifiable risk. Further, the fact that falsification itself passes a falsification test speaks to its consistency. I don’t know any other theory that can go meta on its own ass. Usually they point out that “we can’t know”, such as when induction on induction shows that there will always be room for exceptions, or even reversals of trends.

    The complaint that it isn’t used in practice seems to me like a complaint that “we use Newton’s equations for the daily work and not relativity – so maybe we should rethink discovering and accepting relativity in the first place?”

    (But really I think it is more useful than relativity. Take controls, for example, as you mention: without testing you wouldn’t *need* an alternative hypothesis; every effect would be “true”. I have even seen it used in biology, where theories of abiogenesis have been *tested* on derived predictions, instead of being just proposed and merely gathering confirming indications.)

  48. I had a PhD student who regularly used to doze off mid-afternoon in the small dark room where we did our calcium and pH imaging experiments. She wasn’t by any means the only person to doze off in there, but she was a repeat offender. As a lab joke we started keeping a small camera handy, and we would then take a snapshot of anyone who fell asleep for a “Lab Snoozers Gallery”.

    PS My student overcame her tendency to afternoon sleepiness and is now a Teaching-specialist Lecturer (cf a permanent Instructor post, in US parlance).

  49. Cath@VWXYNot? says:

    Ricardipus, this reminded me of my entry into Anthony Fejes’ genomics haiku contest!

    (Shiny new gadgets!
    Hypothesis? Function? Nah –
    Let’s just sequence it!)

    BTW, a genome centre director of our mutual acquaintance told me last week that he’s decided to finally start listening to grant reviewers who demand proper hypotheses 🙂

  50. Cath@VWXYNot? says:

    My ex-boyfriend (a fellow grad student from another lab in my department) and I once discussed how our very different prior research experiences caused us to have completely opposite null hypotheses when it came to our expectations of experiments working. I had an atrocious honours project research experience, with an absent supervisor and not even any good negative data (I wanted to title my dissertation “why my cells died”, but no-one would let me), and so I assumed that no experiment would ever work, and was delighted when one did. My then-boyfriend, on the other hand, had been handed a well-designed and successful honours research project, and so assumed that all experiments would always yield a technical success, at the very least. He was therefore terribly disappointed when things didn’t work.

    Interestingly, we had roughly equivalent experimental success rates in our PhD projects (once corrected for the fact that I was a year behind him), but I was much happier about it.

  51. TJ says:

    I came across this discussion while researching Popper. I am not a scientist but a police detective, and I try to use the concept of falsification as a way of avoiding bias in violent crime investigations. I find developing a null hypothesis useful, though sometimes difficult. It also directs me to find evidence that I might otherwise miss, and is useful when questioning suspects, because I sound more fair-minded when discussing the evidence.

  52. Matt C. says:

    Popper versus Wittgenstein on Truth, Necessity, and Scientific Hypotheses
    Victor Rodych
    http://www.springerlink.com/content/p5jk3w8933034154/

    Popper had nothing that interesting to say except when he bashed Plato. On the other hand, a critical component of high-quality results is controls. You can’t just pull reliable causality out of your rear. Science has run into certain limits, but we continue on. The problem is that things are getting more speculative. I’m not that comfortable with the sloppy thinking that is called science today, but there always is an element of probability. I still think causation is important, even if we don’t say it’s absolute. Science is not just a bunch of people running around in lab coats. We are not just looking at what is in front of us – that’s obvious – we are looking for how things got that way.

  53. MGG says:

    “When you hear hoof-beats, think of horses, not zebras” – but we are taught all about zebras so that when we don’t see horses, we are allowed to look for zebras???

Comments are closed.