Most formal statistical training for ecologists is terrible, so you’re not alone. Questioning the assumptions is important, especially if you start looking for the answers (and not in Sokal & Rohlf!). I think all applied statisticians have horror stories they can tell.

My bad. Started reading below the line…

On the whole, I think the discussion of “The Decision” has more value than the decision itself: it might bring some people to think about statistics again.

To give you an idea: I’m a biologist with terrible formal statistical training. What I think I know, I taught myself. I think I’m really terrible at doing and interpreting stats myself – yet I’m often interacting with people who do high-impact stuff and who actually think I’m better at stats than they are. This is mainly because I keep questioning their statistical assumptions, and I’ve tried out a *lot* of different approaches (to no avail with my data, sadly). People just don’t do this. They joke about p-values, but often don’t even check whether a method should only be applied if $assumption is met.

Sad, but true.
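As an aside, here is a minimal sketch of the kind of assumption check described above – verifying (approximate) normality before running a test that assumes it. This is purely illustrative; the data are simulated and the particular check (Shapiro–Wilk via scipy) is just one common choice, not something from the original discussion.

```python
# Hypothetical sketch: check an assumption (normality) before
# interpreting a method (Student's t-test) that relies on it.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
a = rng.normal(loc=0.0, scale=1.0, size=30)
b = rng.normal(loc=0.5, scale=1.0, size=30)

# Shapiro-Wilk: a small p-value would flag non-normality.
for name, sample in (("a", a), ("b", b)):
    w, p = stats.shapiro(sample)
    print(f"Shapiro-Wilk for {name}: W={w:.3f}, p={p:.3f}")

# Only lean on the t-test result if the assumption checks pass.
t, p = stats.ttest_ind(a, b)
print(f"t-test: t={t:.3f}, p={p:.3f}")
```

The point is not this particular test, but the habit: state the assumption, probe it, and only then read off the p-value.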

You didn’t read the blog post very carefully. In point 1 of my suggestions I specifically suggest using standard errors.

The estimator is not the estimand, but hopefully it is a good ~~estimator~~ ~~fudge~~ ~~guestimate~~ hash of it

Even if we were all to convert to Bayesianism, we’re not all going to magically find informative priors where none existed before (if they allow uninformative priors, won’t we simply end up with the same estimate as under a frequentist framework?). And I’m not convinced this policy is going to allow researchers to develop these informative priors either.
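The parenthetical point can be made concrete with the textbook conjugate normal–normal model. This is a hypothetical illustration (simulated data, known variance, my own function names, none of it from the thread): as the prior variance grows, the posterior mean collapses to the sample mean, i.e. the frequentist estimate.

```python
# Hypothetical illustration: with a diffuse ("uninformative") prior,
# the Bayesian posterior mean for a normal mean (known variance)
# is numerically the frequentist estimate (the sample mean / MLE).
import numpy as np

rng = np.random.default_rng(0)
sigma = 1.0                         # known data standard deviation
x = rng.normal(loc=2.0, scale=sigma, size=50)
n, xbar = len(x), x.mean()          # frequentist estimate: xbar

def posterior_mean(mu0, tau0):
    """Conjugate normal-normal posterior mean, prior N(mu0, tau0^2)."""
    prec0, prec_data = 1.0 / tau0**2, n / sigma**2
    return (prec0 * mu0 + prec_data * xbar) / (prec0 + prec_data)

print("sample mean (MLE):", xbar)
print("informative prior  (mu0=0, tau0=0.1):", posterior_mean(0.0, 0.1))
print("very diffuse prior (mu0=0, tau0=1e6):", posterior_mean(0.0, 1e6))
# The informative prior shrinks the estimate toward mu0; the diffuse
# prior reproduces xbar to within floating-point noise.
```

So an "uninformative priors allowed" policy does indeed risk yielding the same numbers with different labels; the informative prior is where the actual work (and the actual gain) lies.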

Dunderheids.
