How oil companies are helping to combat global warming

I just saw this news that Statoil is to be allowed to drill holes in the bottom of the North Sea:


“Sure many will be pleased to hear that Statoil have been permitted to conduct a huge development in the North Sea http://t.co/27sWylcF” (Jon Tennant, @Protohedgehog)

Now, oil companies (and that’s what Statoil is: it has nothing to do with statistics) are often vilified for being behind global warming, so it’s nice to see them trying to help solve the problem. I thought it was worth explaining how they’re going to do this: luckily last week I met an oil engineer who explained it all to me.
Continue reading

Posted in Friday Fun, Silliness | 1 Comment

The long and the short of papal reigns

If you’ve been following the news, or twitter, you’ll have noticed that the current pope, Pope Benedict XVI (pronounced Kss-vee) has decided to retire at the end of the month, to spend more time with his twitter account. Anyway, the Grauniad had an interactive thingy up, which, they suggested, “illustrates the idea that a long-serving pope is often followed by one whose papacy is much shorter”. Well, possibly. But you really need to do a bit more than stare at a fancy graphic. You actually need to do some analysis of the data.
Continue reading

Posted in Statistics | 15 Comments

A Brief Service Announcement

Sorry for not posting much, um, recently. This is partly because I’ve been busy, and also because I’ve been avoiding writing an obituary for The Beast, who died a few weeks before Christmas. In the words of Trillian, normal service will be resumed as soon as we work out what “normal” is.

Posted in Aaaaaagh | 6 Comments

Changing ecologists’ statistics to statistics about nature

Whilst my back was turned, I had another paper published online early. It’s rather embarrassing that I didn’t notice, because I’m an Executive Editor for the journal. The paper is, of course, superb (most of the work was done by Konstans, my co-author, not me). But it got me thinking a bit about some of the deeper issues.
Continue reading

Posted in Ecology, Statistics | 8 Comments

Prizes Straight from the Pits of Hell

This should make predicting this year’s science Nobels easier. Last month, Republican congressman Paul Broun, who also happens to be a member of the US House of Representatives science committee, described evolution, the big bang theory and embryology as ‘lies straight from the pit of hell’:

Today one of the Nobel committees got their revenge, giving the Physiology or Medicine prize to work on embryology. Ha, take that!

So, coming up this week: prizes for the Big Bang tomorrow, and chemical evolution on Wednesday. Perhaps Dawkins will get the literature prize on Thursday, just to really wind up the US politicians.

Who, then, for the Peace Prize? Any suggestions?

Posted in Silliness, Uncategorized | 8 Comments

Altmetrics: what’s the point?

A couple of weeks ago Stephen (of this parish) generated a lot of discussion when he complained about the journal impact factor (JIF). I must admit I feel a bit sorry for the JIF. It’s certainly not perfect, but it’s clear that a lot of the problems aren’t with the statistic itself, but rather with the way it is used. Basically, people take it too seriously.

The JIF is used as a measure of the quality of science: it’s used to assess people and departments, not journals. And this can affect funding decisions, and hence people’s careers. If we are to use a numerical metric to make judgements about the quality of science being done, then we need to be sure that it is actually measuring quality. The complaint is that the JIF doesn’t do a good job of this.

But a strong argument can be made that we do need some sort of measure of quality. There are times when we can’t judge a person or a department by reading all of their papers: a few years ago I applied for a job for which there were over 600 applicants. Imagine trying to read 3 or 4 papers from each applicant (a couple of thousand papers, all told) to get an idea of how good they are.

Enter the altmetrics community. They are arguing that they can replace the JIF with better, alternative metrics. They are trying to develop these metrics using new sources of information that can be used to measure scientific worth: online (and hence easily available) sources like twitter and facebook (OK, and also Mendeley and Zotero, which make more sense).

Now, I have a few concerns about altmetrics: they seem to be concentrating on using data that is easily accessible and which can be accumulated quickly, which suggests that they are interested in work that is quickly recognised as important. Ironically, one of the criticisms of the JIF is that it only has a two-year window, so it downgrades subjects (like the ones I work in) which have a longer attention span.
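For reference, this is how the JIF’s two-year window works, sketched in R with invented numbers (not any real journal’s figures):

    # JIF for year Y = citations received in Y by papers published in Y-1 and Y-2,
    # divided by the number of citable items published in Y-1 and Y-2.
    cites_in_2012_to_2010_2011 <- 500   # citations received during 2012 (made up)
    citable_items_2010_2011    <- 100   # papers published in 2010-2011 (made up)
    jif_2012 <- cites_in_2012_to_2010_2011 / citable_items_2010_2011
    jif_2012  # 5: citations arriving three or more years after publication never count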

But I also have a deeper concern, and one I haven’t seen discussed. It’s a problem that, if it is not solved, utterly undermines the altmetrics programme. It’s that we have no concrete idea what it is they are trying to measure.

The problem is that we want our metrics to capture some essence of the influence/impact/importance that a paper has on science, and also on the wider world. But what do we mean by “influence”? It’s a very vague concept, so how can we operationalise it? The JIF does this by assuming that influence = number of citations. This has some logic, although it limits the concept of influence a lot. It also assumes that all citations are equal, irrespective of the reason for citation or where the citing paper is published. But in reality these things probably matter: being cited in an important paper is worth more than being cited in a crappy paper that nobody is going to read.

But what about comparing a paper that has been cited once in a Very Important Paper to one that has been cited three times in more trivial works? Which one is more important? In other words, how do we weight the number of citations against the importance of where they are cited to measure influence?
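To make the problem concrete, here’s a toy calculation in R (every number is made up): two equally arbitrary weighting schemes that rank the same two papers in opposite orders.

    # Citation profiles: counts of citations from "important" and "trivial" venues.
    paper_1 <- c(important = 1, trivial = 0)  # cited once, in a Very Important Paper
    paper_2 <- c(important = 0, trivial = 3)  # cited three times, in trivial works

    weights_a <- c(important = 5, trivial = 1)  # one guess at venue weights
    weights_b <- c(important = 2, trivial = 1)  # another, equally defensible, guess

    sum(paper_1 * weights_a)  # 5: under scheme A, paper 1 is the more influential
    sum(paper_2 * weights_a)  # 3
    sum(paper_1 * weights_b)  # 2: under scheme B, paper 2 wins
    sum(paper_2 * weights_b)  # 3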

I can’t see how we can even start to do this if we don’t have any operational definition of influence. Without that, how can we know whether any weighting is correct? Sure, we can produce some summary statistics, but if we don’t even know what we’re trying to measure, how can we begin to assess if we’re measuring it well?

I’ve sketched the problem in the context of citations, but it gets even worse when we look at altmetrics. How do we compare tweets, Facebook likes, Mendeley uploads etc? Are 20 tweets the same as one Mendeley upload? Again, how can we tell if we can’t even explicate what we are measuring?

If someone can explain how to do this, then great. But I’m sceptical that it’s even possible: I can’t see how to start. And if it isn’t possible, then what’s the point of developing altmetrics? Shouldn’t we just ditch all metrics and get on with judging scientific output more qualitatively?

Unfortunately, that’s not a realistic option: as I pointed out above, with the amount of science being done, we have to use some form of numerical summary, i.e. some sort of metric. So we’re stuck with the JIF or other metrics, and we can’t even decide if they’re any good.

Bugger.

Posted in The Society of Science, Uncategorized | 41 Comments

A new really old version of The Elements

I hope you’re all fans of Tom Lehrer (“Mr. Lehrer’s muse [is] not fettered by such inhibiting factors as taste.” – NYT, apparently). Well, via the Improbable Research people (“Mr. Abraham’s muse [is] not fettered by such inhibiting factors as taste.” – NYT, possibly) comes a new old version of Lehrer’s classic The Elements:

The lyrics are below the fold.
Continue reading

Posted in Uncategorized | 3 Comments

Research with impact

After Stephen’s posts about impact factors and the like, I have a couple of serious posts brewing. But for now (and because it’s Friday), I want to admit to my reaction today to an advert I got about a journal, which told me that I should Stand Out in my Field, and Be Visible by submitting to this journal (run by a reputable publisher). One of the reasons for publishing there was that the journal has High Impact: its impact factor is 1.95.

“Pah!” I thought. I’m executive editor of a journal with an impact factor over 5. In fact 5.093 (that 3 at the end is terribly important, no matter what anyone tells you). Why would I want to publish in such a lowly journal?

On the other hand, the journal has published a paper about TARDIGRADES IN SPACE. That beats impact factors any day.

Posted in Science Publishing, Silliness | 9 Comments

lme4: destined to become stable through rounding?

(this would have appeared on my blog on Nature Network, but they pulled the plug the day before. Sometimes correlation does not mean causation)

Fans of R and mixed models are aware of the lme4 package. This started out as Doug Bates re-writing the lme package using the new capabilities in R (S4 objects, for those who care about such things). It goes back to at least 2006, but isn’t stable yet: a source of mild amusement for me over the last few years. In software development, an unstable version has a number starting with 0 (e.g. 0.4), and once the developers are happy with it, it gets upgraded to v1.0. The core R developers released R 1.0.0 on the 29th February 2000, citing it as the nerdiest date possible, being an exception to an exception.
Anyway, the version numbers of lme4 show the problem: v0.999375 was released in 2008. I just checked and the latest version is 0.999999-0-1. This is more compactly (and more confusingly) written as (1 - 1e-06)-0-1.
I have been worried that lme4 will never become stable, but this latest version mollifies me with the thought that the developers can’t go on forever, so eventually lme4 will become stable when the machine precision forces it to be rounded up to 1.0.
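To see when the rounding would actually kick in, here’s a quick check in R (the future version numbers are, of course, hypothetical):

    # Doubles carry roughly 16 significant digits (.Machine$double.eps is about 2.2e-16),
    # so a version number that keeps appending 9s eventually becomes indistinguishable from 1.
    0.999999 == 1     # FALSE: the current version is still resolutely unstable
    (1 - 1e-10) == 1  # FALSE: several releases later, still unstable
    (1 - 1e-17) == 1  # TRUE: machine precision finally rounds lme4 up to stable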

Posted in R, Silliness | Comments Off

Welcome to The Menagerie

In my first OT post I mentioned The Menagerie I live in. So, while GrrlScientist is attending to parts of it, I thought I’d introduce some of the residents, including some of the shyer ones.

[Pictured: Shrimp; Laceleaf]
Continue reading

Posted in Life in the Menagerie | 12 Comments