On the Evolution of Camouflage in Urban Environments

One of my offices is in the Geozentrum at the Goethe University here in Frankfurt. This morning this was on the side of the building:
Today's Mystery Bird
I only saw it because it was flapping about a bit in the wind. If you can’t ID it, then here’s a cropped photo:
Continue reading

Posted in Silliness | 5 Comments

Making reviewing boring stuff less boring

Over at the Scholarly Kitchen, everyone’s favourite source material for winding up OA advocates, Phil Davis asked about something only tangentially related: Do Uninteresting Papers Really Need Peer Review?

In it he lays out a view that is perhaps selfish, but understandably so. He outlines why he agrees to review only a few papers, and what sort he will review:

For me to accept an invitation to review, a paper has to report novel and interesting results. If it has been circulated as a preprint on arXiv, then I don’t benefit from seeing it a second time as a reviewer. Similarly, the paper must also pique my interest in some way. Reviewing a paper that is reporting well-known facts (like documenting the growth of open access journals, for instance) is just plain boring. Test a new hypothesis, apply a new analytical technique to old data, or connect two disparate fields, and you’ll get my attention and my time.

The only other category of manuscripts that I’ll accept for review are those that are so biased or fatally flawed that it would be a disservice to the journal or to the community to allow them to be published. These papers must really have the potential to do harm (by distorting the literature or making a mockery of the journal) for me to review them.

Which, I guess, means he’s only interested in reviewing a small percentage of the papers that come his way. As a journal editor, I find this attitude worrying, but as a potential reviewer, I understand it perfectly: there is only so much time in the day, so I don’t really want to spend it reading boring manuscripts (a plea: if you find yourself not wanting to review a manuscript, please suggest another victim, e.g. a post-doc or senior grad student. That really helps me, and whoever gets to review gains experience).

Davis’ solution to the problem is to suggest that a boring paper doesn’t need to be reviewed fully:

Perhaps all that is needed is to send null and confirmational results through perfunctory editorial review. These articles may only require passing a checklist of required elements before being published in a timely fashion. The result may be a cheaper and faster route to publication, and for some kinds of publications, this is exactly the desired outcome.

As an editor, my reaction to this is “eek”. In the comments, Kent Anderson lays out an important complication:

The purpose of peer review is both to improve the paper and to help it find the right outlet.

But let’s not kid ourselves — not every paper needs to be peer reviewed as rigorously as some, and there is no single thing called “peer review.”

Finding the right outlet is something that editors can certainly help with through a perfunctory review. At Methods in Ecology & Evolution, we receive quite a few papers that are not suited to us, and I try to suggest alternative outlets. But improving a paper is something that often needs more time: someone who knows the area has to read the paper carefully and look for ways it can be improved. As an editor, I often don’t have enough knowledge of the area myself, which is why we ask reviewers.

But I am also sympathetic towards reviewers who agree to review a paper that turns out to be boring. The sorts of papers Davis is discussing are boring because they replicate other work without adding much that is novel. So they are a priori identifiable as not terribly exciting, but worthy, and this should influence the choice of which journal the paper is sent to. In particular, somewhere like the Journal of Negative Results – Ecology & Evolutionary Biology might be a typical choice. But given that we already know such a paper is going to be difficult to find reviewers for, and is also not going to have a huge impact, can we not try to make them more palatable for readers, without compromising on the reporting of the work? My feeling is so 2008: Yes, we can.

Most papers we receive at JNR are written like standard papers: an introduction that lays out the problem and the literature, the methods describing a sanitised version of what was done, the results, and finally a discussion explaining why this work is so important that Nature should have accepted it at once. My feeling is that we could improve the readability of papers if we cut down the introduction and discussion massively. This is especially so when the paper is a replication: the intro could be pretty much “We repeat the seminal work of Gee & Grant, but in the Greater Rumple Horned Snorkack. Read their paper for why this is so interesting”. And the discussion could be similarly brief: “We found similar results, but with a stronger effect of Ewok urine. This might be because of the smaller size of the Greater Rumple Horned Snorkack.” There is probably no need to explain why this is important for global warming: if it were that important, either someone else would already have said it, or you would be publishing the work in a journal with a higher impact factor, like PLOS One.

Would this help? I guess if we told potential reviewers the word count, it might. It might also help to get more negative results published, by lowering the barrier to writing them up. Would losing the text harm science? The only harm I can see is that students writing up negative results might not learn to put their work in context, but one would hope that not all of the thesis is like this, and if that were seen as a problem, extra discussion could be added to the thesis rather than the paper.

Is this something we should try at JNR-EEB? And for those of you not in ecology or evolutionary biology, does this sound familiar, or is the tedium of the discussion section one only we face?

Posted in Uncategorized | 20 Comments

How oil companies are helping to combat global warming

I just saw this news that Statoil is to be allowed to drill holes in the bottom of the North Sea:


“Sure many will be pleased to hear that Statoil have been permitted to conduct a huge development in the North Sea http://t.co/27sWylcF” (Jon Tennant, @Protohedgehog)

Now, oil companies (and that’s what Statoil is: it has nothing to do with statistics) are often vilified for being behind global warming, so it’s nice to see them trying to help solve the problem. I thought it was worth explaining how they’re going to do this: luckily last week I met an oil engineer who explained it all to me.
Continue reading

Posted in Friday Fun, Silliness | 1 Comment

The long and the short of papal reigns

If you’ve been following the news, or twitter, you’ll have noticed that the current pope, Pope Benedict XVI (pronounced Kss-vee), has decided to retire at the end of the month, to spend more time with his twitter account. Anyway, the Grauniad had an interactive thingy up, which, they suggested, “illustrates the idea that a long-serving pope is often followed by one whose papacy is much shorter”. Well, possibly. But you really need to do a bit more than stare at a fancy graphic. You actually need to do some analysis of the data.
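As a rough sketch of the sort of check I mean (a minimal one, in Python; the file name is a placeholder, and the reign lengths themselves you would have to pull from the Guardian’s data): if long papacies really are followed by short ones, the correlation between successive reign lengths should be negative, and a permutation test tells you whether it is more negative than chance alone would give.

```python
# Minimal sketch: is a long papal reign typically followed by a short one?
# Assumes a file with one reign length (in years) per line, in chronological
# order; "papal_reigns.txt" is a placeholder name, not real data.
import numpy as np

reigns = np.loadtxt("papal_reigns.txt")

# Lag-1 correlation: each reign against the one that follows it.
r_obs = np.corrcoef(reigns[:-1], reigns[1:])[0, 1]

# Permutation test: shuffle the order of the reigns many times and see how
# often a correlation at least as negative as the observed one turns up.
rng = np.random.default_rng(1)
r_null = np.array([
    np.corrcoef(p[:-1], p[1:])[0, 1]
    for p in (rng.permutation(reigns) for _ in range(10_000))
])
p_value = np.mean(r_null <= r_obs)  # one-sided: long followed by short

print(f"lag-1 correlation = {r_obs:.3f}, permutation p = {p_value:.3f}")
```

Nothing fancy, but it is the difference between staring at a graphic and actually asking whether the pattern is stronger than you would expect by chance.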
Continue reading

Posted in Statistics | 15 Comments

A Brief Service Announcement

Sorry for not posting much, um, recently. This is partly because I’ve been busy, and also because I’ve been avoiding writing an obituary for The Beast, who died a few weeks before Christmas. In the words of Trillian, normal service will be resumed as soon as we work out what “normal” is.

Posted in Aaaaaagh | 6 Comments

Changing ecologists’ statistics to statistics about nature

Whilst my back was turned, I had another paper published online early. It’s rather embarrassing that I didn’t notice, because I’m an Executive Editor for the journal. The paper is, of course, superb (most of the work was done by Konstans, my co-author, not me). But it got me thinking a bit about some of the deeper issues.
Continue reading

Posted in Ecology, Statistics | 8 Comments

Prizes Straight from the Pits of Hell

This should make predicting this year’s science Nobels easier. Last month, Republican congressman Paul Broun, who also happens to be a member of the US House of Representatives science committee, described evolution, the big bang theory and embryology as ‘lies straight from the pit of hell’:

Today one of the Nobel committees got their revenge, giving the Physiology or Medicine prize to work on embryology. Ha, take that!

So, coming up this week: prizes for the Big Bang tomorrow, and chemical evolution on Wednesday. Perhaps Dawkins will get the literature prize on Thursday, just to really wind up the US politicians.

Who, then, for the Peace Prize? Any suggestions?

Posted in Silliness, Uncategorized | 8 Comments

Altmetrics: what’s the point?

A couple of weeks ago Stephen (of this parish) generated a lot of discussion when he complained about the journal impact factor (JIF). I must admit I feel a bit sorry for the JIF. It’s certainly not perfect, but it’s clear that a lot of problems aren’t with the statistic itself, but rather with the way it is used. Basically, people take it too seriously.

The JIF is used as a measure of the quality of science: it’s used to assess people and departments, not journals. And this can affect funding decisions, and hence people’s careers. If we are to use a numerical metric to make judgements about the quality of science being done, then we need to be sure that it is actually measuring quality. The complaint is that the JIF doesn’t do a good job of this.

But a strong argument can be made that we do need some sort of measure of quality. There are times when we can’t judge a person or a department by reading all of their papers: a few years ago I applied for a job for which there were over 600 applicants. Imagine trying to read 3 or 4 papers from each applicant to get an idea of how good they are: that would be over 2,000 papers.

Enter the altmetrics community. They are arguing that they can replace the JIF with better, alternative metrics. They are trying to develop these metrics using new sources of information that can be used to measure scientific worth: online (and hence easily available) sources like twitter and facebook (OK, and also Mendeley and Zotero, which make more sense).

Now, I have a few concerns about altmetrics: they seem to be concentrating on using data that are easily accessible and can be accumulated quickly, which suggests that they are interested in work that is quickly recognised as important. Ironically, one of the criticisms of the JIF is that it only has a two-year window, so it down-grades subjects (like the ones I work in) which have a longer attention span.
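For reference, and roughly speaking, the JIF for a journal in year Y is just a two-year citation average:

\[
\mathrm{JIF}_Y = \frac{\text{citations received in year } Y \text{ to items published in years } Y-1 \text{ and } Y-2}{\text{number of citable items published in years } Y-1 \text{ and } Y-2}
\]

so citations that only start arriving three or more years after publication never count towards it at all.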

But I also have a deeper concern, and one I haven’t seen discussed. It’s a problem that, if it is not solved, utterly undermines the altmetrics programme. It’s that we have no concrete idea what it is they are trying to measure.

The problem is that we want our metrics to capture some essence of the influence/impact/importance that a paper has on science, and also on the wider world. But what do we mean by “influence”? It’s a very vague concept, so how can we operationalise it? The JIF does this by assuming that influence = number of citations. This has some logic, although it limits the concept of influence a lot. It also assumes that all citations are equal, irrespective of the reason for citation or where the citing paper is published. But in reality these things probably matter: being cited in an important paper is worth more than being cited in a crappy paper that nobody is going to read.

But what about comparing a paper that has been cited once in a Very Important Paper to one that has been cited three times in more trivial works? Which one is more important? In other words, how do we weight the number of citations against the importance of where they are cited to measure influence?

I can’t see how we can even start to do this if we don’t have any operational definition of influence. Without that, how can we know whether any weighting is correct? Sure, we can produce some summary statistics, but if we don’t even know what we’re trying to measure, how can we begin to assess if we’re measuring it well?

I’ve sketched the problem in the context of citations, but it gets even worse when we look at altmetrics. How do we compare tweets, Facebook likes, Mendeley uploads etc? Are 20 tweets the same as one Mendeley upload? Again, how can we tell if we can’t even explicate what we are measuring?
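To make the problem concrete, here is a toy sketch (the papers, counts and weights are entirely invented, purely for illustration): two defensible-looking weighting schemes rank the same pair of papers in opposite orders.

```python
# Toy illustration only: the papers, counts, and weights are made up.
# Two plausible-looking weighting schemes give opposite rankings.

papers = {
    "Paper A": {"citations": 1, "tweets": 0,  "mendeley": 30},  # one citation, widely saved
    "Paper B": {"citations": 3, "tweets": 50, "mendeley": 2},   # more citations, lots of tweets
}

schemes = {
    "scheme 1": {"citations": 10.0, "tweets": 0.1, "mendeley": 1.0},
    "scheme 2": {"citations": 3.0,  "tweets": 0.5, "mendeley": 0.2},
}

for name, weights in schemes.items():
    scores = {
        paper: sum(weights[k] * counts[k] for k in weights)
        for paper, counts in papers.items()
    }
    ranking = sorted(scores, key=scores.get, reverse=True)
    print(name, {p: round(s, 1) for p, s in scores.items()}, "->", ranking)
```

Under the first scheme Paper A comes out on top; under the second, Paper B does. Neither set of weights is obviously wrong, and that is exactly the problem: without an operational definition of influence there is nothing to calibrate the weights against.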

If someone can explain how to do this, then great. But I’m sceptical that it’s even possible: I can’t see how to start. And if it isn’t possible, then what’s the point of developing altmetrics? Shouldn’t we just ditch all metrics and get on with judging scientific output more qualitatively?

Unfortunately, that’s not a realistic option: as I pointed out above, with the amount of science being done, we have to use some form of numerical summary, i.e. some sort of metric. So we’re stuck with the JIF or other metrics, and we can’t even decide if they’re any good.

Bugger.

Posted in The Society of Science, Uncategorized | 41 Comments

A new really old version of The Elements

I hope you’re all fans of Tom Lehrer (“Mr. Lehrer’s muse [is] not fettered by such inhibiting factors as taste.” – NYT, apparently). Well, via those Improbable Research people (“Mr. Abraham’s muse [is] not fettered by such inhibiting factors as taste.” – NYT, possibly) comes a new old version of Lehrer’s classic The Elements:

The lyrics are below the fold.
Continue reading

Posted in Uncategorized | 3 Comments

Research with impact

After Stephen’s posts about impact factors and the like, I have a couple of serious posts brewing. But for now (and because it’s Friday), I want to admit to my reaction today to an advert I got about a journal, which told me that I should Stand Out in my Field, and Be Visible by submitting to this journal (run by a reputable publisher). One of the reasons for publishing there was that the journal has High Impact: its impact factor is 1.95.

“Pah!” I thought. I’m executive editor of a journal with an impact factor over 5. In fact 5.093 (that 3 at the end is terribly important, no matter what anyone tells you). Why would I want to publish in such a lowly journal?

On the other hand, the journal has published a paper about TARDIGRADES IN SPACE. That beats impact factors any day.

Posted in Science Publishing, Silliness | 9 Comments