I gave a talk last night at the Royal College of Physicians, in the Research Information Network's series on Research in transition. The inestimable (and believe me, I’ve tried estimating him) Stephen Curry was there too; it was a veritable surfeit of ginger ninjas. The subject was quality assurance in the literature, and how we might perform QA in the web 2.0 world. I tackled the subject of motivation—how we might persuade busy researchers to get involved in post-publication literature assessment. Theo Bloom and Tristram Hooley were also there, talking about PLoS and using networks, respectively. Stephen tackled the whole problem from a user point of view.
Stephen was the only one with slides, so I can’t post any; instead, here are the notes I spoke from. It should be noted that this is by no means a transcript.
Academic publishing—by which I mean publishing of research results and papers—is a strange beast.
It must be the only industry where the producers of the content actually pay for people to use, or access, their product. In a way, it’s a little like advertising, but in advertising you spend money in the hope that you will get more back when people buy your product.
A research author, on the other hand, spends money on research, spends money on getting the results of that research published, and has to pay again to access the research of others.
And most of us think this is normal.
I’m quite aware this could turn into a rant against the publishing industry, but I don’t want to go there. I’m more interested in this attitude of most researchers—what motivates them— and what that means for this kind of post-publication peer review kind of thing.
There is quite a post-publication commenting community already. Some of this is reasonably formalized—such as on researchblogging.org, or even the basis of a business model—Faculty of 1000, obviously. And as we’ve seen with the arsenic story (I’m assuming you all know about Rosie Redfield’s blog by now), there’s a huge informal network willing to make informed comment, often in great depth, using any kind of media available.
But this raises a couple of questions for us. The first, as a consumer of this kind of discussion, is “How do I trust this person, this comment?” Is it informed? The arsenic life story is a case in point—why should I trust Rosie’s analysis, for example? I’m not going to talk any more about matters of trust, because I believe Tristram is going to address that.
No, I’m concentrating on the second question, the one I’ve just mentioned, which, from a producer’s point of view, is “Why should I comment?” Or if you like, “What is my motivation?”
I started off talking about money because it’s quite obvious that people don’t do this for money, and actually I don’t think there’s much expectation from bloggers and our Faculty members that they should get paid.
Yes, I’d quite like to be able to pay our Faculty Members, but if we’re not going to insult them we’d have to make it a decent sum, at least fifty quid per evaluation say—and we have 10,000 members, producing (at the moment) up to 1500 evaluations each month. You do the maths: that business model is not sustainable. Yes, we give away sweatshirts and mugs and laser pointers and whatnot to Faculty Members, but they’re thank-you presents, they’re never pitched as payment (because it would be insulting to imply they were).
Even in those blog networks that pay non-staff writers—ScienceBlogs.com, the Guardian, I think that’s it—I’m not convinced that the money is why these people do it. Outside of the current day job I’ve certainly never received money for writing about other people’s science.
I guess it’s formally possible that people do it for fun. Here’s a selection of quotes that I found on Cesar Sanchez’s blog, from peer reviewers:
- This paper is desperate. Please reject it completely and then block the author’s email ID so they can’t use the online system in future.
- The biggest problem with this manuscript, which has nearly sucked the will to live out of me, is the terrible writing style.
- The writing and data presentation are so bad that I had to leave work and go home early and then spend time to wonder what life is about.
- Done! Difficult task, I don’t wish to think about constipation and faecal flora during my holidays!
- The peaceful atmosphere between Christmas and New Year was transiently disrupted by reading this manuscript.
- The trees are crap but, besides this, excellent work.
- The writing style is flowery and has an air of Oscar Wilde about it.
- The finding is not novel and the solution induces despair.
More seriously, I think that some people will gladly criticize published papers because there is something wrong with the science, and they feel a need to bring this into the open. I’m assuming that’s what was behind Rosie Redfield’s analysis of the arsenic paper. The other side of that coin is probably a desire to bring good research to a wider audience—but these, I think, are personal motivations that can’t answer the more general question: “How are you going to motivate people to write about published research in a reasonable, critical, even-handed way?”
We can’t rely on benevolent disinterest.
We are looking for a form of quid pro quo, essentially. It’s something we at F1000 have been thinking about for some time. What we’ve found is that it comes down to reputation, and “impact credits”, if I might coin a phrase. Scientists are busy trying to do research, with the ultimate aim of publishing, I am loath to admit, in a “high impact” journal, so that they can get the next grant, so that they can do research, with the ultimate aim of publishing in a high impact—you get the picture.
Anything that takes away from this has to be weighed carefully. And for many, perhaps most people, it’s just not worth it. Maybe thinking about the problem in terms of an effort-to-reward ratio would be useful, and perhaps we can talk about that later. Especially in respect of the more senior people who would give credibility to any exercise of this sort.
At F1000 we worked very hard to recruit a critical mass of senior researchers—scientists and clinicians—who lend the exercise legitimacy. A lot of our day-to-day effort is spent in talking to these people, recruiting more, and of course getting them to write evaluations. Because of who we chose to make recruiting decisions—essentially already respected members of the scientific community—there is the cachet of being recognized by your peer group, simply on the basis of being a member of Faculty of 1000.
So getting people signed up is relatively easy—most people are honoured by it, and the authors of evaluated papers I talk to are very pleased to get into F1000. Some institutions will even take F1000-evaluated articles into account when recruiting. Which is very gratifying for us, but getting people to actually write is a tad more difficult.
What we’re hearing, more and more, is that this very question of impact credits is what motivates people. We’re looking at getting our evaluations indexed, we want them to be citable—especially the longer ones that actually contextualize the science. I think when authors are comfortable putting F1000 evaluations, maybe even putting blog posts and friendfeed threads, in their publication records, then we might be on to something.
That’s what they care about.