A contentious paper came out towards the end of last year in the Journal of Medical Internet Research. That is a reasonably respectable title in its niche field and the author, Gunther Eysenbach, is a respected medical informaticist and e-health guru.
At first I loved the paper for its wonderfully silly and memorable neologisms. It talks of “tweetations” and “twimpact factors”:
- a tweetation is “a citation in a tweet (mentioning a journal article URL)”
- the twimpact factor is “the cumulative number of tweetations 7 days after publication of the article”
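To make the definitions concrete, here’s a minimal sketch of how one might compute a twimpact factor from a list of timestamped tweets. The data structures and example URL are made up for illustration; nothing here comes from the paper itself:

```python
from datetime import datetime, timedelta

def twimpact_factor(tweets, article_url, published):
    """Count tweetations (tweets mentioning the article URL)
    within 7 days of publication, per Eysenbach's definition."""
    cutoff = published + timedelta(days=7)
    return sum(1 for when, text in tweets
               if article_url in text and published <= when <= cutoff)

# Hypothetical example data:
pub = datetime(2011, 12, 1)
tweets = [(datetime(2011, 12, 2), "Neat paper! http://jmir.org/2011/4/e123"),
          (datetime(2011, 12, 20), "Old news: http://jmir.org/2011/4/e123")]
print(twimpact_factor(tweets, "http://jmir.org/2011/4/e123", pub))  # -> 1
```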
But disillusion quickly set in. The paper reports an analysis of the effect that tweeting about a research paper has on the subsequent number of citations to that paper. It concludes that:
Tweets can predict highly cited articles within the first 3 days of article publication. Social media activity either increases citations or reflects the underlying qualities of the article that also predict citations, but the true use of these metrics is to measure the distinct concept of social impact.
Phil Davis, at The Scholarly Kitchen blog, points out:
The main message of the paper is that highly tweeted articles were 11 times more likely to be highly cited, a result that makes a great 140 character headline but needs much more context for interpretation.
My main concern is that the group of articles studied consists entirely of articles published in the Journal of Medical Internet Research – 55 articles published in 2009 and 2010. Hence the results may not reflect the reality of broader biomedical research publishing. To be fair, Eysenbach acknowledges this:
as a journal about the Internet and social media [JMIR] has a sophisticated readership that is generally ahead of the curve in adopting Web 2.0 tools. However, this also limits the generalizability of these results: what works for this journal may not work for other journals, in particular journals that … do not have an active Twitter user base. …Journals that publish non-Internet-related articles have probably far lower tweetation rates per article, and it is also less likely that people tweet about articles that are not open access.
Davis has other reservations, noting that there is a highly skewed distribution of tweets and citations and that many of the tweets studied were sent out automatically. The comments thread to his post has other trenchant criticism of the paper and a defence from Eysenbach.
Davis also questions the ethics of including citations to all 55 of the studied papers in the reference list. This has now changed – JMIR has issued a correction of the original article, removing the offending citations to the dataset articles and replacing them with a list in an appendix.
The main point Eysenbach makes seems to be that “buzz in the blogosphere is measurable”, and his definitions of tweetation etc. may be helpful to future studies of the broader literature, even if the actual numbers he has generated are not generalisable.
One of my department’s papers once broke into the top 10 Canadian trending topics on Twitter, and it has more citations than anything else we’ve published recently. QED?
Seriously though, it would be great to see the analysis extended to more traditional research papers. I think there’d still be a trend, as some papers certainly do see a lot of Twitter action when they’re first released (at least within my little circle), but maybe that’s just observation bias.
Twitter, blogs, and other social media act as advertising for articles. The scientific literature is published much faster than anyone can read it – even scanning all the relevant tables of contents exceeds what a researcher can manage – so alternative ways of attracting attention are likely to help an article get read and cited.
Which way does the causation flow, though? Cath – was your paper cited because it was tweeted, or was it tweeted because it was good (and therefore cited later on)?
Will tweeting about a not-very-good article increase its chances of being cited? I suspect not.
Ah, but citation is not the same thing as approval.
“Will tweeting about a not-very-good article increase its chances of being cited?”
We could test that, you know… choose ten not-so-good papers, all tweet about five of them chosen at random, and see how they look in a couple of years. Or a larger number. Stats aren’t my strong suit, but with all the stats textbook quotes Bob’s peppering all over my blog, maybe I’ll learn enough to do the power calculation!
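For what it’s worth, here’s a rough sketch of what that power calculation might look like in Python, using statsmodels. The effect size and standard deviation are pure guesses for the sake of illustration, not numbers taken from the paper:

```python
# Back-of-the-envelope power calculation for the proposed experiment:
# tweet about half of a set of comparable papers, then compare mean
# citation counts between the two groups with a two-sample t-test.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Guess: tweeting adds ~5 citations against a between-paper standard
# deviation of ~10, giving Cohen's d = 5/10 = 0.5 (a "medium" effect).
effect_size = 0.5

# Papers needed per group for 80% power at the usual 5% alpha.
n_per_group = analysis.solve_power(effect_size=effect_size,
                                   alpha=0.05, power=0.8,
                                   alternative='two-sided')
print(f"~{n_per_group:.0f} papers per group needed")  # roughly 64

# The power actually achieved with only 5 papers per group:
power_at_5 = analysis.solve_power(effect_size=effect_size,
                                  nobs1=5, alpha=0.05,
                                  alternative='two-sided')
print(f"power with 5 per group: {power_at_5:.2f}")  # around 0.1
```

So the five-versus-five version would be badly underpowered, and since citation counts are skewed count data, a real design would probably want many more papers and something like a negative binomial model rather than a t-test.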
Agreed, Henry. These kinds of studies need so many caveats that I wonder if there is really any point. It’s a bit like cooking an incredibly elaborate stew with many ingredients and tricky procedures, but using leather instead of meat. End result, despite all the trouble taken: inedible.
I’m sure it would have been cited a lot anyway (it was a cancer genomics “first” in a top tier journal), but I think the Twitter activity was driven largely by the attendant press conference and media hoopla. The story was first item on the CBC national news, and on the front page of the Globe and Mail (a national paper); I’m sure this MSM exposure reached many more people than the Twitter buzz!
What jumped out at me most is the assertion that JMIR’s readership is “ahead of the curve”, yet referencing “Web 2.0” which is a term that I never, ever hear anyone use any more.
What that might or might not mean is anyone’s guess.
Did you tweet this post, BTW?