Altmetrics: what’s the point?

A couple of weeks ago Stephen (of this parish) generated a lot of discussion when he complained about the journal impact factor (JIF). I must admit I feel a bit sorry for the JIF. It’s certainly not perfect, but it’s clear that a lot of problems aren’t with the statistic itself, but rather with the way it is used. Basically, people take it too seriously.

The JIF is used as a measure of the quality of science: it’s used to assess people and departments, not journals. And this can affect funding decisions, and hence people’s careers. If we are to use a numerical metric to make judgements about the quality of science being done, then we need to be sure that it is actually measuring quality. The complaint is that the JIF doesn’t do a good job of this.

But a strong argument can be made that we do need some sort of measure of quality. There are times when we can’t judge a person or a department by reading all of their papers: a few years ago I applied for a job for which there were over 600 applicants. Imagine trying to read 3 or 4 papers from each applicant to get an idea of how good they are.

Enter the altmetrics community. They are arguing that they can replace the JIF with better, alternative metrics. They are trying to develop these metrics using new sources of information that can be used to measure scientific worth: online (and hence easily available) sources like Twitter and Facebook (OK, and also Mendeley and Zotero, which make more sense).

Now, I have a few concerns about altmetrics: they seem to be concentrating on using data that is easily accessible and which can be accumulated quickly, which suggests that they are interested in work which is quickly recognised as important. Ironically, one of the criticisms of the JIF is that it only has a 2-year window, so it down-grades subjects (like the ones I work in) which have a longer attention span.

But I also have a deeper concern, and one I haven’t seen discussed. It’s a problem that, if it is not solved, utterly undermines the altmetrics programme. It’s that we have no concrete idea what it is they are trying to measure.

The problem is that we want our metrics to capture some essence of the influence/impact/importance that a paper has on science, and also on the wider world. But what do we mean by “influence”? It’s a very vague concept, so how can we operationalise it? The JIF does this by assuming that influence = number of citations. This has some logic, although it limits the concept of influence a lot. It also assumes that all citations are equal, irrespective of the reason for citation or where the citing paper is published. But in reality these things probably matter: being cited in an important paper is worth more than being cited in a crappy paper that nobody is going to read.

But what about comparing a paper that has been cited once in a Very Important Paper to one that has been cited three times in more trivial works? Which one is more important? In other words, how do we weight the number of citations against the importance of where they are cited to measure influence?

I can’t see how we can even start to do this if we don’t have any operational definition of influence. Without that, how can we know whether any weighting is correct? Sure, we can produce some summary statistics, but if we don’t even know what we’re trying to measure, how can we begin to assess if we’re measuring it well?

I’ve sketched the problem in the context of citations, but it gets even worse when we look at altmetrics. How do we compare tweets, Facebook likes, Mendeley uploads etc? Are 20 tweets the same as one Mendeley upload? Again, how can we tell if we can’t even explicate what we are measuring?
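
To make this concrete, here’s a minimal sketch of the weighting problem (in Python; the weights are invented purely for illustration, which is precisely the problem: nothing tells us what they should be):

```python
# Toy illustration: any composite "altmetric score" needs exchange rates
# between unlike events. The weights below are arbitrary placeholders.
ARBITRARY_WEIGHTS = {"tweets": 1.0, "facebook_likes": 2.0, "mendeley_readers": 20.0}

def composite_score(counts, weights=ARBITRARY_WEIGHTS):
    """Weighted sum of event counts for one paper."""
    return sum(weights.get(source, 0.0) * n for source, n in counts.items())

paper_a = {"tweets": 20, "facebook_likes": 5}   # much tweeted
paper_b = {"mendeley_readers": 3}               # quietly bookmarked by researchers

# Which paper "wins" depends entirely on the chosen weights; without an
# operational definition of influence there is no way to check them.
print(composite_score(paper_a), composite_score(paper_b))
```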

If someone can explain how to do this, then great. But I’m sceptical that it’s even possible: I can’t see how to start. And if it isn’t possible, then what’s the point of developing altmetrics? Shouldn’t we just ditch all metrics and get on with judging scientific output more qualitatively?

Unfortunately, that’s not a realistic option: as I pointed out above, with the amount of science being done, we have to use some form of numerical summary, i.e. some sort of metric. So we’re stuck with the JIF or other metrics, and we can’t even decide if they’re any good.

Bugger.


41 Responses to Altmetrics: what’s the point?

  1. Neil says:

    Again, it’s a journal-level metric rather than article-level, but have you seen the eigenfactor stuff (http://www.eigenfactor.org/index.php)? That considers the ‘are all citations equal?’ question – and it uses a 5-year window too.

    • Bob O'H says:

      It’s part of the reason I wrote this post – it re-weights citations, but how do we know what it’s doing is correct (or even reasonable, or reasonably correct)? The underlying idea is about information flow, so it might be one approach. But the paper describing it puts the method in the context of efficiently describing a network, so how does this map onto a definition of influence? It might implicitly define influence, but I don’t think this is how to do things: otherwise we could define influence by the JIF, and not worry about any of the problems!

  2. I think you’re missing the point here.

    The philosophy of altmetrics, as I understand it, is to diversify the ways in which impact is measured, including even qualitative assessments. In particular, it is to move away from such extremely narrow and loose proxies for impact as the Journal Impact Factor (one cannot stress the word *journal* too much there; it bears little relevance to the impact of individual articles!).

    No single measure within the altmetrics suite could be considered, on its own, to be THE proxy for impact. Each alternative measure indicates a different kind of impact.

    Some measures like Facebook ‘likes’ and Google +1’s are clearly more likely to be measures of popularity and/or outreach, rather than measures of excellent scientific contribution. But this doesn’t mean they should be completely castigated – it just requires a nuanced understanding of what each component of the altmetrics suite is actually a reliable proxy for.

    Citeulike & Mendeley bookmarking stats are likely to indicate the article is widely read by academics. Other stats may indicate other facets of the broad spectrum of desirable impact outcomes.

    One of the strongest, most unobjectionable ‘altmetric’ measures, which IMO scarcely deserves to be called altmetric (it’s so bleedingly obvious that it’s weird to consider it ‘alternative’ in any way), is the Article Impact Factor, aka the number of citations to that individual article. We clearly have readily implementable web technology to track a large variety of altmetrics; we just need the publishers to allow them to be implemented.

    “But what about comparing a paper that has been cited once in a Very Important Paper to one that has been cited three times in more trivial works. Which one is more important? In other words, how do we weight number of citations against importance of where they are cited to measure influence?”

    If publishers released Open Bibliographic Data, we could actually calculate this. If your paper was cited by a paper which was in turn cited by, say, more than 50 papers, one could assign a higher weighting to that citation, relative to a citation from a less-cited paper. It is *only* certain publishers that are blocking this from happening.
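
    A minimal sketch of that weighting rule (the threshold and the weights here are illustrative assumptions, not a published scheme):

    ```python
    # Sketch of the rule above: weight each citation your paper receives by how
    # often the citing paper is itself cited. Threshold and weights are invented.
    def weighted_citation_count(citing_paper_citation_counts, threshold=50,
                                heavy=2.0, light=1.0):
        """citing_paper_citation_counts: citation counts of the papers citing yours."""
        return sum(heavy if c > threshold else light
                   for c in citing_paper_citation_counts)

    # One citation from a heavily cited paper vs. three from little-cited papers:
    print(weighted_citation_count([120]))      # -> 2.0
    print(weighted_citation_count([1, 0, 3]))  # -> 3.0; the ranking hinges on the weights
    ```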

    I’m certainly not against measuring impact qualitatively. This is important too, as you rightly mention. But I think qualitative assessments can be blended with a broad suite of altmetrics to encapsulate a wider measurement of total impact. They are not in opposition to each other. Thus I think it’s wrong to ditch all metrics. They should be retained, but not overemphasized or over-interpreted, within a context of broad measurement of impact.

    • Bob O'H says:

      The philosophy of altmetrics, as I understand it, is to diversify the ways in which impact is measured, including even qualitative assessments.

      I’m aware of that, and almost wrote a paragraph about it, but thought it might deflect from my main point.

      I agree that influence is multidimensional. But what are these dimensions? Basically, I can use exactly the same argument, but with an added curse of dimensionality. There are now p qualities that have to be defined operationally, and if you can’t define any of these, then you’re just producing lots of dubious numbers rather than just one.

  3. Altmetrics based on social media will tend to reflect the size and connectivity of one’s social network, not just importance. Activity on social media might well translate through to higher citation. But I’m not convinced that altmetrics will correlate closely with quality. I agree that we need to define what we mean by quality and go from there. Starting with citations, or variants of it, is a short-cut to quality that might lead us astray.

    • “Starting with citations, or variants of it, is a short-cut to quality that might lead us astray”

      But that’s exactly what the Journal Impact Factor (JIF) relies upon – citations to the journal – and that is the current ‘norm’ metric, which is why a suite of altmetrics might be better.

      Evaluation panels clearly need some form of metric, on a pragmatic basis, to sort through huge numbers of competing applicants, so we need to find *better* metrics than the JIF, which is supposedly (and sadly) the most commonly used metric for evaluating the performance of academics, although this recent Nature piece does provide some evidence to the contrary: http://www.nature.com/nature/journal/v489/n7415/full/489177a.html

      “Altmetrics based on social media will tend to reflect the size and connectivity of one’s social network, not just importance. Activity on social media might well translate through to higher citation.” I agree

      “But I’m not convinced that altmetrics will correlate closely with quality.”

      Here I disagree: ‘altmetrics’ or article-level metrics, like the total number of citations to the article adjusted for the number of years it has been available, are likely to be a good indicator of quality in many fields, e.g. biology (though perhaps not all; I have heard there are many excellent papers in maths that, whilst technically excellent and very high quality, are rarely cited).
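
      For concreteness, “adjusted for the number of years it has been available” usually amounts to something like citations per year; a naive sketch (field normalisation is a separate, harder question):

      ```python
      # Naive age adjustment: total citations divided by years since publication.
      from datetime import date

      def citations_per_year(total_citations, publication_year, current_year=None):
          years = max(1, (current_year or date.today().year) - publication_year)
          return total_citations / years

      print(citations_per_year(30, 2008, current_year=2012))  # 7.5 citations per year
      ```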

      • Bob O'H says:

        How do you know that (adjusted) number of citations is “a good indicator of quality”? How do you make the comparison?

          • May I suggest an anecdotal experiment: in the same biological* journal, read 5 papers published in 2010 that have not been cited. Now read 5 papers in that same journal, also published in 2010, that have more than, say, 10 citations.

          I *think* you’ll find, based upon such a qualitative, personal assessment, that in any such survey the set of highly cited papers will be more likely to be innovative, groundbreaking, or (to be fair, the one potential but known downside of citation-counting) …controversial.

          Ergo citations in some fields, particularly biology, can be a valid proxy for good-quality science (depending on how you define quality), with the exception of controversial papers, which can also get moderately highly cited through controversy alone rather than scientific quality. If this bias is known, it can be checked for. I don’t support ‘blind faith’ in metrics, just judicious, intelligent usage of them supported by additional assessments.

          Furthermore, along with Open Bibliographic Data & Open Access to the full text of articles, we also have the technology to apply NLP techniques such as sentiment analysis to determine in an automated way (with good precision and recall) whether a given citation was positive, neutral or negative about the cited paper. This is very futuristic, and I don’t see it being systematically applied any time soon (I would be thrilled if it was!), but it would nicely disentangle citations to very bad papers (cited because they are bad) from ‘good’ citations to good papers, in much the same way that some metrics exclude self-citations to correct citation rates.
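
          As a rough sketch of what such citation-sentiment tagging might look like, here is a general-purpose lexicon model (VADER via NLTK) standing in for a classifier actually trained on citation contexts; treat it purely as an illustration of the idea:

          ```python
          # Illustrative only: a general-purpose sentiment model applied to citation
          # contexts. A serious system would be trained on citation sentences.
          import nltk
          from nltk.sentiment import SentimentIntensityAnalyzer

          nltk.download("vader_lexicon", quiet=True)
          sia = SentimentIntensityAnalyzer()

          citation_contexts = [
              "Smith et al. (2010) provide an elegant and convincing analysis.",
              "Our results contradict the flawed analysis of Smith et al. (2010).",
          ]
          for sentence in citation_contexts:
              score = sia.polarity_scores(sentence)["compound"]  # -1 (negative) to +1 (positive)
              label = "positive" if score > 0.05 else "negative" if score < -0.05 else "neutral"
              print(f"{label:8s} {score:+.2f}  {sentence}")
          ```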

          * I’m familiar with this field, therefore I suggest biological rather than mathematical or chemical. I’m more confident you’ll see the difference yourself there.

          • Bob O'H says:

            OK, so your measure of quality is subjective, and the definition ostensive. That doesn’t help much, does it?

          • Alan Haynes says:

            I see another issue here – just because a paper hasn’t been cited doesn’t mean it’s not high quality. Perhaps it’s simply in a field/subfield that comparatively fewer people are working in. It doesn’t mean that the science or the paper itself is low quality… does it?

  4. Euan says:

    The good thing about alt-metrics getting lots of attention is that it means clever people read up on the subject, think about it and then ask pertinent questions, like yours. 😉

    The downside is that it’s easy to fall into the trap of believing that the field is more mature than it really is. The other trap is believing that one’s own idea of what alt-metrics means is the same as everybody else’s, so to be clear: everything after this is only my personal view…

    In my opinion the principles of alt-metrics are:

    * Rather than journal level, look at the article level – or whatever other level is appropriate
    * Measure different kinds of impact – you pick what those kinds are (the hope is that community driven standards will emerge here)
    * Use whatever data is appropriate to the impact you want to measure
    * Try to make qualitative as well as quantitative judgements easier

    The easy bit – hopefully – is agreeing that this stuff is a good idea in principle.

    The second bullet has a big “more research required” hanging over it, as you pointed out. We’ve only just gotten to the point where aggregated datasets are big enough to do this, so there should be lots happening over the next twelve months.

    That said you can bite off a smaller piece of the problem and come up with satisfactory answers. Altmetric (which I work on, so obv. I’m biased) tries to measure attention (!= quality) and then give users the data to work out how much of that attention represents the sort of impact they care about (if a bot tweets in a firehose and nobody is around to read it, does it have an impact? That sounded better in my head).

    Is a single number representing “attention” all you need to measure impact? No. Is it a replacement for the JIF or citation counts? Definitely not. Is it a useful metric we didn’t have before? Yes, for many use cases.

    > It’s that we have no concrete idea what it is they are trying to measure

    … so in sort-of answer to this: it’s not that there are no concrete ideas, it’s more that alt-metrics in general isn’t focused on answering any one question in particular.

    Altmetric is focused on one particular area of metrics, PLoS ALM another, Impact Story yet another (even if there’s lots of overlap). They’re all at varying but early stages of development and without a doubt need yet more data, more analysis and critical thinking.

    > they seem to be concentrating on using data that is easily accessible and
    > which can be accumulated quickly, which suggests that they are interested
    > in work which is quickly recognised as important

    Great point. IMHO alt-metrics shouldn’t be wedded to any one type of data source – it’s just that social media / reference manager data is what’s most readily available right now.

    • Euan says:

      I should’ve checked the comments again before posting – +1 to what Ross says above.

    • Bob O'H says:

      Thanks for your reply. Your point about altmetrics not being a mature subject is well made. In population genetics/molecular ecology I’ve seen the same thing happen several times (starting with RAPDs): a new method is found, and people rush to use it, and publish “Using TLAs to study My Organisms” in high impact journals. After a couple of years people start to realise that the method isn’t the best thing since sliced bread, and that there are problems to be overcome. Only after that does the real work start.

      Altmetrics looks like it’s still in the first phase: everyone’s terribly excited and doing lots of stuff. At some point it’ll have to reach the more critical phase.

  5. DrugMonkey says:

    This is really very simple. The correct alternative measure of article quality is the one that makes *my* papers look the best.

  6. “OK, so your measure of quality is subjective, and the definition ostensive. That doesn’t help much, does it?”

    My last comment for now:

    You’re framing the usage of altmetrics around ‘quality’ (which I sense may be slightly different from measurable impact: academic, economic, social…). Quality in terms of research IS subjective. I don’t believe there is a completely, 100% objective way to measure the quality of research. Research ‘quality’ itself is therefore a subjective term. That isn’t a problem of altmetrics.

    • Bob O'H says:

      Michael was the one who introduced quality, not me! I agree that it’s not a problem of altmetrics, but it’s one it’ll have to face, unless someone can show how to quantify academic or social impact.

  7. Pep Pàmies says:

    Altmetrics (tweets, likes, blog posts, press releases, …) measure above all newsworthiness and quick, public appeal. I would be surprised if they end up having any significant correlation across fields with metrics based on long-term citations.

    Altmetrics count not just “tweets, likes, blog posts, press releases, …”. I think the focus on these is unfair.

      One of the strongest altmetrics is counting citations to individual articles – “Article Level Metrics” in PLoS terminology/implementation. Do citations only count newsworthiness and quick, public appeal? I think not.

      Don’t throw the baby out with the bathwater with respect to altmetrics – they are useful!

      • Pep Pàmies says:

        Quick measures of newsworthiness and public appeal are useful, of course. But they should not be interpreted as a proxy for future citations.

        Article-level metrics have existed for decades; it’s good that there is now better access to them.

        • Mr. Gunn says:

          Actually, in some cases they are in fact proxies. Eysenbach showed that for a subset of the literature, one which you might argue is more amenable to twitter-based influence, tweets do in fact correlate with later citation: http://www.jmir.org/2011/4/e123/

          Furthermore, Bar-Ilan, Priem, and others have shown that Mendeley readership is quite well correlated with citation:
          http://altmetrics.org/altmetrics12/bar-ilan/
          http://altmetrics.org/altmetrics12/priem/

          So yes, the field of altmetrics is new and developing, but the evidence is accumulating that it is a valuable indicator of more and different kinds of impact. Each community has to define the metrics that are appropriate to their application at the time.
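
          For what it’s worth, the kind of check behind these studies is easy to run once you have paired per-article counts; a sketch with invented numbers (the linked studies use real corpora):

          ```python
          # Rank correlation between an altmetric count and later citations, per
          # article. The numbers below are made up for illustration.
          from scipy.stats import spearmanr

          mendeley_readers = [3, 10, 0, 25, 7, 1, 40, 5]
          citations_2yr    = [1,  6, 0, 14, 3, 0, 22, 4]

          rho, p_value = spearmanr(mendeley_readers, citations_2yr)
          print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
          # A high rho shows the two counts co-vary; it does not, by itself, say
          # whether either one measures "impact".
          ```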

          • Bob O'H says:

            What do you mean by “impacts”? Do you just mean other statistics, like citations, or something else?

          • Pep Pàmies says:

            Tweets can of course loosely correlate with citations, but metrics based on tweets are far from having predictive power. As the authors of the study you linked to acknowledge, they measure different things:

            “Tweetations should be primarily seen as a metric for social impact and knowledge translation (how quickly new knowledge is taken up by the public) as well as a metric to measure public interest in a specific topic (what the public is paying attention to), while citations are primarily a metric for scholarly impact. Both are somewhat correlated, as shown here, but tweetations and citations measure different concepts, and measure uptake by or interest of different audiences (Figure 12).”

            Similarly, the authors of the studies on Mendeley readership acknowledge that

            “The significance strength is similar to what was reported in previous studies, and seems to indicate that readership counts measure something different from citations.”

            and

            “Articles cluster in ways that suggest five different impact “flavors,” capturing impacts of different types on different audiences; for instance, some articles may be heavily read and saved by scholars but seldom cited. Together, these findings encourage more research into altmetrics as complements to traditional citation measures.”

            I would be surprised if metrics based on first impressions or quick reading ever become a substitute for citation-based metrics.

            Metrics for media attention will be useful, but we need common metrics that correlate with future citations with sufficient accuracy. Field-specific metrics can be useful, but this ever more open and more international world needs a common cross-field, cross-country metric language that anyone can speak.

          • Mr. Gunn says:

            Bob, by impact I mean that a bit of work has affected or influenced or otherwise shaped further studies or other academic outputs such as code, presentations, tools, etc. I’m using impact as the umbrella term for all the different ways in which a bit of research can be re-used.

            Pep – the part of your statement I agree with is that different metrics measure different things, as I said before. The part I disagree with is where you seem to equate altmetrics with “metrics based on first impressions or quick reading”. Of course the authors of the studies I cited aren’t going to suggest that tweets can replace citations. It’s way too early to begin to suggest something like that. But they do say there are correlations, and they do make a really good case for different metrics reflecting different kinds of impacts. Perhaps this is where you were going with your suggestion of a common “metric language” as opposed to one single common metric.

  8. Jenny Delasalle says:

    Interesting question about the relationship between citations and quality. There are some academic studies correlating the citation scores of journal articles with their RAE rankings, most notably one from Loughborough: https://dspace.lboro.ac.uk/dspace-jspui/handle/2134/293

    It was after these kinds of studies were published that proposals were made to include citation metrics in the information to be considered by REF panels. All of the various units of assessment which are to use citation metrics (mostly STEM subjects) are using measures from Elsevier’s data. The exception is Computer Science, which, interestingly, is also looking at Google Scholar’s citation data. In any case, the emphasis is still very much on peer review. The point is that if measures are to be used, they will be used by those who understand them and the context of the research work.

    Meanwhile, the classic 2 year Journal Impact Factor that you mention is based on Thomson Reuters’ data.

    The basic difference in their data sets is that TR has citations dating further back in time (no relevance to REF but might impact on your h-index), whilst Elsevier covers more journal titles. Elsevier claim to cover the same titles that TR cover but I’m not convinced. (I’m a Librarian: I take some convincing!) And Google Scholar’s data is scraped by machines and not checked for quality in the same way as either TR or Elsevier’s data but it does include citations from books and grey literature that are published on the web.

    Meanwhile, you’re right to question what it is that can be measured by Altmetrics. Personally, I think they are useful at the level of the individual: if I blog or tweet about my paper in the repository, can I see more hits to that paper as a result? Is that because of re-tweets or the number of followers that I have? Has my recent symposium resulted in greater use of the hashtag for my topic on twitter?

  9. Jeremy Fox says:

    I’m with you Bob: “impact”, “influence”, “quality”, whatever, is not only multidimensional, but those dimensions are ill-defined, aren’t well-suited for being precisely operationalized, and aren’t and can’t be captured by any existing or likely-to-exist altmetrics. So like you, I have no idea how to interpret altmetrics, and no idea how to figure that out. And even if I did know how to interpret them I’d just find that they were only rough-and-ready measures of a couple of the many dimensions of “impact” or “influence” or “quality” or whatever, and further that they were mostly just telling me what I already knew.

    I proudly count myself as a very quantitative guy, but there are plenty of things that we care about in life and that we might like to summarize in a few numbers that just can’t be summarized in a few numbers (or even many numbers).

    • Dr. Gunn says:

      It’s true, many things in life can’t be summarized in a single number, and research is one of them. Since we’re currently summarizing research in a single number, any change to the status quo would be desirable, from my POV. Maybe it’s trading one master for another, but at least in the case of altmetrics, there are multiple metrics (including the traditional ones like citations) and the data would be open and available for all, which is most certainly not the case today.

      If the choice were altmetrics OR impact factor, proponents of altmetrics would have a much heavier burden to show what they actually measure, but that’s not what they’re saying. They’re saying, “Why don’t we look at more dimensions than just counting citations?”

      • Bob O'H says:

        I find that a curious argument – if we produce one steaming pile of shit, we have to justify it. If we produce 20, it’s enough to say “look, we have 20 piles to choose from”.

        You’re exemplifying part of the reason I wrote this post – my impression was that there wasn’t a lot of introspection in the altmetrics community, asking if what they were doing was actually any better. I was half-expecting to be told that I hadn’t read enough, and that actually the community had been thinking about this. It’s a bit depressing that instead I’m given the impression that the community doesn’t even care.

        I think this is a shame. Partly because I think it’s a waste of the community’s time and effort if it’s not even trying to filter out the ideas which don’t work. But also because I suspect one of two things will happen: either altmetrics will be almost totally ignored by the community, because it is just producing a confusing array of statistics that nobody understands, or one statistic will be taken and used, regardless of whether it’s at all meaningful (this is essentially the situation we’re in now, with the JIF and the h-index). Either way, I’m not sure we gain.

      • Jeremy Fox says:

        In what ways precisely IS research currently summarized in a single number, namely impact factor? Search committees at my university don’t routinely look at impact factors when making decisions on who to interview. My head of department, dean, and university promotion committee at my quite large university do not routinely look at impact factors when making decisions on who to hire, tenure, or promote. Granting agencies in Canada don’t ask for, and applicants mostly don’t choose to provide, impact factor data on grant applications. I’m sure the situation varies from institution to institution and country to country. But the notion that we *have* to choose *some* number or numbers in order to decide hiring, tenure, promotion, and funding just isn’t true. As evidenced by the fact that many, many such decisions, by many, many people and organizations, are *not* based on impact factor (or h-index, or whatever altmetrics you care to name).

  10. Mr. Gunn says:

    I cited several examples of research in an earlier comment, from a conference where exactly these sorts of things were discussed. There’s another such meeting this weekend.

    Would it help for you to be told that you’re just not paying attention?

    • Bob O'H says:

      (a) that was over a month ago – I must admit I had forgotten and hadn’t re-read up-thread.
      (b) now I look back, I see you defined “impact” in an incredibly broad way. OK, fine at one level, but it comes back to my main point: how do you operationalise that? All your citations do is show that some metrics are correlated, which is not surprising (but still good to know). But it still does nothing to say that the metrics are useful, unless you can show that what they are being compared to is useful, and that the correlation is high enough.

      None of those papers answer the main question I was asking – how do we define impact/quality in an operational way?

      • Jason Priem says:

        how do we define impact/quality in an operational way?

        I think that’s a really important question! A lot of discussions in this space do end up a bit confused thanks to different understandings of these terms. I generally feel like “quality” is a subjective judgement, while “impact” is a bit more objective, since it’s referring to how an article (or dataset or whatever) has somehow changed something(s).

        Of course, there are many sorts of impacts, and a lot of things research can change: medical treatments, government policies, public opinion, disciplinary boundaries, lexicons, experimental methods, teaching approaches, theoretical stances, and of course much more. Not all of these are equally valuable to everyone; heck, some impacts might even be negative for some folks.

        So, I think there are a lot of ways to operationalize impact, depending on what you care about. I think that’s not often been as clear as it should be in policy, funding, and tenure decisions. I’m really excited to see this starting to change, to see growing recognition that “impact” can’t be just one thing.

        The exciting thing about altmetrics is that we can start maybe finding ways to measure impacts that were before unmeasurable. Not everyone is going to care about impact on public education, for example. But some funders and admins, it turns out, do care. For them, citations from Wikipedia, science blogs, or mainstream media may be valuable indicators.

        Not everyone is going to care about immediate impacts on informal scholarly discourse. That’s cool. But for those that do (and there are good reasons to, since this informal discourse often drives scholarship more than formal publication), tracking the blogs and Twitter feeds of selected scholars may prove a useful indicator.

        And there are tons of other cool potential indicators of various impacts out there, many of which have been studied in the scientometrics community for a while now, and some of which are pretty new: citations, patents, clinical guidelines, acknowledgements, Nobels, newspaper articles, doctoral committee membership, collaborations, grant awards, conference acceptances, reference manager inclusion, downloads, hires, hyperlinks, social bookmarking, inclusion in syllabi…and many more.

        The challenge for, and promise of, altmetrics is to start using all these indicators, together, to tell this broader and finer-grained story of multiple impacts. To do that, we’re going to need a lot more social science describing the properties of these newer indicators, assessing their validity and examining their value. I’m excited that a lot of that’s starting to appear (in the recently-published PLoS Altmetrics Collection, for example), and excited about continuing to push it forward.

        In the meantime, these metrics and the systems that gather them are helping to provoke thoughtful discussions about what we mean by impact(s), and I think that’s a really positive thing.

        • Bob O'H says:

          Thanks for your reply. I think your terminology clarifies my question – we want to measure quality (but can’t), so how do we make sure our measures of impact are good approximations to quality? (and yes, quality will mean different things to different people)

          One thing that worries me is that if you don’t tackle this question, you’ll find that people start using metrics that are crap, but are easy to calculate, like we do now with the impact factor(*) and the h-index.

          (*) actually, I don’t think the impact factor is crap – it’s not great but not that bad either. The problem is we take it far too seriously, and that leads to all sorts of crapness.

    • Jeremy Fox says:

      What Bob said. Studying how different metrics are or aren’t correlated just isn’t an answer to the question Bob posed.

  11. Mr. Gunn says:

    Everyone says they don’t look at that stuff, but journal prestige is something all PIs talk about, and that prestige is not based on data. Having more data for more dimensions of importance can’t be a bad thing, and it won’t replace the discussions that happen in tenure committees; it’ll just give them a little more to chew on.

    Tenure committees are the main point anyways, it’s giving people a reason to do things beyond just publish papers.

    • Jeremy Fox says:

      Ah. I take it that, by “altmetrics”, you mean not just “metrics besides impact factor for assessing publications” but “metrics for measuring pretty much anything that academics do, or might be asked to do”. That wasn’t the subject of the post, but ok…

      Well, first of all, tenure committees already have various numbers besides publication metrics to chew on. Number and size of classes taught. Student satisfaction surveys for assessment of teaching. Grant dollars brought in. Number of graduate students supervised, and graduate student committees served on. Numbers of departmental, faculty, and university committees served on. Etc. Insofar as tenure committees give undue weight to publications and grant dollars (and I’m not saying they do or don’t), it is *not* because those are the only numbers available to them.

      Further, every tenure proceeding in North America that I’ve ever heard of requires external letters of reference, and one or more narrative documents from the faculty member being considered, and gives *great weight* to those letters and narrative documents. The reason for this is that those letters and narrative documents can capture all sorts of things that tenure committees want to assess, but that are not reducible to any set of numbers. But surely you’re aware of this?

      You think the “main point” is that tenure committee need a “little more to chew on”, and that *that* will change the incentives academics have to publish papers? You can’t seriously be claiming that the ultimate source of “publish or perish” is the particular numbers available to tenure committees, can you? I can hardly believe anyone would claim such a thing, but if that’s not what you mean, I’m not sure what you do mean.

      Sorry, but I’m afraid I am very unclear on what problem you even think you’re solving, much less how “altmetrics” of any sort could solve it.

  12. Mr. Gunn says:

    Sorry, I was typing that response from my phone in the cab. I meant to say that tenure committees aren’t the main point. Tenure committees will follow what funders do, and funders want to know how best to allocate their funds. Many disease advocacy groups have started to say “to hell with academic research, we’ll hire our own researchers”. I’ll come back to that…

    So there are two things here: research belongs to more than just researchers. All the other stakeholders – businesses, journalists, the public – are dearly lacking some way beyond press releases telling them “X causes cancer” to know what’s important. There’s a huge amount of research that has gone into figuring out what links/ads/etc to show to someone who lands on your website, so what would happen if we turned some of this technology to solving real problems, instead of helping wealthy white people buy more crap? Would you look down your nose at an algorithm that proposed to be better at finding the research you should be reading than your own manual searches?

    I apologize for sounding irritated above, but I guess I just don’t understand why anyone would be against having more and better ways of finding relevant information.

    One thing altmetrics specifically proposes to help researchers with is getting credit for something other than publications – for sharing data, for releasing code developed in the course of their work, or for doing public outreach and education. These things are more than just “stuff I ought to do”: they’re part of the social contract of doing research with public funds. But researchers often don’t do them because they don’t have time. In other words, if the only incentive is to publish papers and write grants, we shouldn’t be surprised when all anyone does is write grants and publish papers – with so much negative data swept under the rug to trip up future generations of researchers, so much code re-written, so much data re-generated, all while we keep churning out papers, most of which aren’t reproducible… It’s just hard to see how anyone could argue that looking at things from a new perspective via altmetrics is a bad thing, or even not worth trying. We’ve got to try something, because the system is broken.

    Maybe the status quo has been good to you and you’re not happy with any disturbance in the way things have always been done. After all, if you’ve mastered the rules of the game, it sucks if people go and try changing them on you, but this isn’t really about changing the rules, it’s about making the game board a little bigger, with more ways to win, none of which come at the expense of the old ways.

    • Jeremy Fox says:

      Thanks for the clarification, much appreciated.

      I’m afraid that you’re lumping together so many issues that I find it difficult to have a productive conversation. I’m not sure what the undoubted incentives academics have to write papers and seek grants have to do with public access to the primary literature, what either of those things has to do with whether current research “solves real problems” or just “helps rich white people buy more crap”, and what any of those things has to do with the topic of the post, which is altmetrics for measuring publication “impact”. Without wanting to deny (or agree) that these issues might somehow all be connected so tightly that they all can only be discussed together, I’m afraid the connections are not sufficiently obvious to me to allow me to discuss these issues in the way you seem to want to discuss them. If you wish to put this down to my own failings or limitations, go ahead.

      I don’t appreciate the implication that, because my views on some issues are different than yours, that I’m just someone who’s merely learned to “play the game” and that I feel threatened or scared or annoyed if anyone goes “changing the rules”. You don’t know me. Please don’t assume that you do. If you knew me, you would be aware that, in many respects, I “play the game” in a rather different manner than many researchers in my field, and that my time allocation often is quite contrary to the incentives I am faced with. I have restricted myself to discussing your positions, without trying to psychoanalyze the reasons why you hold those positions. I suggest that you may wish to show others the same courtesy.

      As you are not inclined to restrict discussion to the issue raised in the post, and since I am not prepared to commit the time required to discuss the huge range of issues that you want to link together and discuss, I won’t be responding to any further comments you wish to make. Any further discussion would simply involve us talking at cross purposes, and I have no desire to waste my time or yours.

  13. Actually, since articles, citations and venues form a (hyper)graph, it would be trivial to estimate some type of influence function where the statistic is not only related to the number of directly connected nodes. After all, PageRank _is_ a citation ranking.
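
    For example, with the citation graph in hand, a PageRank-style influence score is only a few lines (a toy sketch using networkx; whether the resulting numbers mean “influence” is exactly the question raised in the post):

    ```python
    # Toy PageRank over a citation graph. Edge (a, b) means "paper a cites paper b".
    import networkx as nx

    G = nx.DiGraph()
    G.add_edges_from([
        ("p1", "p2"), ("p1", "p3"),
        ("p4", "p2"), ("p5", "p2"),
        ("p2", "p3"),
    ])

    scores = nx.pagerank(G, alpha=0.85)  # the damping factor is itself a modelling choice
    for paper, score in sorted(scores.items(), key=lambda kv: -kv[1]):
        print(paper, round(score, 3))
    ```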
