Impact factors — RCUK provides a chance to act

If I had more time, this post would be shorter. But it explains how we have an opportunity to get UK research councils to help break the corrosive dependence of researchers on impact factors. Please at least skim all the way to the bottom to see how easy it is for you to participate.

15-3-2013: Please see update at the foot of this post for an important announcement from RCUK.

I had no idea when I clicked ‘publish’ last August that my ‘Sick of Impact Factors’ post would unleash such a huge response. Evidently I had pulled on a chain that everyone feels bound by. The post attracted over 180 comments and tens of thousands of page views. It is still getting over 2000 hits a month.

As I wrote in that post (and elsewhere), the abuse of journal impact factors (IFs) in the assessment of scientists applying for jobs, promotion or funding is a deep-seated and largely self-inflicted problem. It is retarding the uptake of open access, because the addictive lure of IFs inhibits some authors from choosing new OA journals and allows ‘high-impact’ journals to lever higher article processing charges (APCs) from those paying for gold OA.

The response to the blogpost has been very gratifying but will ultimately be worthless if it cannot be harnessed to make the necessary shift away from a culture of dependency on impact factors. It seems to have influenced the thinking of at least one journal. Nature Materials cited my post in an editorial last month that warned its readers of the dangers of using IFs as a guide to the performance of individual researchers. I was pleased also to see that the journal’s Instructions to Authors now links to this editorial.

That’s a start and I very much hope other journals will follow this fine example. But for the shift to take hold we also need influential players such as universities and funding agencies to publicly disavow the use of impact factors in the assessment of individuals.

We should not underestimate how long this might take. In guidelines for the Research Excellence Framework (REF), which is currently gathering information on outputs to assess the quality of UK research, HEFCE has stated clearly that journal impact factors will not be used by its judging panels. However, there remains widespread distrust in the research community that this will actually happen. An informal survey of departmental practices around the country by Dr Jenny Rohn found that many are looking at IFs when deciding which of their researchers’ publications to submit to the REF. Clearly, old habits die hard.

But now there is a new opportunity to hammer one more nail into the impact factor coffin. On 6th March RCUK, the body that oversees the UK research councils, issued updated guidance on its new open access policy, which is due to take effect on 1st April this year. This policy has been much debated but I don’t want to rehearse those arguments again today. Instead I want to focus on a key point relating to impact factors.

While their OA policy indicates RCUK’s preference for immediate access funded by APC payments (gold OA), RCUK-funded authors can alternatively meet their obligations by depositing their final peer-reviewed manuscript in an institutional repository (green OA). The guidelines (PDF) seek to clarify the flexibility available to researchers in deciding which route to follow; that choice may well affect the particular journals they select, which inevitably raises the question of impact factors.

The guidelines make it abundantly clear that the “choice of route to Open Access remains with the author and their research organisation” but how that plays out in terms of journal choices on the ground remains tricky. Section 3.5(ii) discusses the payment of APCs from the block grants that RCUK will provide to institutions. RCUK hasn’t specified upper or lower limits on APCs that are allowable although they are keen to drive down costs*:

“institutions should work with their authors to ensure that a proper market in APCs develops, with price becoming one of the factors that is taken into consideration when deciding where to publish.”

In pursuance of this admirable goal the document notes that:

“HEFCE’s policy on the REF, which puts no weight on the impact value of journals in which papers are published, should be helpful in this respect, in that it facilitates greater choice.”

However, RCUK fails to follow HEFCE’s lead with a statement of its own.

Later in the document (section 3.6(iii)) the issue of journal choice is raised again (with my emphases in bold):

“Where an author’s preference is ‘pay-to-publish’ and their first choice of journal offers this option, but there are insufficient funds to pay for the APC, in order to meet the spirit of the RCUK policy, the Councils prefer the author to seek an alternative journal with an affordable ‘pay-to-publish’ option or with an option with embargo periods of six or twelve months.”

Admirable flexibility but the guidance again fails to offer researchers explicit reassurance on the question of impact factors, which cannot at present be disentangled from the decision about which journal to select.

The remedy for this is straightforward: the guidelines should be amended to include an explicit and public reassurance to researchers that RCUK and their associated funding councils will put in place instructions for reviewers and panel members to disregard impact factors in assessing all funding applications. Given RCUK’s evident approval of HEFCE’s IF-blind policy, I expect them to be ready to embrace an opportunity to foster a real improvement in our culture of assessment. The Wellcome Trust already has a statement to this effect built in to its open access policy, affirming the principle that:

“it is the intrinsic merit of the work, and not the title of the journal in which an author’s work is published, that should be considered in making funding decisions.”

Every little helps, so perhaps you can help me to persuade RCUK to adopt a similar statement? Together we can provide a friendly shove in the right direction.

The consultation on the guidelines is open until next Wednesday, 20th March. I will be writing to RCUK’s Alexandra Saxon on that date to request that an explicit disavowal of the use of impact factors in the assessment of researchers be included in the revised guidelines (providing a link to this post to explain the reasoning). Please feel free to write in the same vein or, if it is easier, leave a comment here stating that you are happy to be included as a signatory on my email. Or send me an email (s dot curry at imperial dot ac dot uk). Please give your name, title and affiliation. I imagine RCUK will be more attentive to the views of UK-based researchers but there would be no harm in giving a sense of the global reach of the problem of impact factors.

Update (15-3-2013; 11:51): I am grateful to Alexandra Saxon, RCUK Head of Communications, who has this morning added a comment confirming that “RCUK will add a statement similar to the Wellcome Trust’s to the next revision of the guidance, due to be published towards the end of the month.”

I had suspected that RCUK would be sympathetic to our request but it is nevertheless great news to hear of this commitment. Readers should still feel free to indicate their support; I have told Alexandra that I will still write next Wednesday to communicate our collective desire to see abandonment of the use of IFs in assessing applications. My thanks to all who have offered support so far, in comments and emails.

*I realise that RCUK’s preference for gold over green OA entails higher transitional costs in the short term, but would like to set that debate aside just for today.

This entry was posted in Open Access.

64 Responses to Impact factors — RCUK provides a chance to act

  1. I completely support this and ask Stephen to add my name to any list arguing against Impact factors. IFs are one of the most UNscientific metrics ever devised. It is made worse by being compiled by unaccountable profit-driven companies running closed processes which are suspect both in coverage and algorithm.
Please feel free to quote this under CC-BY.

  2. Adam Etkin says:

    A small point. Regarding your statement:

    “…the addictive lure of IFs inhibits some authors from choosing new OA journals…”

Doesn’t this impact (pardon the pun) ANY new journal, not only those which are OA? Misuse of the IF is not only an OA issue.

    • Stephen says:

That’s true. There is a general corrosive effect of impact factors on researchers — we need to find our way to a system that provides better measures of the performance of individuals (much discussed in the thread beneath the Sick of Impact Factors post).

      But IFs are also an impediment to the adoption of open access for the reasons given above. And that provides (to me at any rate) a further incentive to escape their pull. I don’t underestimate the size of the task. Even if we manage to get RCUK to amend their guidance, that will only be a small step. But it will be another move in the right direction.

  3. Frank says:

    What about Cancer Research UK? They seem to be very quiet on Open Access but are also major research funders. It would be good to have them adopt a similar statement.

    • Mike Taylor says:

      You mean a similar statement on impact factors? Because they already have an OA policy as described in this FAQ:

      [CRUK] requires electronic copies of any research papers that have been accepted for publication in a peer-reviewed journal, and are supported in whole or in part by Cancer Research UK, to be deposited into Europe PMC once established. All deposited papers must be made freely accessible from the Europe PMC as soon as possible, and in any event within six months of the journal publisher’s official date of final publication.

      The policy is mandatory for all current grantholders and Cancer Research UK Institutes for papers submitted for publication after 1 June 2007. The policy was approved by the Cancer Research UK Scientific Executive Board (SEB) in March 2007.

    • Stephen says:

      CRUK certainly has an open access policy but as far as I can tell it doesn’t include any statement indicating that impact factors or journal names will be disregarded in the assessment of funding applications. Would be good if they too could follow suit.

      • Aoife Regan says:

        Hello my name is Aoife Regan and I am Head of evaluation at CR-UK. I heard we had been mentioned on this excellent blog and I am very happy to be able to respond.

When assessing a new application or reviewing an existing award we expect our reviewers to consider the merit of the work contained in any publication and not the journal it is published in. CR-UK does not provide reviewers with the journal impact factor. They are given details on the most cited or relevant articles, regardless of where those articles have been published. It should also be remembered that reviewers consider publications alongside other indicators of quality and esteem, such as patents granted, other awards, keynote lectures given, citation in clinical guidelines etc.

        Whilst we never provide JIFs, our experience suggests that our committee members do sometimes consider the journal title to be a proxy for quality, particularly for more recent publications that have yet to build up citations.

As soon as the dust settles on all the various consultations, we will be revisiting our OA policy, and will clarify our stance on the use of journal impact factor in the assessment of research. However, like others on this thread I am not convinced that funders changing or clarifying their policies will solve the problem. Ask any researcher for their opinion on the top five journals in their field and they will have a ready answer, so how do we prevent them from asserting this opinion when reviewing grants? Should we exclude the journal title from submissions, effectively blinding reviewers to where an author has been published? Would you be happy to review on this basis?

Also, if a new, more effective article-level metric were to come about, how could it be applied to more recent publications that have yet to build up citations?


        • Stephen says:

          Many thanks for your contribution Aoife. Very good to hear that CRUK is also thinking about this issue.

          I think you are right to doubt that “funders changing or clarifying their policies will solve the problem”. As you say, the problem lies primarily with researchers themselves; we all have the habit of reaching for impact factors when short of time or considering work that is newly published or outside our immediate domain of expertise. However, getting funders to publicly disavow the use of impact factors in assessment is a small but important part of impelling the community to a long overdue culture change. It will take not only funders to echo the message but universities too.

We will also need to foster alternative practices. Article-level metrics are one way to go, though difficult to implement for recently published work. I remain nervous in any case about over-reliance on boiling things down to numbers. I don’t think omitting the name of the journal is practical.

          I wonder if a better way is to ask applicants to briefly summarise their recent work and its significance, written in a style that should be intelligible across inter-disciplinary boundaries? The BBSRC does this in a way at the moment since applicants are asked to write about their track record — this is an opportunity to give a good account of your recent work. Such a demand would have the benefit of focusing reviewers on the work itself and might also encourage authors to think a little more broadly about what they are doing and why. It might also lower the activation barrier to the composition of lay summaries for papers (which is something I would like to see incorporated into all papers as the open access literature expands).

  4. Please include my name.

  5. Great post and effort, Stephen. Please add my name as a signatory when you write to RCUK.

  6. Jim Woodgett says:

I think it’s important to include support/feedback from non-UK based scientists for two reasons. Firstly, the REF panels are typically composed of people not from the UK (for reasons of conflict of interest). Secondly, this avoids the dismissive criticism that it’s only self-interested people who might be affected by spurious REF rankings should IFs be taken into account. In any case, happy to add my name if you want to.

As an aside, while the use of the JIF as a surrogate for actual quality is mistaken, I don’t think one can ignore differential quality in journals – and that there is a hierarchy. The mistake is one of guilt-by-association – assuming that because a paper is published in a high IF journal it is fated to high impact itself. This is demonstrably untrue but I will argue till the cows come home that EMBO J or Development, for example, have higher standards than 100 other journals I could name and as a consequence, I tend to read their TOCs. I also use PubMed to email me weekly keyword results that scrape the entire published world. In other words, I can only afford, time-wise, to scan a small number of journals for papers that *may* be of interest to me (which aren’t in my narrow keywords). I will admit that limited list tends to comprise journals with higher IFs. Shoot me now 🙂

    • Stephen says:

      Jim – I agree there is still a hierarchy of journals, as long as you are looking at averages. I also suspect that many in the research community appreciate the spur that this hierarchy gives them to try harder. The problem arises, as we all know, when IFs are handed out as prizes for particular papers or researchers.

      The question of what added value arises from the editorial practices of different journals came up recently at a meeting organised by the NUJ and held at the Wellcome Trust:

      It’s a rather long video, I’m afraid, but covers a range of interesting topics. Afterwards I had an interesting conversation with Peter Lee from Cell Press (he is the editor of Immunity). He stoutly defended the role that he could play to bolster the quality of submitted papers. However, that takes a level of dedication and input that is not seen at many journals.

    • “Firstly, the REF reviewers are typically comprised of people not from the UK (for reasons of conflict of interest).”

Not sure if this is true: in the panels most relevant to my department, 10/16 of the main panel are from UK universities, and 14/17 on the sub-panel. Whilst I don’t want to make assumptions about individual panel members’ views on IFs and how closely they will adhere to the HEFCE guidelines, if reliance on IFs is so inherent in academia it’s no wonder that submitting academics may not trust REF practice to follow the theory.

    • Jim:
Currently, there are two main metrics aligned with IF-based journal rank (weakly, but significantly, r² ≈ 0.1–0.3): citations and perceived importance. Other metrics, such as retraction rate, replicability, effect size overestimation, correct sample sizes, crystallographic quality and sound methodology, either correlate *negatively* or not at all.

      In other words, if you are telling us that you primarily look at papers in hi-IF journals, you are also telling us that you focus on papers that are more likely to be retracted, more likely to have lower methodological standards, lower crystallographic quality (if this were your field) and that overestimate their effect size with too low a sample size.

      I think it’s quite clear that nobody needs to shoot you, your reading strategy does that for us.

      See references here for the data above:

  7. Pep Pàmies says:

    I would argue that the bad habit of using the Journal IF to assess individual researchers or papers is rooted in three fundamental reasons:

    1. We are a competitive bunch fighting for limited resources.
    2. There is too much to read and too little time to sift through what may be relevant.
    3. There is no available easy-to-grasp, accurate metric for the quality and/or relevance of a paper.

    Given premisses 1 and 2, the need for metrics and rankings becomes obvious. And I guess that in the absence of alternatives, the widely known and simple-to-understand IF became the lazy-man’s de facto metric. Too bad, but perhaps it was inevitable.

    Anyone facing a mountain of CVs or proposals to assess will be tempted to rely on shortcuts, such as the perceived prestige of the journals mentioned in such documents (which I guess largely correlates with the IF).

The best way to fight this self-inflicted problem of using the IF beyond its remit (to assess journal quality) would be to find a much better metric. In view of the above propositions, I believe that only in the face of a much better something will we overcome the resistance to abandon habit and make the effort necessary for change. It’s all about incentives.

    Still, Stephen’s proposal is commendable.

    [Disclaimer: I am an editor at Nature Materials, and the man behind the editorial mentioned in this post.]

    • Stephen says:

      “Given premisses 1 and 2, the need for metrics and rankings becomes obvious.”

      Those premises certainly create a challenge but is it obvious? Ronald Vale would certainly have us try harder. As discussed on the Sick of Impact Factors thread, there remain significant concerns about all kinds of metrics, though article-level indicators are at least an improvement on journal-level measures.

The problem is particularly acute when trying to judge work that is outside of your area of expertise. Within your own field, I think a quick scan of the abstract is often enough to make an assessment of the worth of the contribution that is being reported. Beyond it, things are much more difficult.

      One method that HHMI has used (and I’ve seen it in promotions procedures here at Imperial) is for the applicant to name their top four papers for a particular period and to write a short statement to explain why they are significant for a cross-disciplinary audience.

      • Mike says:

        there remain significant concerns about all kinds of metrics, though article-level indicators are at least an improvement on journal-level measures.

        … when comparing the citation/distribution rates of individual articles, which again, may not reliably indicate quality. I know you (Stephen) realise that the only way to assess quality is for an expert in the field to carefully read and judge an individual article on its own merits.

        Which highlights another major problem with REF – there are never enough experts on any submission panel to be able to properly judge all submitted articles.

        Pep wrote:

        1. We are a competitive bunch fighting for limited resources.
        2. There is too much to read and too little time to sift through what may be relevant.

While we’re limited as scientists in how much money we can squeeze out of various resources, we arguably have it in our power to control how much time we have to read the relevant literature and allow it to correctly influence our own work. Publish or Perish is not a new concept (Richard Feynman was talking about it in or before the 1950s), but as a community, we still haven’t got to grips with the idea that publishing too soon is not a good strategy.

        As authors and reviewers (at individual article, or REF type level), we need to work harder to make sure we don’t use the increasing volume of literature as an excuse for not doing the best quality research.

  8. Excellent initiative – please add my name too.

  9. Julia Bardos says:

    Please include my name

  10. Fiona Nielsen says:

    Excellent post, and great initiative.
    Please add my name too.

  11. Alexandra Saxon says:

    Stephen – thanks for your blog and making this point.

    I am pleased to confirm that RCUK will add a statement similar to the Wellcome Trust’s to the next revision of the guidance, due to be published towards the end of the month.

    Hope that this helps!

    • Stephen says:

      My thanks to Alexandra — who is RCUK’s Head of Communications — for this very welcome announcement. I have updated the blog accordingly (see above).

  12. ditto. Add my name please. Might as well use this extended consultation process for some good.

  13. Mike says:

    Stephen – thanks for your continued work with these issues. You’re doing a great job of actually getting things done, as well as communicating with the audience! Please include me as a signatory in your letter (Dr M.S. Fowler, Lecturer in Biosciences, Swansea University).

I’d like to suggest a simple way of avoiding the IF influence in REF submissions: all articles submitted to the REF must be based on a standard ‘preprint’ format (Abstract, Intro, Methods, Results, Disc, Sup Mat). No need to allow reviewers to be swayed (consciously or subconsciously) by the prestige of journal titles.

This standard format allows reviewers to focus on the content of individual research articles without having to adapt to the different formatting requirements across the individual submissions. And we already spend plenty of time faffing about with submissions for the REF, so some simple reformatting of a document will not add significantly to the time spent preparing for it.

    We’re all used to reviewing manuscripts in this fashion for journals and grant proposals, so I doubt there’s an argument to suggest it wouldn’t work for REF panels. Unless of course they’re willing to accept that they do need shortcuts to assess quality, in which case the system is more fundamentally flawed than already appreciated.

    • Stephen says:

The trouble is that no-one thinks it is really possible for REF panels to actually read all the papers submitted, so I’m afraid the system is already flawed. We are rather relying on the expertise of the panel members but it is probably difficult to ensure that it is comprehensive within each subject area. I would favour the use of abstracts and ‘summaries of significance’, in which the author explains (in terms that would reach beyond the disciplinary boundary) why their results are so interesting or important.

  14. Elizabeth Stanley says:

    Please include my name

  15. Ian Gibson says:

    As a librarian who has to pay for you folks to access Journal Citation Reports I find this effort commendable. Every year when we have to renew JCR I always make the argument that we would inflict less harm on our scholars if we just piled the money allocated for it in the middle of the room and burnt it (or even better used it to fund OA APCs)…

    One less bad thing about you folks looking at IFs is at least most of the sciences are well covered by WoS (i.e. the numbers used to calculate the IF might have some loose connection with reality). Most of the folks who I’ve argued with about IFs are in subjects like Business and Education that are not covered nearly as well.

  16. Count me in Stephen,

  17. Tom Olijhoek says:

    Dear Stephen,
    I like your excellent blogs and the one above on the issue of the impact factor is again right on target.
Please add my name to the list, as coordinator of the OKF open access working group.
    In the letter to RCUK it would perhaps be useful to include links to the presentation on the impact factor that I gave in a session led by Cameron Neylon at Berlin10 in South Africa, titled ‘Beyond the Impact Factor: why the Thomson-Reuters Impact Factor has to be replaced’, and to Cameron’s presentation with his view on what we can use after the impact factor is abandoned: ‘Research assessment to support research impact’.
    I support all your arguments. In addition I also think that there is great need for building a new assessment framework to judge individual articles and authors that makes use of methods like altmetrics. At the same time we need to provide guidance on the quality of journals by developing a set of specific criteria to assure authors and funders of research of the quality of OA journals.
    The DOAJ will start a project to do just this in the months to come.

    • Stephen says:

Thanks for the comment Tom. My aim is to keep the email simple and direct since that is what most signatories will have signed up for. However, it will point back to this blog post and I shall try to direct their attention to the comment thread.

      I fully agree with you that we need to put in place workable alternatives for the impact factor in order to escape them properly. I imagine this will be a mix of alternative metrics (focused on the paper) and procedures (e.g. asking applicants to explain briefly the significance of what they think are their best papers).

  18. Jim Till says:

    Another non-UK based scientist supports this initiative.

  19. Journal Impact Factors: No! — But Author/Article Download/Citation Metrics: Why Not?

    Brody, T., Carr, L., Gingras, Y., Hajjem, C., Harnad, S. and Swan, A. (2007) Incentivizing the Open Access Research Web: Publication-Archiving, Data-Archiving and Scientometrics. CTWatch Quarterly 3(3).

Harnad, S. (2008) Validating Research Performance Metrics Against Peer Rankings. Ethics in Science and Environmental Politics 8(11). doi:10.3354/esep00088. In the theme section: The Use And Misuse Of Bibliometric Indices In Evaluating Scholarly Performance.

    Harnad, S. (2009) Open Access Scientometrics and the UK Research Assessment Exercise. Scientometrics 79 (1) Also in Proceedings of 11th Annual Meeting of the International Society for Scientometrics and Informetrics 11(1), pp. 27-33, Madrid, Spain. Torres-Salinas, D. and Moed, H. F., Eds. (2007)

    • Stephen says:

      Thanks Stevan – don’t suppose you’d care to give the briefest of potted summaries of those papers to capture the key message?

      • Validating Open Access Metrics of Research Impact

        SC: “Thanks Stevan — don’t suppose you’d care to give the briefest of potted summaries of those papers to capture the key message?”

        Happy to oblige:

        Scientometric predictors of research performance need to be validated by showing that they have a high correlation with the external criterion they are trying to predict. The UK Research Excellence Framework (REF) — together with the growing movement toward making the full-texts of research articles freely available on the web — offers a unique opportunity to test and validate a wealth of old and new scientometric predictors, through multiple regression analysis: Publications, journal impact factors, citations, co-citations, citation chronometrics (age, growth, latency to peak, decay rate), hub/authority scores, h-index, prior funding, student counts, co-authorship scores, endogamy/exogamy, textual proximity, download/co-downloads and their chronometrics, etc. can all be tested and validated jointly, discipline by discipline, against their REF panel rankings in the forthcoming parallel panel-based and metric REF in 2014. The weights of each predictor can be calibrated to maximize the joint correlation with the rankings. Open Access Scientometrics will provide powerful new means of navigating, evaluating, predicting and analyzing the growing Open Access database, as well as powerful incentives for making it grow faster.

I would like to add my support to this statement. Like many people here I believe that the way to get rid of the increasing damage that the IF does to science careers, particularly in biology, needs to be a concerted grass-roots movement. It also seems to me that the OA movement will not change the case against the IF. OA and IF can indeed be linked (and in an ideal world would be VERY linked) but the reality is that they are different matters and, in many ways, it can be distracting to discuss the IF in the context of OA. Now that the OA battle has been pretty much won, perhaps we can (as a community) address the IF and a number of issues associated with it: the evaluation of scientific merit and impact, and career development at the top.


    As far as I know, neither RCUK nor the REF ever explicitly counted journal impact factors. In fact, the RAE explicitly forbade submitting them and counselled against consulting them.

However, this does not eliminate the fact that there proved to be a strong correlation between the resultant RAE peer-panel rankings and citation counts (calculated a posteriori), as Charles Oppenheim’s many studies showed.

    The correlation was with each department’s average submitted-author individual citation counts rather than directly with journal impact factors (though a closer look would probably have found a — weaker — correlation there too, because author average citations and journal average citations are correlated too).

It is unlikely, by the way, that this implies that the RAE panels cheated and consulted journal impact factors after all: it just means that there really is a correlation between quality and impact — though it is probably a stronger correlation with author/article impact than with journal average impact.

    (And, as I said in my earlier comment, it could be made even stronger if one took into account multiple metrics — downloads, co-citations, growth rate, etc. — especially if one took the trouble to co-validate them jointly, discipline by discipline, against peer rankings.)

    The most relevant take-home message in this, however, is that it is — and always has been — an empirical as well as a strategic error to try to “level the playing field” for new journals (OA ones, in the instance) by asking that their journal-names be ignored, and only the quality of individual articles be taken into account:

    Journals have quality (peer-review) standards, and a track-record indicating how well they have met them. That track-record is the average quality of the articles in that journal. And it is correlated (though far from perfectly, and with different strengths in different disciplines and sub-disciplines) with the journal impact factor.

    Everyone assessing quality cannot read and re-assess every single article by every single author. They have to use predictors, like journal track-record. Users do this all the time, knowing perfectly well that an article entitled “X” does not have the same weight — does not equally merit taking the precious time to read, or, more precious still, the risk of trying to apply and build upon — if it appears in a journal with a strong track-record for high quality articles versus a journal with a weak track record for articles. (Same is true for author identity and track-record, by the way.)

    If every user knows this perfectly well, it seems not only futile but counterproductive to ask RAE, RCUK or REF to be blind to this predictive power.

    • Mike Taylor says:

      How fortunate that the REF and RCUK (and Wellcome Trust) disagree with you.

    • Stephen says:

      I agree with some of what you say Stevan but this isn’t simply about asking RCUK to be blind to impact factors. They have never had an explicit policy as far as I am aware but, having sat on panels, I know what a powerful hold it has, given the comments sometimes made about this or that applicant. I’ve done it myself in the past and even now have to check myself at times.

      The main aim here is to make one small push (I hope there will be many others — with other funders and institutions) in the direction of a cultural shift away from over-reliance on these indicators. As you suggest, part of the solution will be to bring in a suite of other, preferably article-level indicators; but it would be helpful also for decision-making bodies to think about additional means for facilitating the assessment of what people have done, rather than relying so much on the assessment of where they have published.

      • Is counting citations and downloads “relying on where they have been published”?

        • Mike Taylor says:

          Of course not. It’s relying on how much the paper has been used. Which is much closer to what we’re interested in (though still of course far from perfect).


            And what do you think I’m recommending, Mike, that it is so “fortunate that the REF and RCUK (and Wellcome Trust) disagree with” me about…?

            To consider also (alongside many other [validated] metrics) the track-record of the journal for quality-standards? Better to ignore, despite any correlation with quality?

            (Should author track-records be ignored too — hence the impact of any prior work other than just the one under evaluation? Would that not weaken rather than strengthen an already rather iffy subjective evaluation process that could use all the [validated] help it can get?)

            • Mike Taylor says:

              The part that I, and RCUK and Wellcome and the REF, disagree with is: “it is — and always has been — an empirical as well as a strategic error to try to “level the playing field” for new journals […] by asking that their journal-names be ignored, and only the quality of individual articles be taken into account.”

        • Stephen says:

          No it isn’t but it’s what lots of people do and it’s a practice that we need to discourage — by promoting alternatives.

          • I’m lost!

            If you agree that individual author and article download and citation metrics are not “relying on where they have been published” then why should they be discouraged, and what “alternatives” should be promoted?

            Or do you just mean promoting new Gold OA journals just because they are OA, despite the fact that they lack a track-record?

            • Stephen says:

              In a meeting – first of several – will try to clarify later tonight.

            • Stephen says:

              OK – I see where the misunderstanding arose: I expressed myself badly. All I meant to say was that the practice of “relying on where they have been published” — i.e. relying on impact factors — is problematic and should be discouraged. Some of the alternatives required may be other metrics, though I have reservations about any tendency to boil a scientific judgement down to numbers. At the very least, I would like to see a panel of measures used. But I would also encourage authors/applicants to come up with short summaries of their papers for people outside their disciplines who might be trying to judge the work. I was impressed by this paper from Ronald Vale, which took a hard look at how we evaluate one another.

              It’s not about promoting one flavour of journal over another. And, for what it’s worth, I am not endeavouring to make any kind of special pleading for OA journals. I agree that stringent peer review practices are vital and that good journals should seek to win business by establishing a reputation for quality. Even so, we do need to get away from journal-based metrics for individual researchers.


            @Mike Taylor: “The part that I, and RCUK and Wellcome and the REF, disagree with is: ‘it is — and always has been — an empirical as well as a strategic error to try to ‘level the playing field’ for new journals […] by asking that their journal-names be ignored, and only the quality of individual articles be taken into account’.”

            I know that you (and many others) do this special pleading on behalf of OA journals, just because they’re OA, Mike, rather than because of their proven track-record for quality standards.

            And, yes I think it’s dead-wrong, and counter-productive, both for quality and for OA.

            You may as well argue that we should all ignore journals’ quality track-records and pick where to publish, and what to trust to read and use, on the basis of price — choosing the journal with the lowest subscription fee.

            Authors pick which journals to publish in on the basis of their quality standards (and journals differ in these, substantially — and they take time to prove themselves). And users pick which articles to read and trust on the basis of the journal’s (and author’s) quality track-record.

            If not, then why even bother with peer review at all? Just assume that everything is equally worth your time and effort, read everything, and decide for yourself.

            (I wonder if you would pick the same strategy for food or drug safety: try it and decide for yourself…?)

            No, the price of OA to peer-reviewed research is not to renounce peer review (or to flatten it to a generic pass/fail). That’s just the price of pre-emptive Gold OA (whether for the sake of economic model or re-use licenses).

            As usual, mandating Green OA will bring us everything we want if we don’t renounce it because we insist on more than free online access, now — and end up with even less, for yet another decade.

            • Mike Taylor says:

              You misunderstand me. I am not talking about OA at all at the moment, only about how published papers are evaluated. The right way to do it (as recognised by Wellcome, RCUK and the REF) is by the content of the paper itself, not by the journal it appears in. That is true for OA and paywalled papers alike.

  22. Samuel Furse says:

    Would like to add my support to this, though I can’t shake off one or two boring questions, the answers to which may already have been discussed and I have missed (apologies if so):

    1. We all know of people who are good, and people who are not so good, and some who are even quite bad. How do we avoid treating scientists indistinguishably on paper? Clearly IFs are not the ideal way to do this, but can we think of something else? And if we don’t, will there be a ‘metrics vacuum’ filled by something worse?

    2. Where is the block grant provided by RCUK for OA coming from? Is it just moved from the research pool? By extension, what is likely to happen to the amounts previously allocated for subscriptions at university level? I am concerned that the transition and post-transition periods might squeeze the funding available for research at a time when there’s precious little anyway.

    • Stephen says:

      Thanks for your support Samuel. To address your questions: for (1), can I point you to the comment thread under this post; but see also this one about evaluating scientists, which is based on a paper by Ronald Vale.

      For (2), the RCUK funds come straight out of the science budget. In the long term it should be possible to transfer subscription monies to pay APCs, but in the short term the plan is for the UK to pay an additional transitional cost. This has caused huge debate which I don’t think I need to rehearse here. If you search back through the posts tagged with ‘open access’ here, you can find out more (warning: there are a lot of them).

  23. Sarah Greene says:

    Thank you, Steve, for your efforts and eloquence on this matter, and to many others who have responded here. I work with a nonprofit, open science initiative, Cancer Commons in Palo Alto. We work on multi-institutional, collaborative projects to advance the science of precision oncology, and to rapidly disseminate consensus insights on biomarkers and targeted therapies to physicians and patients. Outcomes are reported back to researchers and the data is used to refute, validate, and amplify the latest model. This feedback loop builds a Rapid Learning Community. However, we’ve found that the need to publish findings prior to depositing and discussing results with collaborators (=competitors) significantly blocks the process of ‘rapid learning’ – even when the grant objective is to save lives. We are thus working with experts (mathematicians, computer scientists) to develop a reward metric that will ‘score’ researchers’ contributions on a collaborative project. We’re seeking funders and administrators who will support this alternative measure of scientific contribution — so important in translational research. If you would like to learn more about our efforts, please contact me at sg (at)

Comments are closed.