The pressure of high-impact

High impact papers, h-indexes and pedigree. These are all things I have been forced to think about lately. I have recently completed two grant applications. For each of them, I had to write the cringe-worthy section on myself and how great I am (and my research and my training). I had to delineate my h-index* and pedigree – yes it is called this, even though I am not a Cocker Spaniel.

Tongue firmly planted in cheek, I wrote this bit and even had many other people read it for me. (TIP: No matter how shy you feel about this, it is a very, very good idea to get other people to read your grant applications, especially the personal track-record bit; this section is hard to write and having others help you is imperative.) A few of my more helpful senior colleagues sent me their ‘track record’ sections, which blew me away: high-impact papers and h-indexes to die for. It also left me with the feeling that this is what I must do to make it.

Regardless of whether the high-impact-factor journal imperative is fair, or will even be used in the upcoming Research Excellence Framework (REF), it certainly feels like high-impact papers are of the utmost importance. You can feel it in the water, and I suspect there are many academics out there who truly believe that 40 papers in Nature are the only mark of a good research career.

I have really enjoyed many recent blog posts by senior, established academics about the problems with impact factors, the REF and h-indexes. Athene Donald, Stephen Curry and Dorothy Bishop have all written about this extensively and thoughtfully, which is soothing, and I find myself nodding my head in emphatic agreement.

But I am an early career researcher. I still feel the imperative to try to put my papers in high-impact journals, the sense that this is what will *make* my career and send me streaming past the ‘track record’ assessment barrier for any grant funding I might apply for. Discounting my PhD research, I have relatively few papers in what some would consider *high impact* journals, and I would be lying if I said I wasn’t worried about this. Even though in my particular case the least highly cited of my publications are in the higher-impact journals, I still feel the pressure to try to publish in Nature, Science and PNAS. I have no solid evidence for this – anecdotal evidence at best – but it seems to me that the people with the most high-impact papers are the most likely to be permanently employed and funded.

I think that to be a healthier, happier researcher I could just disregard this pressure and publish all of my research in lower-impact journals, which in my opinion is quicker and easier. On the other hand, writing up work for a higher-impact, more general publication can be really exciting: at least for me, you have to work much harder to place your work into a wider scientific context for these journals, which is fun, even if the paper does get rejected in the end. It is worth noting that not all work published in high-impact journals is horrid; there are good papers in Nature, Science and PNAS which are general enough to be interesting to all.

Rightly or wrongly, I am under the impression that high impact does matter, and matters very much, and that high-impact publications are the most important kind of paper to have when looking for funding, plaudits and the most successful scientific career – and a permanent job.

*The h-index is a measure of citations versus number of publications: the largest number h such that h of your publications have each been cited at least h times. For instance, if you have 25 publications, each cited 25 times, your h-index is 25. Equally, if you have 500 publications, of which 25 are cited at least 25 times (and the rest fewer), your h-index is still 25.
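For the curious, here is a minimal sketch (my own illustration in Python, not taken from any particular bibliometric tool) of how an h-index can be computed from a list of per-paper citation counts:

    def h_index(citations):
        """Return the largest h such that h papers have at least h citations each."""
        # Rank papers from most-cited to least-cited.
        ranked = sorted(citations, reverse=True)
        h = 0
        for rank, cites in enumerate(ranked, start=1):
            if cites >= rank:
                h = rank  # the top `rank` papers all have at least `rank` citations
            else:
                break
        return h

    # 25 papers cited 25 times each -> h-index of 25
    print(h_index([25] * 25))              # 25
    # 500 papers, 25 of them cited 25 times, the rest cited once -> still 25
    print(h_index([25] * 25 + [1] * 475))  # 25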

About Sylvia McLain

Girl, Interrupting aka Dr. Sylvia McLain used to be an academic, but now is trying to figure out what's next. She is also a proto-science writer, armchair philosopher, amateur plumber and wanna-be film-critic. You can follow her on Twitter @DrSylviaMcLain and Instagram @sylviaellenmclain

32 Responses to The pressure of high-impact

  1. Simon Leather says:

    I totally agree, the pressure to publish in high-impact journals is extremely bad for science in general and biological sciences in particular. It has resulted in a terrible imbalance of disciplines in UK universities, which has a huge impact on the education of our future biologists and has led to the virtual extinction of some whole-organism, applied disciplines in the so-called research-intensive universities. Entomology, plant pathology, nematology, soil science and weed science are all under huge threat, yet they are essential subjects for the future of global food security. Journal impact factor has a lot to answer for.

  2. Steve Caplan says:

    I’ve spent considerable energy thinking about these issues over the last 20 years, and here is my take – albeit from the US, which may differ from the expectations of academics in the UK. First, as I’ve made clear in blogs and comments to blogs in the past, I adamantly oppose non-peer-reviewed publishing in the biomedical/life sciences. This is because it’s too easy to take liberties and have data published that would potentially not stand up to scrutiny (that happens too often anyway), and researchers often do not have the time to read more than just the final conclusions.

    With regard to the ‘high-flying’ journals that are the primary issue of the blog: my view is that regardless of impact (or perhaps indirectly related to impact), each field has a series of journals that publish quality science. By my definition, that means peer-reviewed (by at least 2 reviewers and an editor), with rejection of papers that do not advance science, whose conclusions are suspect or not well warranted based on the data presented, and so on. In addition, these types of journals look for science that moves a field or a “story” forward – rather than being a poorly linked collection of observations (which may be of some use to the scientific community, but do not lead to real new insight).

    I believe that publication in these types of journals is always excellent, regardless of what their impact factor is. Now, regarding the very high-tier journals that you have noted (excepting PNAS, for which I have recently declined to review, noting that I refuse to review for journals that officially have different standards of acceptance for some authors compared to others) – I believe that it is worthwhile trying to “market” or “sell” a body of work as being of sufficiently broad significance for the general scientific public. However, my perspective is that there is NOT a difference in the quality OR the overall broad interest of papers that ultimately make this cut, compared to those published in the less exclusive, quality-based journals that I mentioned above. My own anecdotal experience with the high-tier vs. the ‘regular high-quality’ journals is that the review process differs very little, both in the demands and expectations of the reviewers and in the overall science. The major difference is the PERCEIVED broad appeal – which is usually extremely subjective, and often judged primarily by professional editors thinking about trends and potential citations.

    In my case, I find this somewhat humorous. Papers that have been rejected by the ‘top-tier’ have often been published elsewhere and gone on to accumulate 100 or 200 citations. This exceeds the average number of citations in the ‘top-tier’ by 5-10 fold. So if a paper was editorially rejected due to perceived lack of general interest, does that mean the editor erred?

    So my advice to a young investigator–at least in the US system (as I realize that the UK may have gone to extremes with these number games)–is to shoot for the top. But in the baseball analogy–if you can’t hit a home-run, getting on base with a solid hit is perfectly respectable.

  3. cromercrox says:

    Your Favourite Weekly Professional Science Magazine Beginning With N (DISCLAIMER – I am an editor at that journal) receives about 11,000 submissions a year of which 7-8% are published.

    In my view, which is purely a feeling, backed up by no evidence I’m aware of, journal impact is as much related to high submission and high rejection rates as it is to citation analysis. Having more submissions means that editors have more choice about what to publish, and will be able to select from more submissions of higher quality. Keeping rejection rates up maintains that quality.

    As an editor who deals in whole-organism biology, I share Simon Leather’s view to some extent. In some fields it takes years to accumulate the citations that papers in other fields accrete in months, but these are field-specific characteristics with little connection to general interest.

    Publication is not a complete lottery if journals and editors have a very clear idea of which kinds of submissions suit the editorial style of the journal. So, a technical paper that will be essential to a group of busy researchers in a particular area, but of limited applicability elsewhere, might not be a candidate for Nature but will attract many citations if published in the appropriate specialist journal.

    • Steve Caplan says:

      With all due respect, it certainly does seem to come out pretty much a lottery–at least in the fields I am familiar with. I think particle physicists, environmental ecologists and paleontologists will generally be unable to discern between the “broad interest” of one basic cell biology/biochemistry/molecular biology paper or another. And I would go further and say that even within more narrow confines, a researcher working on membranes and receptors is unlikely to appreciate (or even read) studies on transcription factors.

      From first-hand experience, when I see a manuscript proposing a mechanism to explain a long-standing general phenomenon in my area of research rejected editorially for “lack of general interest” (with a very general title as well) – and at the same time scroll through papers with titles like (this one made up, but you’ll get the general idea) “MAP kinase kinase kinase phosphorylates protein X on serines 461 and 633 under conditions of mild but not severe hypoxia…” – well, it makes me wonder what broad general appeal is really all about. Spin the roulette wheel, please.

  4. Stephen says:

    Steve’s comment is entirely reasonable but it does not address Sylvia’s concern that her performance is likely to be judged on the impact factors of the journals where she publishes.

    Wellcome’s open access policy explicitly disavows the use of impact factors in the assessment of funding applications, which is a move in the right direction. They are also supposed to be ignored by the REF, although Jenny’s survey (linked to in the post) found that this was not the case — the practice is too ingrained.

    I hope custom and practice might yet be shifted. I was encouraged by this recent editorial in Nature Materials which is now linked to in their instructions to authors. Hopefully more journals will follow suit but there is still a very long way to travel.

  5. You are correct Stephen, I think this is the best way to put it: ‘too ingrained’ – obviously I prove that rule, as this is what I feel is so worrying. While on paper people often say ‘don’t worry’ (which is what I like about Steve C’s comment), I also see that the people getting the best jobs appear to be those with a raft of high-impact papers. But again this is just my observation.

    While I think those working in academia know you can have perfectly respectable and even amazing papers anywhere, I still too often feel that the underlying tone is “Yes, but if you were really good you would have 40 high-impact papers….” Hopefully I am wrong.

  6. Jim Woodgett says:

    There is a fundamental friction between what scientists actually do and the forces that are intent on measuring this productivity (typically funders, but also employers). The nature of science (heh) is that peer-reviewed publication is the primary accepted form of output – even though much impact is not necessarily reflected by this process. Publication is the accepted and generally effective means of communicating science to others in a manner that should allow understanding, evaluation and repetition. However, the sociological nature of science (i.e. that it is conducted by wet, organic sentients living in a competitive and highly uneven opportunity environment) means that there is constant pressure to advertise, push and even subvert the scientific “method” (term used in the general sense). Moreover, while governments of all political stripes appreciate the linkage between scientific activity and good societal outcomes (e.g. economic, high tech, healthcare, etc), they are (as are we) clueless about the mechanistic details of that linkage. Hence, their need for measurement.

    [As an aside, we scientists often take advantage of that mechanistic ignorance in pitching big ideas that wouldn’t be given 2 minutes in a conventional business setting. Hiding behind the magic unknowables of science can be useful.]

    The problem is that these measurements are, at best, lagging indicators and at worst distorted, game-able and inaccurate surrogates. However, there is a reason why journals such as Nature, Science, Cell, etc. are held in esteem which is because, as Sylvia notes, they are very tough to get into. Let’s not generalize by disparaging these “glamour” journals and their contents. Predicting future impact is clearly as hard for editors (even those with a mandate to bounce 9 of 10 things they see) as it is for anyone else. One can argue that their journal impact factors are inflated by their high profile (pulling in more cites because they have a wider audience) but this is precisely why they attract so many submissions in the first place.

    I am not defending JIFs per se (quite the opposite: it’s a terribly flawed and abused “measure”) but it surely captures some reflected truth. The h-index and article-level citations are arguably far better – at least if one accepts the value of citations as a measure of impact – but they lag. This is anathema to the funders/administrators who want to track and predict performance over short periods. JIF does that – for all of its carbuncles. As scientists, we need to educate administrators about the flaws of these measures and defend the basic nature of science. To be effective, it can never be predictable. But we also need to recognize that while home runs are more thrilling than base hits, no baseball manager fires the batter with a decent batting average regardless of the number of homers.

  7. I think Jim Woodgett is far too easy on impact factors. The most basic fact about them, which has been known for at least 15 years now, is that there is no perceptible correlation between the number of citations that a paper gets and the impact factor of the journal in which it appears. Some of Woodgett’s statements seem to contradict that basic observation.

    Getting into a high profile journal is better at predicting the retraction of the paper than it is at predicting how many citations it gets.

    In my experience, getting into Nature is primarily dependent on the trendiness of the field, combined with a great deal of patience. Certainly my best bits of work have appeared in specialist journals; Nature is more useful for brief preliminary notes than for thorough and complete bits of work.

    The pull of Nature etc distorts and harms honest scientific endeavour in my opinion. The very welcome advent of open access provides an opportunity to get rid of this harmful hegemony. I hope that soon we’ll all publish ourselves, perhaps in something like ArXiv. That would have the double advantage of saving a huge amount of money, and of forcing people to read a paper if they wanted to know how good it was. Nature could persist as a news magazine (that part of it is quite good).

    • Jim Woodgett says:

      David, I didn’t say that the impact factor correlates with the citations attracted by any given paper (and am on record that I think JIF sucks as a measure). That is indeed the primary source of misunderstanding and distortion of JIFs by protagonists insofar as it averages the accrued citations within the journal rather than any specific article (which requires lagging article-level metrics). Much sensible analysis has been done on this (as noted on your website and others). There are other issues too but there is also an inconvenient problem that the reputation of a journal correlates rather well with its JIF. Regardless of that as a self-fulfilling justification, impact factors remain, with some exceptions (e.g. see the Acta Crystallographica example here: http://www.nature.com/embor/journal/v14/n3/pdf/embor20139a.pdf), reasonably stable over time. Hence, there must be some inherent signal of attraction of papers that are destined to be highly cited. That the impact factor is often abused and misused by people who do not understand (or ignore) its flaws is an unfortunate fact that I don’t condone. With a dearth of short-term measures in the face of a powerful administrative demand for them, it is hardly surprising that JIFs are leapt upon, flaws and all, the lazy justification being that the level of journal esteem is somewhat in line with its IF.

      What we can do is educate the abusers of impact factors (a strategy that has achieved some success) and also to push back on the inherent inanity of trying to gauge scientific value based on superficial, short term metrics – an enormous challenge.

    • cromercrox says:

      As an editor of Nature I can’t let this go unchallenged. Nature judges a field to be trendy if we hear scientists talking about it and when we get a lot of submissions in it. The trends are set by the field – we, being journalists, are therefore obliged to highlight them. However, we are always looking for new things that aren’t trendy, but which represent genuinely interesting discoveries, and highlight those, too. Sometimes they set a trend (in which case you notice and accuse us of being ‘trendy’) and sometimes they don’t (in which case nobody notices at all). I think you have a very biased, bilious and, if I may say, cynical view of Nature. You also accuse us, or our ‘pull’, of ‘harming honest scientific endeavour’. Are you accusing us of being dishonest, or peddling research we think is fraudulent?

      • I’m not sure that the undoubted advantage of trendy areas helps quality. In the early 1980s, almost every paper with single ion channels in it got into Nature. I know because I published several myself. Some of them, in retrospect, are utterly trivial (indeed even at the time, some were distinctly unexciting). One of them, in 1981, was not too bad, but the full version of that paper, in the Journal of Physiology in 1985, has far more citations.

        It’s part of a more general problem of ‘science by buzzword’. Just look at the large amount of money that Research Councils poured into systems biology. Perhaps, in the future, that will pay off, but it hasn’t yet.

        The question of ‘general interest’ is pure fiction. As pointed out in these comments, almost everything published in every journal is highly specialist. I do wish that Nature etc would recognize that.

  8. Konrad says:

    Most commonly used measures do, in fact, lag – but I just want to point out that there are statistical ways of dealing with the lag: predictive indices.

    http://www.nature.com/nature/journal/v489/n7415/full/489177a.html

    And I think that top journals are pretty hard to game – because everyone is trying to, they are pretty resilient. They also know that if they ignore new trends and only favour already fashionable fields they will lose their standing. Consequently, getting into those journals does actually predict higher future indices.

  9. Konrad says:

    Oh one more point. Pedigree should be given a negative weight. If two scientists are equally prolific, but one has a better supervisor, the one with the poorer supervisor is associated with more positive predictions. There exists political regression to the mean.

  10. Thanks for the thoughtful post, Sylvia.

    What do you think about “publish, then filter” experiments, like the one Mike Eisen is conducting on his blog/lab website? http://www.michaeleisen.org/blog/?p=1304

    • I don’t know entirely, because I do think things need to be peer reviewed – but this is sort of similar to ArXiv, where most of those papers are peer reviewed afterwards. Though it must be said there is a lot of crazy stuff in some of the publish-first places – I guess that goes for proper peer-reviewed journals too, however….

  11. I think all of these things are very complex issues. I don’t know that the h-index is that much better; it is more of an age index – it also sky-rockets when you publish lots of bad stuff. That being said, I guess if you are comparing like with like it kind of works, but even then not really completely.
    I have read some great articles in Nature and Science (not Cell, which is entirely out of my field), so I do have sympathy. I am covetous. I do think reaching a wider audience is very important and Nature/Science seem a pretty good way to do this – even if it doesn’t mean more citations. However, I do think there are many people who do terrific work who never publish in these magazines – some even refuse to. I think the main point of this post, though, was: as a younger academic, CAN you refuse to and not risk your career? It is also just kind of scary, because even just trying often doesn’t work and, as David says, it takes a ton of time.

    • I agree. The h-index is really no more than a vanity project for the elderly. As a way to assess young scientists, it is worse than useless.

      The extent to which it benefits your career to have a letter in Nature is not really known. As far as I know, nobody has tried to look at the question properly. My guess is that it depends rather strongly on where you are applying for a job. The better the lab, the less it will matter. A good lab will read what you’ve done and judge you on that.

      My own experience suggests that Nature papers don’t matter nearly as much as most young people think they do, when it comes to getting a job. But sadly I must admit that this is a bit of a guess.

      It’s mildly encouraging that the REF has banned the use of impact factors for assessment (though many people seem not to believe that they’ll stick to it). It’s also good that they limit the number of papers to four per group, regardless of group size. That rewards long and thorough papers, of the sort rarely found in Nature. It’s a great shame that they couldn’t be dissuaded from the impact nonsense. As everyone else says, you just have to hold your nose and write something.

      • I think you are spot-on with this: ‘they don’t matter nearly as much as young people think they do’ – a lot of the time it’s paranoia, I think. At least I hope so, and I actually think it is a distraction from doing good work to worry about this too much.
        and that can be a problem …

  12. I am not anti-Nature, not at all – I hope my comments didn’t reflect that… I just think this isn’t the only way, and I would not want to be judged on that alone…

  13. @professor_dave says:

    What really matters is the impact of your work, not the impact of the journal it is published in. How many times is it cited? Who uses it to develop new technologies and patents, etc.? In the future, altmetrics will collect article-level information: how many people downloaded it, who tweeted about it, who shared it, whether the media reported on it, etc. The only fly in the ointment is that we still all pay more attention to papers in Nature and Science, even though many of them are nothing special (or even wrong). This is because we like the reference lists on our own papers to be stuffed full of Nature and Science to make our work appear to be in an impactful area. Roll on individual article metrics and the end of the generalized and largely meaningless impact factor.

    • Paul says:

      But even at an article level, highly technical, specialist papers rarely make news stories or rack up huge numbers of citations. However, this is where the real science is – not in the flashy, phenomenological papers that are easier for the reader to digest and are often favoured even by article-level metrics.

      In my opinion, all metrics are flawed. The only way to truly judge the merit of a paper is to actually read it (radical idea, I know!) and assess the quality of its contents that way!

  14. Paul says:

    I think this is all a symptom of how busy people are and the desire to boil the complexities of people’s research and academic careers down to a single (or a small number of) overly simple metrics.

    No one reads your papers anymore to assess their scientific quality; they just look at how many times they’ve been cited – which is a long way from being the same thing. I know from my own papers that their citations don’t correlate with my own view of the novelty and quality of the work within them!

    Particularly when you’re early career, getting papers in high-impact journals can also be down to the luck of the reputation of the more senior colleagues you are publishing with, as well as other non-scientific factors.

    The fact that the h-index varies with time makes it a poor indicator of success; it’s linked more to seniority and disfavours younger academics. There’s no simple fix. Even looking at dH/dt is dependent on career length, as I would expect an academic’s h-index to follow a sigmoidal trend.

    We can sit and complain about these things, but in truth we just have to get on with it and play the game by the rules as they are currently written – no matter how unscientifically sound these rules are when scrutinized.

    • Last time I read about h-indices, it became clear that mine would carry on going up even if I never published anything else ever again, simply because my papers already in the literature would rack up a few more citations over the next few years.

      I reckon a ‘quality index’ that would keep on rising even if you were inactive or retired (or even dead) has certain rather obvious flaws as a way of ranking people for quality.

      PS And yes, since you ask, I have worked out my own h-index, I’m weak. I couldn’t stop myself.

      • You can just look it up, I think, on Google Scholar or your favourite search engine. I know this actually because I keep having to write mine in grants – and people ask me for it – so I just have it to hand (sad). I also know that if one more person cited one of my papers one more time, my entire h-index would go up a whole point. I don’t want to know stuff like this….

  15. Here is an example of what happens when you use phony metrics
    http://t.co/VHVqRVVvGv

    They are a direct incentive to dishonesty.

    • That is a bit extreme and, I would argue, doesn’t happen in most cases… There have always been people who do this – fabricate data and perpetrate fraud. They say that fraud is on the rise, but I don’t know if I believe that; I just think maybe there are more people doing more research, so the number of fraudsters increases proportionally (sadly)…

      I think the more likely scenario is that the more pressure there is, the sloppier people will be – which is not necessarily connected to premeditated fraud, which I think is a different kettle of fish.

      I think it’s much more likely for people to skip important details, leading to false conclusions, than to just invent data, which I think is relatively rare….

  16. Jim Woodgett says:

    So we’re agreed: various metrics are flawed (I certainly don’t look forward to a world where the number of retweets or Facebook likes is used as a surrogate for scientific impact – except on impressionable, over-caffeinated kids). It’s easy to critique the various metrics, but they exist because of a demand – not from scientists but from funders and administrators. These agents are not even remotely likely to wish to read the papers or interview the authors. Further, they come up with ever more creative ways to combine and popularize metrics. So we either come up with a better means for them to assess quality/value/whatever, or we point out the inherent flaws and ignore them. I fear the latter is at our peril, because they have bigger guns and more time.

  17. Spiny Norman says:

    “I think you have a very biased, bilious and if I may say cynical view of Nature.”

    Well, that’s, just, like, your opinion, man*.

    “You also accuse us, or our ‘pull’ of ‘harming honest scientific endeavour’. Are you accusing us of being dishonest, or peddling research we think is fraudulent?”

    No. We’re accusing you of distorting the evaluation of experiments and ideas through a misplaced emphasis on impact and perceived novelty. We’re accusing you of harming scholarship through artificial page limits, dozens or scores of panels of buried Supplementary Information, and minuscule bibliographies that force under-citation of relevant prior art. We are accusing you of slowing progress across many fields through artificial bullshit embargo policies, and we are accusing you of preferential treatment of well-known or well-pedigreed investigators. We are accusing you of holding back publication of papers, often for a year or more, while requesting vast amounts of new and often pointless experimentation.

    Science would be a better enterprise overnight if the “big three” journals suddenly ceased to exist. You are harming the most important endeavor ever launched by humanity.

    J’accuse.

    *http://www.youtube.com/watch?v=pWdd6_ZxX8c
