In defense of journal hierarchy

Plagued by an unbelievably busy schedule, I have been a mostly passive follower of the excellent dialog that has resulted from several outstanding blogs on the peer review system, many of them “high impact blogs” by my esteemed colleague, Dr. Stephen Curry.

Just this week, after an extremely concerted and exciting process, my laboratory has submitted a manuscript spearheaded by a senior graduate student to one of those “high impact journals.” Now, granted, we are currently in limbo, and the manuscript may or may not even be sent out for review, but this of course raises the question as to why we chose to do so. Why not merely submit the work to an open-access journal that accepts solid and controlled science?

One might argue that it’s all vanity; the fame and the glory of having one’s name affiliated with high-tier/highly respected journals is a major motivator. I won’t deny that this is a part of it–there is an element of competition that clearly serves as part of the driving force. But is it more than that?

I would argue that it is. As I briefly alluded to in the comments on one of Stephen Curry’s recent blogs, it’s not the impact factor per se. In my field, there truly is a major difference between papers published in journals of differing tiers. For example, at the bottom of the scale, in some of the lowest-ranking (unheard-of) journals, the quality of the science is suspect, and it’s not always clear whether a researcher can see an abstract on PubMed and actually believe the conclusion(s) or repeat them in her/his own lab. Occasionally, outstanding papers can be found in such journals. Frequently, the papers are a mixture of controlled and uncontrolled experiments, making it difficult to sort the wheat from the chaff.

A cut above these journals are ones that publish papers containing absolutely rock-solid experiments. There is no concern about the ‘repeatability’ or accuracy of the work done. On the other hand, the researchers who have done the experiments have not necessarily chosen a sequence of experiments that sheds much new light on a problem. Sometimes the researchers are skirting the difficult questions, which are harder to answer. Other times, they are propelled in a certain (and not necessarily beneficial) direction by inertia, or by the experiments they are technically able to carry out. For these reasons, the papers in such journals (generally speaking, of course), while depicting accurate scientific experiments, don’t necessarily provide a lot of helpful information to scientists in the field. They should be published, because that information may be useful to other researchers, but their publication in a journal of lesser repute than the higher-end journals marks them as reading that might be less essential. This is especially true in an age when scientists find it difficult to keep up, even within the narrow confines of their own fields.

At the other end of the spectrum are the most respected journals–those that showcase rock-solid experiments, but usually in the context of a model that sheds new light on a process or mechanism. Such papers are a must for those in the field to read, and allow other researchers to leapfrog forward and move beyond our current understanding of the science.

To be fair, I think that there are actually two levels of such journals: 1) journals that will accept such papers irrespective of their perceived ‘impact’; and 2) journals that accept such papers only if convinced that they have potentially ‘high impact for a broad reading audience.’

In my laboratory, I insist that my co-workers aim for these latter two types of journals, whose names are familiar to all of us in the field. I freely admit that acceptance into the second type of journal can be extremely arbitrary, and in many cases can depend on ‘professional editors’ and whether they deem the findings of broad enough significance for their reader audiences. This, of course, is laughable; just look at some of the titles in these top-tier journals: “The dephosphorylation of serines 653 and 497 of protein XXX leads to its nuclear retention and deactivation of transcription factor YYY in a GTP-dependent manner.” Now that’s a made-up title, merely intended to illustrate that with today’s level of scientific specialization, nobody outside the field will read it. They may read a “News and Views”-style explanation, but certainly will not have the time to read the paper itself.

So why do we even bother aiming for these very-high-caliber journals? That is the better question: distinguishing between the high and very high tiers, rather than between open access and high tier. Here the answer lies in the system–one that is not possible to fight alone. As much as I have, for the most part, staked my career on excellent journals (but not the ultra-high-tier ones), the ultimate respect–translated practically into grant funding, etc.–comes from also having a few of the “type-2” high-tier journal publications. This is despite my personal view that the average paper published in these journals is not necessarily any better than those in the journals we more frequently publish in.

So we await judgment in limbo, for now.

About Steve Caplan

I am a Professor of Biochemistry and Molecular Biology at the University of Nebraska Medical Center in Omaha, Nebraska, where I mentor a group of students, postdoctoral fellows and researchers working on endocytic protein trafficking. My first lablit novel, "Matter Over Mind," is about a biomedical researcher seeking tenure and struggling to overcome the consequences of growing up with a parent suffering from bipolar disorder. Lablit novel #2, "Welcome Home, Sir," published by Anaphora Literary Press, deals with a hypochondriac principal investigator whose service in the army and post-traumatic stress disorder actually prepare him well for academic, but not personal, success. Novel #3, "A Degree of Betrayal," is an academic murder mystery. "Saving One" is my most recent novel, set at the National Institutes of Health. Now in press: Today's Curiosity is Tomorrow's Cure: The Case for Basic Biomedical Research (CRC Press, 2021). https://www.amazon.com/kindle-dbs/entity/author/B006CSULBW? All views expressed are my own, of course--after all, I hate advertising.

32 Responses to In defense of journal hierarchy

  1. DrugMonkey says:

    “rock-solid” in Glamour Mags? This is a joke, right? Hello RetractionWatch. Also, see “inferential statistics” and “identified error bars”.

  2. Pep Pàmies says:

    Having talked to many scientists in a variety of fields, it seems to me that what this post argues is a really common view.

    I wanted to point out that for some top-tier journals, manuscript editors look for a remarkably high degree of advance over published work more than for broad appeal (in fact, in many cases a remarkable degree of advance would make the paper of interest to a broader audience). The degree of advance can come in various flavours (conceptual, fundamental, methodological and/or practical). Assessing the combined degree of advance from each of these flavours is in my view the best proxy for a manuscript’s potential impact.

    Disclaimer: I am a professional editor at Nature Materials.

    • Steve Caplan says:

      Thank you for pointing that out–a high degree of advance is certainly a justifiable rationale (although sometimes subjective)–even from a journal editor…

      • Pep Pàmies says:

        Yes, it is subjective, but only for a relatively small percentage of submissions. In my experience, for Nature Materials about 80-90% of papers are clearly below or above the bar that editors and reviewers have set for the journal, and therefore I reckon that the majority of readers of the journal would agree with the decision. The editors’ experience goes into minimizing the degree of subjectivity in the decisions for the remaining 20% or so of manuscripts.

  3. Jim Woodgett says:

    I’m not a fan of JIFs for all of the reasons elaborated by Stephen Curry, but there is a reason most scientists have a good sense of the relative JIFs in their field. When there are 3,000-odd biomedical journals, who can scan them all? We look at journal reputation as a means to filter the material, especially material that is not directly within our own comfort zone. There is too much me-too work being published with incremental findings. Of course, experimental reproducibility is important, but not as an end in itself. Instead, we build on the findings of others and reference them. That is how reputations are built. Over time, the publishing unit has become the only form of currency we value and, like bankrupt governments the world over, we think we can just print more papers as a way to prop up our productivity. Instead, everything is diluted and we migrate to gold (and silver) standards.

    Of course there are faults with undue attention being paid to the highest “impact” factor journals (especially in promotion and recruitment), but let’s recognize that these journals do represent an inherent and understandable desire by scientists to publish to the widest audience. If these celebrity journals are so offensive, then why do we keep hypocritically sneaking peeks at the checkout?

    Doctor, heal thyself…. (guilty as charged).

  4. Stephen says:

    I think Pep is correct in identifying this as a common view. But it still troubles me.

    Steve, what gives the work of your group value is the trouble you take to ask important questions and to follow them through with artful and rigorous experiment. Its importance is within the paper, not in the journal. I imagine that when it is presented at conferences before publication, no-one in the expert audience has any difficulty identifying its quality. They don’t need to glance at the cover of the journal to determine that.

    What you have said implicitly rather than explicitly is that your funding, your career and those of your co-workers are more dependent than any of us would like on the name of the journal in which the work is published — an easy proxy for the impact factor. I certainly don’t blame you or anyone for that. I’ve done it myself. But the whole point of my post was to question this system. If we all continue to play the game, we will be bleating just as bitterly about the iniquitous effects of impact factors in 20 years’ time. I feel that only by breaking the stranglehold of the impact factor — by thoroughly discrediting it — can we escape to a culture where the work is judged properly, rather than lazily on the basis of false measures. Difficult to achieve, I know, but I’m sure you would succeed just as well in such a world.

    • Pep Pàmies says:

      I agree with Steve that the impact of a paper should be judged by its content and not by the cover of the journal it appears in. However, we all have only so much time.

      Given that on average journals with a higher IF are more widely read than journals with a lower IF (because on average high-IF journals publish papers that end up having more impact than those published by low-IF journals), it is only natural that researchers want to send their best work to journals with a high IF, because the paper will have a higher chance of being noticed earlier by a larger number of peers. This is mostly a consequence of too many papers being published and too little time to keep up with them. A journal’s name is a quality signal that is helpful (alongside many other indicators) when deciding what to read, and what to read first.

      Nevertheless, I always argue that no matter where a piece of work is published, if the work is of high importance it will get noticed. However, publishing in well-read journals increases the chances of the paper being noticed by many and earlier. In general, the higher the prestige of the journal, the larger the exposure of the papers it publishes.

      • Stephen says:

        You are almost on the point of contradicting yourself in your final paragraph…

        I do wonder, given the rise of social media (e.g. the increasing numbers of subscribers to services like Mendeley), whether work really needs to be published in the ‘best’ journals, to have the best chance of being noticed. If communities are talking about it, a paper will get noticed.

        Also a journal’s name is an unreliable signal of a paper’s quality. But we’ve been down that route…

        • Steve Caplan says:

          Stephen,

          While you are absolutely correct that a journal’s name says nothing about an individual paper’s quality, it does say something about the statistical likelihood that a given paper in that journal is of quality. We can all cite cases of terrible papers in high tier journals, and occasionally, vice versa. However, it’s clear that in higher tier journals the likelihood of a paper being of better quality (more advances, better controlled, more complete and so on) is much greater. As pointed out above (especially by Jim), with so many journals out there, I can’t afford to waste time weeding through the lower-echelon journals in search of pearls. Most researchers can’t handle this. At the same time, I’m not implying by any means that an impact factor of 5 or 8 or 12 necessarily makes any difference. But I am saying that there are journals that value quality and have a reputation for generally publishing good papers. Those are the journals I read, and those are the journals I try to publish in.

          On the lighter side, as Woody Allen once said “I wouldn’t join any club that would have me as a member.”

          • Jim Woodgett says:

            I think you’ve hit the nail on the head. We don’t actually discriminate among JIFs below a certain level. There are the 40 or so primary journals above 15, but there is a whole slew of excellent journals between 5 and 15. There is clearly some level of elitism and desire to publish in those top 40, but the level of distinction between the members of the next “tier” is much lower. While we are, as a whole, taking JIFs too literally, I don’t think most people have a problem identifying the most appropriate journals for their work, and they certainly don’t bother with decimal points. The abuse is really at the level of promotion and assessment committees who are too lazy to read the actual contributions and instead prefer to judge vicariously through the artificial impact factor. This is an egregious flaw that will, unless corrected, have a harmful influence on how science evolves.

            We also know that there are major confounders to JIF. The more reviews that are published, the higher the JIF (which is why I used the term “primary” along with the highest-tier 40). We separate reviews from primary papers on our CVs for good reason. So why do we conflate JIF with the best science?

            I use metrics to help in evaluating my colleagues, but not as short-cuts for reading their work, which is the main route to understanding their contributions. Metrics are a tool and one facet of data. They are, on their own, to misquote Douglas Adams, “as insightful as 42”.

          • Zen Faulkes says:

            “While you are absolutely correct that a journal’s name says nothing about an individual paper’s quality, it does say something about the statistical likelihood that a given paper in that journal is of quality.”

            Given that impact factor is more tightly correlated with retraction than citation, it’s not clear to me whether journal name in practice signals papers with high quality or low.

            “I can’t afford to waste time weeding through the lower echelon journals in search of pearls. Most researchers can’t handle this.”

            Indeed. This is why I use technology to do things like send me alerts about journal articles based on keywords. If anyone reading this uses the “Who has the time?” argument but has not looked at, say, Google Scholar Updates, then we’re not on common ground for a discussion about how long searching the literature takes, and the relative usefulness of journals in making those searches.

          • Steve wrote:
            “While you are absolutely correct that a journal’s name says nothing about an individual paper’s quality, it does say something about the statistical likelihood that a given paper in that journal is of quality. We can all cite cases of terrible papers in high tier journals, and occasionally, vice versa. However, it’s clear that in higher tier journals the likelihood of a paper being of better quality (more advances, better controlled, more complete and so on) is much greater.”

            I’d like to see some evidence to back that assertion up. We’re in the process of writing a review article on exactly that topic, and the data in the peer-reviewed literature that we have been able to find seem to suggest that this notion of higher-quality papers being published in high-rank journals is a figment of our imagination that doesn’t hold up to scientific scrutiny:
            https://docs.google.com/document/d/1VF_jAcDyxdxqH9QHMJX9g4JH5L4R-9r6VSjc7Gwb8ig/edit

            Do you have any citable evidence for your claim, or are you merely stating your subjective impression?

          • Steve Caplan says:

            I have never bothered to look for any statistical evidence; being in my field for 20 years, sitting on 3 editorial boards and reading thousands of papers, I do not need anything further.
            To be sure, I am not saying that a specific impact factor number is the important measurement. In my field there are journals that go up and down. But everyone knows the quality associated with those journals and no one cares if a journal is a “9” one year and a “5” the following year.

            The point is that there are, in my subjective opinion (and joined by everyone in my field) distinctions between at least 4 levels of journals (4 to simplify):
            1) Lowest-echelon journals, whose scientific validity is sometimes suspect. I have been asked to review for some of these journals in the past, and since many earn money by the number of publications, they are reluctant to reject papers no matter how poor the science is.
            2) Low-echelon journals. Papers can be hit or miss, but rarely advance the field much. Often papers end up here after having been rejected by several other, more highly respected journals. Even if the experiments are well controlled and well done, commonly they are collections of data that don’t advance/support/negate a scientific model.
            3) Respected journals. In my view 8/10 papers are very well done and advance the field. Occasionally undeserving papers sneak through, but this is generally rare. The papers usually propose or support a given model and are conceptual advances
            4) The top-tier journals. There isn’t always a clear break between these and the respected journals. Here it is more subjective, concerning the advance or breakthrough that the paper may or may not make. I would be less inclined to bet that there is a major statistical difference between these latter two categories.

            Perhaps for your review you might want to consider looking at the publications of Nobel prize winners in biomedicine over the last 30 years. I’m willing to bet that the important papers by these researchers were all published in the latter two categories that I noted.

          • Steve wrote:
            “I have never bothered to look for any statistical evidence; being in my field for 20 years, sitting on 3 editorial boards and reading thousands of papers, I do not need anything further.”

            To be fair, I shared your exact sentiment until I saw the data. People have been searching for evidence of this subjective notion for decades, but the data always indicated otherwise. Science is full of falsifications of beliefs that “do not need anything further”. At what point would you be swayed by the evidence?

            For instance, if indeed the tiny number of papers making up Nobel prizes were only published in the top 1% of all journals (which the data suggest is highly unlikely!), you’d still have to read about 300 journals (given that there are currently 31,000 scholarly journals), together publishing approx. 20k articles per year, or you’d miss some of them. Clearly, even if you were right, such a system would at best be useless. Given the incentives provided by such a system, the best case (useless) becomes unattainable.

            So again, if we did this test as you suggest and it turned out the way you say, it would not change the conclusion. Moreover, all the remaining 99.99% of science (the “chaff”?) would still be more or less equally distributed across all journals, so even if journal rank worked as a filter for Nobel-caliber work, it would still be useless in total.

            Do you have a test that hasn’t been done before that we can propose? After all, the reviewers called our conclusions, that journal rank is useless at best, “nothing new”. Which kind of data would make you start to reconsider your subjective notion, if it existed?

          • Ian Borthwick says:

            Excellent post, Steve – I think the club membership quote was Groucho Marx.

          • Steve Caplan says:

            Ian,

            You are correct–Groucho it was. Woody Allen borrowed the line…

        • Pep Pàmies says:

          “Also a journal’s name is an unreliable signal of a paper’s quality. But we’ve been down that route…”

          Yes, but I was not referring to a single paper, but to a large set of papers. The best papers in a certain field are more likely to be published in high-IF journals than not. Journal names carry a quality signal (call it reputation if you want).

          I fully agree with the latest comment by Steve.

    • Stephenemoss says:

      Stephen, you rightly say that “what gives the work of your group value is the trouble you take to ask important questions and to follow them through with artful and rigorous experiment. Its importance is within the paper, not in the journal”.

      How true. In which case, why did Steve, in the opening to his blog, decide to send his paper to one of the elite journals? I don’t agree that the reason is that those journals give your publication the stamp of ‘quality’ – as we’ve discussed, there are many high-quality publications in lesser journals, and no shortage of turkeys in the top journals. And let’s not delude ourselves that it is the advent of JIFs that has led to the current hierarchical classification of journals. My first dozen or so papers came out at a time before JIFs, but we all knew which were the most prestigious journals, and we all wanted to publish in them.

      I believe the reason we persist in this, and the reason why Steve didn’t send his recent work to PLoS, is that we are innately competitive, and happily so, and that we enjoy the challenge of chasing what can be a particularly elusive goal. In this respect the scientist is probably much like the mountaineer who craves the most difficult peaks, or the athlete striving for a new world record. And just like the athlete or mountaineer, we know that success in our publication endeavours will result in the approbation of our peers – which in science means promotions, grant funding etc. So I would agree with what Steve calls ‘fame and glory’ as being the major motivator, and suggest that JIFs, or more noble or cerebral considerations, are almost irrelevant.

  5. Steve Caplan says:

    In answer to Zen Faulkes:

    I wrote:
    “While you are absolutely correct that a journal’s name says nothing about an individual paper’s quality, it does say something about the statistical likelihood that a given paper in that journal is of quality.”

    To which you replied:
    Given that impact factor is more tightly correlated with retraction than citation, it’s not clear to me whether journal name in practice signals papers with high quality or low.

    Please take note that:
    1) I did not mention anything about ‘impact factor.’ I specifically commented that it’s not the IF, but the reputation of the journal for those in the field. Stephen Moss notes above that the selection and targeting of “high tier journals” by researchers for publication predates impact factors.
    2) Even if I had mentioned impact factor, “retraction factors” mean even less than impact factors. After all, who would bother retracting a paper that is barely noticed?

    As for Google Scholar updates and database searches–that is not the issue that scientists in my field are forced to deal with. I am not opposed to a dialog about the system currently in place (as Stephen Curry suggests) and whether it is the optimal system. However, at present I think there is little argument that in my field (and many others) researchers will strive to publish in known and respected journals, so the vast majority of important advances in the field will be published in such recognized journals. Such journals include, of course, PLoS ONE (on which I serve as an academic editor). However, there are a large number of journals that I simply ignore, because the overall quality of the average paper published makes it unlikely that I would be spending my time wisely reading such a journal.

  6. cromercrox says:

    Stephenemoss wrote

    My first dozen or so papers came out at a time before JIFs, but we all knew which were the most prestigious journals, and we all wanted to publish in them

    which was very much the point I wanted to make. People in any given field know which the ‘best’ journals are to publish in, irrespective of IFs. But what does ‘best’ mean? Most rigorous? Most suitable? Most fashionable?

    Journals wax and wane. New ones are published, old ones change or become extinct. A hundred years ago there was no Cell, but there was the Quarterly Journal of Microscopical Science, which published good work that’s still of value today. But the Q. J. either changed its spots, or went out of fashion, or both.

    There is a societal aspect. Some journals clearly have a bias towards certain styles of paper, perhaps dictated by the tastes of the editors or the professional editorial board. Cell under Ben Lewin had a definite style, as did Nature under Maddox or Science under Koshland. It could be said that the ‘personality’ of the editors contributed to the success of the journals. This is not to say that it was ‘fair’, but journals are like anything else – like newspapers, for example, each with its own slant and audience.

    Why should journals form a hierarchy at all? It is clear that the determining factor is selection, the criteria determined by professional editors, referees and so on. Advocates of transparency and open access would rather that there were no journals as such, just a single tier of papers all on the same level and considered purely on their merits. This is a laudable aim, but there’s an unspoken coda to this. The ‘merits’ of a paper aren’t plain and simple facts, or reproducibility, or anything that can be quantified, but also include such imponderables as style, fashion and so on.

    (Disclaimer – I am an editor at Nature)

    • Jim Woodgett says:

      Excellent point about pre-JIF reputations. As mentioned previously, the problem is not so much the journals themselves (although Nature trumpets its JIF at the same time as condoning the factor) but the abuse of JIFs by evaluators who determine career progression, etc. Nor do I think style is an important factor (although the layout designers will no doubt disagree), and some lowly journals look just fine (at least in layout). There is associative merit that many covet, but there is also the effect of front-line attention. A headline in the NY Times is picked up faster, propagated and thus seen by more people than if it were published in the North York Mirror.

      There is a growing list of useful curation and search tools that look beyond journal titles (F1000, Google Scholar updates, etc.) but there is personal reputation as an important factor. I have several autobot searches that cover aspects of my fields of interest. The ones that rely on keywords are the most inaccurate and profligate when it comes to quality. Those that give me a heads-up when a certain author has published are much more reliable (for me). You can imagine the relative numbers as well… Of course the latter approach misses new investigators, but they quickly get added. Maybe we need a trust metric?

      • cromercrox says:

        Just to clarify, I didn’t mean style to mean the physical look of a journal, but something more imponderable, more of an editorial style. We journal editors do our best to be as fair as we can, but we are only human. People seem to have a schizophrenic attitude to professional editors – either we are seen as omniscient, or we are seen as irrelevant. Yes, there are all sorts of IFs beyond the usual IF, including single-article metrics (which still count up the numbers, even if the article is cited only to be damned), and, yes, there is reputation. But if this isn’t measured by something like the h statistic, how else should it be measured? The people who judge the reputations of others are only human, too. So I could probably paraphrase Churchill and say the system we have is the worst possible, except for all the other ones.

        • Jim Woodgett says:

          Yes, you are both pilloried and lauded. We submitters are a fickle bunch. The “top tier” journals are usually looking for more than well-performed science and, to be fair, it can be a lot easier writing a 4-figure paper for Nature than a 10-figure article for Mol Cell Biol. It’s about some weird combination of novelty, surprise, interest and timeliness. Like saying your Dad is the best in the world, it’s hard to be objective about our own sweat and tears.

          Reputations are initially harder to judge but more personal, and often more reliable because of this. However, nothing is a panacea. I do think this is where social networking could have an impact. I must say, although I have met few of them, I have a lot of respect for a number of science bloggers. Understanding people’s perspectives as well as their track records helps set some standards. Of course, people can be great scientists yet hold views that you might not agree with, but that is fine. If we all agreed about everything it would be very boring!

          • Pep Pàmies says:

            I could not agree more with you both.

            Indeed, professional editors exist in great part because of authors’ inherent bias about the importance of – to paraphrase Jim – their sweat and tears. A pair of cool, sufficiently knowledgeable yet detached eyes turns out to be a workable antidote.

  7. …”Why not merely submit the work to an open-access journal that accepts solid and controlled science”…”open access vs high tier”…

    You are making the often-repeated mistake of conflating a publishing model (open access) with journal impact factor. These are two distinct concepts. There are many high-impact open access journals (e.g. PLoS Biology) and very many low-impact closed-access journals. In fact, something like 80% of all STEM journals — the vast majority of which are closed access — have a lower impact factor than the open access journal PLoS ONE. Repeating this misconception is one of the ways in which senior researchers prevent younger researchers from having a choice to catalyze the necessary transition to an open access model for publishing that will happen in the near future.

  8. Stephenemoss says:

    Bjorn – in your comment you raise the important question of ‘need’. Steve C says he does not need impact factors to decide which journal to submit his work to, and as a publishing scientist I completely agree with him. We learn early on in our careers which are the ‘best’ journals, and of course we aim to publish our best work in those journals – so for many scientists impact factor (or other metrics) is irrelevant and unnecessary. So if scientists don’t need such metrics, why are we engaged in these discussions? The reason is that managers and sometimes powerful individuals in certain Universities (and perhaps funding organisations too) who don’t know any better, are basing decisions of promotion, salary, and even redundancy on these deeply flawed tools. I have argued on Steve Curry’s recent megablog that science would work better without metrics, and I am yet to be convinced that searching for more or better ways of metricising journals serves any useful purpose.

  9. Steve Caplan says:

    Bjorn-

    “To be fair, I shared your exact sentiment until I saw the data. People have been searching for evidence of this subjective notion for decades, but the data always indicated otherwise.”

    The problem is the “data” that you refer to.

    As Steve Moss indicated, and as I clearly stated earlier, it’s not the impact numbers themselves that are important, but the quality of the journal in a given field that commands respect from researchers. One of the very first things graduate students learn in ‘Journal Club’ (a meeting where a paper is analyzed for quality and its overall advance of science) is how to judge the value of a given paper. By the time a student graduates, he/she should be fully trained in discerning the scientific value of papers.

    As for Nobel prize winners (and you may argue that this is a subjective award as well–but assuming you agree that there is merit behind the selection): a cursory glance at 4-5 researchers in MY field clearly shows that the vast majority of their published papers fall into the 2 top categories of journals in my field (as opposed to the bottom 2 that I laid out earlier in my response). Does this mean they are all in the top 1% of IF journals? No. That’s because, as I indicated, those well-respected journals in my field can fall anywhere on the IF scale, from 4 or 5 up to the highest IF, whatever that is. But there is a clear distinction between these journals and the lower-echelon ones.

    I don’t really know what statistical ways you can use to test this fairly. Perhaps it is unnecessary, because as Steve Moss notes, it’s the respectability of the journal irrespective of the IF. Overall I agree with that. It’s also possible that in other fields (chemistry, physics, mathematics and so on) things work differently, so that statistics “across the board” won’t be meaningful. In the realms of cell biology and biochemistry, researchers are very competitive and strive to have their work accepted by journals with good reputations–reputations that are not necessarily correlated with yearly IF fluctuations. So the search for a “unifying scientific test”–a ‘quickie’ that will uncover the value of a journal without spending years in the field reading, reviewing and writing papers–may well be impossible.

  10. sorry to be late to this
    but high-impact-factor journals are also (as I see some of you mention above) what our employers want us to publish in. If you can get a paper in Nature in my field, it’s a really good thing. You can be as skeptical as you like, or it can have low citations or whatever – but I can’t see how it would be thought of as anything BUT a good thing. I think the drive sometimes to publish in the big-boy (or girl) journals is the attention it gets you, even if this is only internal. I am an early career researcher, and if I could get a Nature paper with me as the corresponding author I would be really happy and really pleased, and I can’t help thinking it would really, really, really help my career. This is one of the biggest problems I have when discussing this topic: while I am quite cynical about IF and open access in general (as a PI I have one paper in what I would consider a really high-impact journal), I can’t get away from the fact that the big sexy journals still have so much prestige – so why wouldn’t we keep *trying* to publish there? I am not saying I *need* this, but I think it would help – unless my paper was totally crap. But how do we get around this? Most academics on paper know how hard it is to get a high-impact paper and think IF and all that are a bit silly, but we still have the pressure to publish there.
    How do we get around this? I think blogs (especially Stephen Curry’s stuff on IF and how it is weird) help – but do they help institutionally?

    Apologies for rambling
