Plagued with an unbelievably busy schedule, I have been a mostly passive follower of the excellent dialog that has resulted from several outstanding blogs on the peer review system, many of them “high impact blogs” by my esteemed colleague, Dr. Stephen Curry.
Just this week, after an extremely concerted and exciting effort, my laboratory submitted a manuscript spearheaded by a senior graduate student to one of those “high impact journals.” Now, granted, we are currently in limbo, and the manuscript may or may not even be sent out for review, but this of course raises the question of why we chose to do so. Why not simply submit the work to an open-access journal that accepts solid, well-controlled science?
One might argue that it’s all vanity; the fame and glory of having one’s name affiliated with high-tier, highly respected journals is a major motivator. I won’t deny that this is part of it; there is an element of competition that clearly serves as part of the driving force. But is it more than that?
I would argue that it is. As I briefly alluded to in the comments on one of Stephen Curry’s recent blogs, it’s not the impact factor per se. In my field, there truly is a major difference between papers published in journals of differing tiers. For example, at the bottom of the scale, in some of the lowest-ranking (unheard of) journals, the quality of the science is suspect, and it’s not always clear whether a researcher can read an abstract on PubMed and actually believe the conclusion(s) or repeat them in her/his own lab. Occasionally outstanding papers can be found in such journals. More frequently, the papers are a mixture of controlled and uncontrolled experiments, making it difficult to sort the wheat from the chaff.
A cut above these are journals whose papers contain absolutely rock-solid experiments. There is no concern about the ‘repeatability’ or accuracy of the work. On the other hand, the researchers have not necessarily chosen a sequence of experiments that sheds much new light on a problem. Sometimes they are skirting the difficult questions; other times, they are propelled in a certain (and not necessarily beneficial) direction by inertia, or by the experiments they are technically able to carry out. For these reasons, the papers in such journals (generally speaking, of course), while depicting accurate scientific experiments, don’t necessarily provide a lot of helpful information to scientists in the field. They should be published, because that information may be useful to other researchers, but their publication in a journal of less repute marks them as reading that may be less essential. This is especially true in an age when scientists find it difficult to keep up, even within the narrow confines of their own fields.
At the other end of the spectrum are the most respected journals–those that showcase rock-solid experiments, but usually in the context of a model that sheds new light on a process or mechanism. Such papers are a must for those in the field to read, and allow other researchers to leapfrog forward and move beyond our current understanding of the science.
To be fair, I think that there are actually two levels of such journals: 1) journals that will accept such papers irrespective of their perceived ‘impact’; and 2) journals that accept such papers only if convinced that they have potentially ‘high impact for a broad reading audience.’
In my laboratory, I insist that my co-workers aim for these two tiers of journals, whose names are familiar to all of us in the field. I freely admit that acceptance into the latter type of journal can be extremely arbitrary, and in many cases depends on ‘professional editors’ and whether they deem the findings of broad enough significance for their readership. This, of course, is laughable; just look at some of the titles in these top-tier journals: “The dephosphorylation of serines 653 and 497 of protein XXX leads to its nuclear retention and deactivation of transcription factor YYY in a GTP-dependent manner.” That’s a made-up title, intended merely to illustrate that with today’s level of scientific specialization, nobody outside the field will read it. They may read a “News and Views”-style explanation, but they certainly will not have the time to read the paper itself.
So why do we even bother aiming for these very-high-caliber journals? That is really the better question: the distinction between high- and very-high-tier journals, rather than open access versus high tier. Here the answer lies in the system, one that is not possible to fight alone. Although I have, for the most part, staked my career on excellent journals (but not the ultra-high-tier ones), the ultimate respect, translated practically into grant funding and the like, comes from also having a few of the “type-2” high-tier journal publications. This is despite my personal view that the average paper published in these journals is not necessarily any better than those in the journals where we more frequently publish.
So we await judgment in limbo, for now.