As part of my university’s preparation for the transition to open access, there is a project being run out of the University Library to look at how academics approach publishing. (Fellow OT blogger Stephen Curry has written much on the topic of open access if you’ve somehow not caught up with the topic.) As part of that project I was interviewed on my publishing ‘strategy’. In so far as I have a conscious one, I realise how much the way I have been wont to approach writing papers may be at odds with the sort of pressures under which someone like the early career Sylvia McLain (another of the OT stable of bloggers) is clearly reeling. That, I suspect, is because I was introduced to paper-writing in an earlier, gentler era and I have not felt it necessary (at least to date) to reconsider my methods.
I do think there are substantial differences in approach between disciplines, possibly even between sub-disciplines. My interactions with biologists suggest they are far more likely than physicists to want to hold back any publication until they can conceive of sending it to a journal such as Cell or Science, rather than writing up a nice piece of work sooner rather than later and sending it to some decent (if not stellar) journal as soon as there is a good story to tell. Even so, it seems that biologists regularly expect the paper to come back from the editor with a request for further experiments. That was unheard of in physics in the past – or at least I’ve never personally known it to happen as an author, referee or editor. Referees, in my experience, are keen to ensure there is clarity, that the conclusions follow from the results, and to check there are no gaping holes or flaws. But if there are, the paper is rejected, not sent back for further experiments to be done. If that were done in my field I believe it would be regarded as a new submission.
Another relatively new tendency, or at least one that has become much more common, is to argue with the editor. Editors seem increasingly prone to take one negative report as dominant over, say, two other positive ones and recommend rejection outright rather than give the author a chance to respond. That also seems to be a change, but one that simply encourages argument. I suspect that this is simply a manifestation of editors being busy people and taking the easy way out; then they wait to see how angry this makes the authors. Those who argue may well win, but if you are junior/insecure/less assertive maybe you’ll just go away with your tail between your legs. I suspect editors used to be more willing, or possibly had more time, to carry out their own personal arbitration between referees’ responses, and this situation was less likely to occur.
Which takes me on to impact factors. I started this post before Stephen Curry’s most recent post. Impact factors featured prominently on his blog last summer and again today (do read this latest post of his and join his push-back). They are clearly pernicious and again, I have been able to indulge in the luxury of not caring. Initially, because such a number did not exist; more recently because I am not seeking tenure, promotion or anything similar. I would like to continue to indulge in that luxury, but I now realise I may be doing my students a disservice by taking that attitude. Or, on the other hand, I may be teaching them the value of caring for the science itself, and not some spurious metric that seems to have been originally dreamed up in order to amplify a journal’s prestige (and hence the number of library subscriptions). A past editorial from PNAS (which recently resurfaced through Twitter) pointed out
‘too many of our postdocs believe that getting a paper into a prestigious journal is more important to their career than doing the science itself’
– clearly an unhealthy state of affairs. So it is up to us, particularly the more senior, not only to push RCUK to state categorically that impact factors will not be used in the assessment of researchers in the future, as Stephen urges and as is the stated case in the current REF guidelines, but also to reinforce that position any time we find ourselves in a position of ‘judging’ others: appointments, promotions and so on. We should not use impact factors as a spurious proxy for judging the science itself.
The consequences of actions around publishing are full of such spurious metrics, spurious because they imply a precision that is sadly lacking. Impact factors are just one such; h factors are another. Even the order in which authors’ names appear on papers can be a minefield. These are all of dubious value because, in so many evaluation committees – wherever they may appear but certainly including the REF, appointments and promotions committees – the evaluators are too often lumping together individuals from different areas whose cultures may be significantly different. So, one field (say synthetic chemistry) may publish a huge number of short letters describing innovative methodologies to produce 0.1mg of a new substance which one day might turn out to be the next big thing. Another may expect lengthy papers which could only be produced by a huge team working for several years (perhaps because a new transgenic organism needs to be produced for their experiments and several generations of this are required). Comparing sheer number of publications is meaningless, and I would contend the same of using the h factor in a simplistic way. Yet I sit in committees and hear people solemnly say that X has an h factor of only 4 but Y has an h factor of 27; for a serious pure mathematician I am told an h factor of 4 is not necessarily unusual, whereas it would be extraordinary in synthetic chemistry. Too often the conversation ignores these significant differences, which can be appreciable even within different domains within a single subject.
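For readers unfamiliar with the metric: the h factor (h-index) is simply the largest number h such that an author has h papers each cited at least h times – which is exactly why it rewards a steady stream of moderately cited papers and tells you little about a field where a few deep papers are the norm. A minimal sketch of the calculation (the citation counts here are invented for illustration):

```python
def h_index(citations):
    """Return the h-index: the largest h such that the author has
    at least h papers with h or more citations each."""
    counts = sorted(citations, reverse=True)  # most-cited first
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # this paper still clears the bar
        else:
            break  # every later paper has fewer citations than its rank
    return h

# A hypothetical pure mathematician: few papers, and h = 4
print(h_index([10, 8, 5, 4, 3]))  # -> 4

# One highly cited paper cannot raise h on its own
print(h_index([500, 2, 1]))  # -> 2
```

Note how the second example makes the post’s point concrete: a single landmark paper barely moves the number, so comparing raw h values across fields with different publication cultures is close to meaningless.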
I have frequently argued against the slavish use of these crude measures at those committees I attend, although of course a quick look may sometimes be in order to confirm an otherwise subjective evaluation. I urge all of you who likewise participate in committees within or beyond your institutions to take a similarly robust view. I get fed up with my physicist colleagues who have, for many years and long before impact factors raised their ugly heads, stated so-and-so cannot be a serious applicant for a job/promotion or whatever if they’ve never published a PRL (Physical Review Letters, for the uninitiated). I have frequently told them this is silly. I don’t believe this is just sour grapes because I don’t have any papers in the journal to my name; it is because PRL follows fashions and requires a certain style of physics which doesn’t fit everyone’s research. I can say in all honesty I read many submissions to the last RAE where people obviously believed a PRL would, simply by virtue of it appearing in that journal, score highly and who didn’t stop to think how boring or incremental their papers were. I was duly bored not infrequently, and the papers did not necessarily get the score their authors probably naively anticipated.
As I say, I was initiated into publishing in a very different world, where metrics weren’t visible, where everything existed only in hard copy and transmission of information occurred through the old-fashioned post or by sitting long hours in the library poring over a dusty tome; a Xerox machine was about as hi-tech as it got. Sounding like an old fogey, I have to say that this lack of instantaneous information, whilst it certainly didn’t facilitate research, collaboration or the gleaning of new knowledge rapidly, did mean (I suspect) that one judged a paper by its content, not the number of citations or the impact factor of the journal in which it appeared.
On the very same day I was interviewed about my publishing strategies, I also attended a dinner discussion, led by Philip Campbell, Nature’s Editor-in-Chief, about scientific misconduct. As with open access, much has been written (and recommended) about this subject, but there is no doubt in my mind – or in Philip Campbell’s – that the high stakes involved in publishing in the ‘right’ journal (of which his own is likely to be seen as one) can lead to sloppiness and far worse in the behaviour of authors. Once again, the metrics that have insinuated their way into the competitive academic environment encourage sharp practice in the over-anxious. Of course, the possibility of dodgy goings-on has always been there and is perhaps most obvious on a regular basis in the plagiarism that besets essays and other types of submitted student coursework. But high-profile cases like that of Hendrik Schön are in a different league because the fabrication of data can be so extensive. Was that due to pressure within Bell Labs? The commentators seemed to imply as much.
Senior academics should be doing all they can to resist the tendencies they encounter, wherever that may be, to use dubious quantitative data as a simplistic measuring tool. If we don’t, we are guilty of facilitating a culture which will be detrimental to the future of research and the individuals sucked into the whirlpool of impact factors, citations and claims of ‘my h factor is bigger than yours’. We haven’t yet quite reached the position where we are solely judged on such dodgy numbers. Let’s make sure we never do.