It’s Time to Resist the Pressure

As part of my university’s preparation for the transition to open access, a project is being run out of the University Library to look at how academics approach publishing. (Fellow OT blogger Stephen Curry has written much on the topic of open access if you’ve somehow not caught up with it.) As part of that project I was interviewed about my publishing ‘strategy’. In so far as I have a conscious one, I realise that the way I have been wont to approach writing papers may be quite at odds with the sort of pressures under which someone like the early-career Sylvia McLain (another of the OT stable of bloggers) is clearly reeling. That, I suspect, is because I was introduced to paper-writing in an earlier, gentler era and have not felt it necessary (at least to date) to reconsider my methods.

I do think there are substantial differences in approach between disciplines, possibly even between sub-disciplines. My interactions with biologists suggest they are far more likely than physicists to hold back a publication until they can conceive of sending it to a journal such as Cell or Science, rather than writing up a nice piece of work sooner rather than later and sending it to a decent (if not stellar) journal as soon as there is a good story to tell. Even so, it seems that biologists regularly expect a paper to come back from the editor with a request for further experiments. That was unheard of in physics in the past – or at least I’ve never personally known it to happen as an author, referee or editor. Referees, in my experience, are keen to ensure there is clarity, that the conclusions follow from the results, and that there are no gaping holes or flaws. But if there are, the paper is rejected, not sent back for further experiments to be done. If that happened in my field, I believe the revised paper would be regarded as a new submission.

Another relatively new tendency, or at least one that has become much more common, is arguing with the editor. Editors seem increasingly prone to treat one negative report as outweighing, say, two positive ones and to recommend rejection outright rather than give the author a chance to respond. That also seems a change, but one that simply encourages argument. I suspect this is a manifestation of editors being busy people and taking the easy way out; they then wait to see how angry this makes the authors. Those who argue may well win, but if you are junior, insecure or less assertive, maybe you’ll just go away with your tail between your legs. I suspect editors used to be more willing, or perhaps had more time, to carry out their own arbitration between referees’ reports, so this situation was less likely to arise.

Which takes me on to impact factors. I started this post before Stephen Curry’s most recent one appeared. Impact factors featured prominently on his blog last summer and again today (do read this latest post of his and join his push-back). They are clearly pernicious and, again, I have been able to indulge in the luxury of not caring: initially because such a number did not exist; more recently because I am not seeking tenure, promotion or anything similar. I would like to continue to indulge in that luxury, but I now realise I may be doing my students a disservice by taking that attitude. Or, on the other hand, I may be teaching them the value of caring for the science itself, and not for some spurious metric that seems to have been originally dreamed up to amplify a journal’s prestige (and hence the number of library subscriptions). A past editorial from PNAS (which recently resurfaced through Twitter) pointed out that

‘too many of our postdocs believe that getting a paper into a prestigious journal is more important to their career than doing the science itself’

– clearly an unhealthy state of affairs. So it is up to us, particularly the more senior, not only to push RCUK to state categorically that impact factors will not be used in the assessment of researchers in the future, as Stephen urges and as is already the stated position in the current REF guidelines, but also to reinforce that stance any time we find ourselves ‘judging’ others: in appointments, promotions and so on. We should not use impact factors as a spurious proxy for judging the science itself.

The business of publishing is riddled with such spurious metrics, spurious because they imply a precision that is sadly lacking. Impact factors are just one such; h factors are another. Even the order in which authors’ names appear on papers can be a minefield. These are all of dubious value because, in so many evaluation committees – wherever they may sit, but certainly including the REF, appointments and promotions committees – the evaluators are too often lumping together individuals from fields whose cultures may differ significantly. One field (say synthetic chemistry) may publish a huge number of short letters describing innovative methodologies to produce 0.1 mg of a new substance which one day might turn out to be the next big thing. Another may expect lengthy papers which could only be produced by a huge team working for several years (perhaps because a new transgenic organism needs to be produced for their experiments and several generations of it are required). Comparing sheer numbers of publications is meaningless, and I would contend the same is true of the simplistic use of the h factor. Yet I sit in committees and hear people solemnly say that X has an h factor of only 4 but Y has an h factor of 27; I am told that for a serious pure mathematician an h factor of 4 is not necessarily unusual, whereas it would be extraordinary in synthetic chemistry. Too often the conversation ignores these significant differences, which can be appreciable even between domains within a single subject.
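For anyone unfamiliar with it, the h factor is simply the largest number h for which an author has h papers each cited at least h times. A minimal sketch of the calculation (with entirely invented citation counts, included only to illustrate the point above, not drawn from any real careers) shows how directly the number tracks a field’s publication volume and citation habits:

```python
# Illustrative only: how an h factor is computed from per-paper citation counts.
# The two citation lists below are hypothetical, chosen to mimic the contrast
# between a pure mathematician and a synthetic chemist described above.

def h_index(citations):
    """Largest h such that at least h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    return sum(1 for rank, cites in enumerate(ranked, start=1) if cites >= rank)

mathematician = [12, 9, 5, 4, 2, 1]                       # few, long papers
chemist = ([60, 45, 40, 38, 35, 30, 28, 25, 22, 20]
           + [15] * 10 + [8] * 15)                        # many short letters

print(h_index(mathematician))  # 4
print(h_index(chemist))        # 15 – reflects publication culture, not quality
```

The gap between 4 and 15 here says nothing about the quality of either body of work; it simply reflects how much each field publishes and cites.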

I have frequently argued against the slavish use of these crude measures at the committees I attend, although of course a quick look may sometimes be in order to confirm an otherwise subjective evaluation. I urge all of you who likewise participate in committees within or beyond your institutions to take a similarly robust view. I get fed up with my physicist colleagues who have, for many years and long before impact factors raised their ugly heads, stated that so-and-so cannot be a serious applicant for a job or promotion if they’ve never published a PRL (Physical Review Letters for the uninitiated). I have frequently told them this is silly. I don’t believe this is just sour grapes because I have no papers in that journal to my name; it is because PRL follows fashions and requires a certain style of physics which doesn’t fit everyone’s research. I can say in all honesty that I read many submissions to the last RAE where people obviously believed a PRL would, simply by virtue of appearing in that journal, score highly, and who didn’t stop to think how boring or incremental their papers were. I was duly bored not infrequently, and the papers did not necessarily get the score their authors probably naively anticipated.

As I say, I was initiated into publishing in a very different world, where metrics weren’t visible, where everything existed only in hard copy, and where information travelled by old-fashioned post or through long hours in the library poring over a dusty tome; a Xerox machine was about as hi-tech as it got. At the risk of sounding like an old fogey, I have to say that this lack of instantaneous information, whilst it certainly didn’t facilitate research, collaboration or the rapid gleaning of new knowledge, did mean (I suspect) that one judged a paper by its content, not by the number of citations or the impact factor of the journal in which it appeared.

On the very same day I was interviewed about my publishing strategy, I also attended a dinner discussion, led by Philip Campbell, Nature’s Editor-in-Chief, about scientific misconduct. As with open access, much has been written (and recommended) about this subject, but there is no doubt in my mind – or in Philip Campbell’s – that the high stakes involved in publishing in the ‘right’ journal (of which his own is likely to be seen as one) can lead to sloppiness and far worse in the behaviour of authors. Once again, the metrics that have insinuated their way into the competitive academic environment encourage sharp practice in the over-anxious. Of course, the possibility of dodgy goings-on has always been there, and is perhaps most regularly visible in the plagiarism that besets essays and other submitted student coursework. But high-profile cases like that of Hendrik Schön are in a different league, because the fabrication of data can be so extensive. Was that due to pressure within Bell Labs? The commentators seemed to imply as much.

Senior academics should be doing all they can to resist the tendency, wherever they encounter it, to use dubious quantitative data as a simplistic measuring tool. If we don’t, we are guilty of facilitating a culture that will be detrimental to the future of research and to the individuals sucked into the whirlpool of impact factors, citations and claims of ‘my h factor is bigger than yours’. We haven’t yet quite reached the position where we are judged solely on such dodgy numbers. Let’s make sure we never do.


16 Responses to It’s Time to Resist the Pressure

  1. Thank you for this excellent post. I’m currently at that transition point of becoming a senior scientist and everything you’ve said rings true. It’s now up to me to resist the very system that elevated me to where I am now. I benefited from playing the impact factor game, pushing for the highest journals, focusing on the proxies of scientific quality rather than true quality. There’s an odd sense of conflict in turning against one’s ‘parent’ but there is no question that we need deep reform.

    Three thoughts after reading your post.

    1) If we don’t rely on metrics of some sort to assess the quality/impact of science then we run into a situation where specialists are the only people who can assess science. Perhaps that’s how it should be, but since this often isn’t the case on REF panels (or grant panels, or any kind of academic panel), what is the solution? How can the inevitable army of non-specialists judge the quality of my work without referring to a statistic?

    2) The problem of metrics doesn’t disappear as a PI. I’m now finding that although it isn’t as important for me to ‘play the game’ like I used to, I now need to be thinking about my PhD students and post-docs. On the one hand I should be teaching them to resist the pressure and become true scientists who value truth over narrative. On the other hand, I find myself asking: why should they be sacrificed on the altar of my ideals, launched from the privileged position of my own job security? By forcing them to do this, won’t I be dealing them out of the very game I want them to win if they are to generate real change in the future? When I recently moved my lab to an open science model, I felt this conflict quite deeply.

    3) We need to do more than simply attack impact factors, though doing so is of course laudable. But I believe we need to fight the culture of journals itself. The value we place on impact factors is a proxy of a deeper obsession with journal prestige. We must work to eliminate the system in which the value of an individual work of science is judged by the venue in which it is published. And the only sure way to do that is to eliminate the hierarchy of ‘venues’. In my opinion, the most helpful initiative that you and other eminent scientists can push for is the simple eradication of academic journals. It may sound like a grandiose aim but Bjorn Brembs (for one) has offered an elegant solution in which university libraries publish science instead of journals: http://bjoern.brembs.net/comment-n835.html

    • Mike says:

      Very well put, Athene. I’ve just left a couple of comments on Stephen’s post that are probably just as relevant here.

      Chris, I think you raise three interesting points that I’d like to discuss further:

      (1) What is the point of asking non-experts to assess the quality of work from outside their field? If they rely (partially/wholly) on flawed statistics, their judgement will be (partially/wholly) flawed. Should the research community accept this? Why do we agree to participate in this (flawed) assessment system?

      (2) You can get round these ethical difficulties by clearly stating your publishing philosophy when recruiting to your lab. If students/post-docs are willing to sacrifice style for substance, you will also be ingraining the next generation with a more appropriate sense of how to do and publish good science in the future.

      (3) There is no inherent problem with IF or any other metric per se. It’s the misuse of these metrics that is problematic. Likewise, I don’t see a major problem with the concept of publishing in journals per se (arguments about profiteering, access etc aside). In fact, it may be preferable to maintain some independence from Universities or other Research Institutions in the publication process to avoid further pressure to publish work of questionable quality.

  2. Thanks for these comments, Athene, which quite clearly outline the issues surrounding current publication metrics. However, I have one observation. I read a lot in the press about the dangers, and even the stupidity, of judging science by these metrics, yet at the same time, within the closed confines of academia, I hear how important publication count, citation numbers, IFs etc. are. I think I am correct in stating that the guidelines say IFs won’t be considered in REF submissions, but I fear there are probably many university departments across the country ranking their staff’s papers in exactly that way.

    Another quick comment/question: do you think the move to open access (morally laudable as it might be) will lead to yet another publication metric, i.e. number of views/downloads?

    • Firstly, part of the point of this post was to encourage those of us who sit on committees in what you describe as the ‘closed confines’ of academia to push back against the blind use of metrics. Of course they aren’t all bad, as Mike says in his point 3) above, but they should be used with caution and a light touch. I think one of the factors that may stop views/downloads becoming a metric is that there will be multiple places (i.e. journal plus institutional repository) from which to get the data, and it isn’t obvious how easy it will be to extract appropriate numbers. No doubt people will try, and possibly it is as relevant as citations.

      • Sorry Athene, the “closed confines” remark was not meant to come across as overly negative about the academic community. I am just concerned that the public comments against IFs from prominent scientists such as yourself do not always reflect what I’ve heard within those confines, although I appreciate that different disciplines approach these metrics in different ways. Good luck with the campaign and I hope it helps change the tide.

  3. I’ve been banging on about this at least since 2003 when, at the invitation of the editor, I wrote (in Nature) The Tyranny of Impact Factors.

    Since then matters have only become worse. Thank heavens there is, at last, a revolution growing against this nonsense. Everyone, especially senior academics, should read Peter Lawrence’s essay on The Mismeasurement of Science. That explains beautifully what has gone wrong.

  4. JJBP says:

    I have started to write papers for publication in biological journals as part of a collaboration, and I too have noticed the predilection of some referees in this field to ask for extra experiments (and indeed to want to see the resulting lab data). This becomes problematic when the lab scientist has graduated or moved on and you have to get someone else to do the work. It can also distort the intention or rationale of the paper, or leave one looking for a way to say tactfully, “if we’d done all that, this would have gone to a better journal”.

    • I remember the second paper I had published was bounced back twice by the reviewers with suggestions for more experiments (but not rejected by the editor). Fortunately there was no issue with equipment access or researchers leaving to complicate matters, but the whole process took about six months. It was frustrating at the time, but in the end it made the paper much stronger, although the core results and conclusions didn’t change through all the iterations. I don’t think it’s necessarily unreasonable to ask the authors to make further measurements if there are clear gaps in the paper, rather than rejecting it outright. Surely that’s what a peer review process should do? On the flip side, if a reviewer asks for a new line of measurements to be made, especially using a piece of kit or a technique possibly not available to the authors, that does seem unreasonable.

  5. Just to follow up, I can confirm that the issue of impact factors etc. being poor and distorting measures, likely to introduce perverse incentives, was aired at the ERC Scientific Council today in our discussions, including with a member of the Commissioner’s team. At the ERC, where excellence has always been the overriding concern and criterion, the dangers of simplistic metrics are certainly well appreciated.
