Evaluating scientists: take care

A resonant blog post is the gift that keeps on giving. One of the latest comments on my 'Sick of Impact Factors' post, a polemic bemoaning the corrosive effects of journal impact factors on scientific lives, provided a link to a quite wonderful paper.

I missed Alexis Verger’s comment when it dropped into my blog on August 31st. By the time I had caught up with it, a colleague at work had also emailed me the paper — ‘Evaluating how we evaluate‘ by Ronald Vale (PDF)* — suggesting it might be of interest.

It certainly was.

Vale’s paper provides a lucid and considered examination of the measures that we in the scientific community use to assess one another. He makes it clear that on too many counts we are doing a poor job of evaluation.

The first section, on the pernicious effects of journal impact factors, rang many bells with me, of course, but Vale goes beyond this single issue to look at how numbers of publications, lab sizes, scientific collaborations and contributions to training and education are all too often judged numerically, and therefore superficially.

Please read it. Every working scientist should. Vale reaches deep into his subject and touches on problems that many sense but feel unable to resolve, in particular the unbalanced demands of research and teaching and the difficulty of sustaining a scientific career. From my reading it was clear that Vale’s anodyne title and dispassionate style belie strongly held convictions. I can’t do justice to his paper with a summary and in any case prefer not to delay your own reading but, as a taster, here he is on community and education:

“While scholarly achievement and grants sustain the core mission of research institutions, education and community service also are important and creative endeavors; they contribute immensely to the culture of an institution and the future of our profession. These efforts should be respected and deserve more than lip service during a review for academic promotion. Academic evaluation predicated too narrowly on papers and impact factors steers young scientists away from educational/community activities if these activities contribute only minimally to their overall evaluation. This sends the wrong message to young scientists, especially at a stage when many desire both to be altruistic and to advance their careers.”

Vale, finally, is in no doubt where the responsibility lies and I couldn’t agree more.

“As stewards of our profession, academic scientists have a collective responsibility to consider how to disseminate knowledge through publications and how to advance graduate students to postdocs, postdocs to assistant professors, and assistant professors to tenure and beyond. These processes are not out of our hands, predetermined, or immutable.”

*Update (9-9-2012): Thanks to Alexis Verger for pointing out (see comment below) the link to the PDF of the paper at Vale's website. You will also find other examples of his opinion pieces there.

This entry was posted in Scientific Life. Bookmark the permalink.

15 Responses to Evaluating scientists: take care

  1. alexis verger says:

    Apparently you can access the full text directly via Ronald Vale's web site: http://valelab.ucsf.edu/external/editorials.html

  2. Jim Woodgett says:

    It’s important to understand why there has lately been this greater emphasis on metrics. They are not all bad and can be very useful – but primarily as longitudinal comparators. Instead, they are often abused as justification for some relatively arbitrary action or other (e.g. promotion or denial thereof). Science has done pretty well so far in adjudicating and nurturing its community without such tools, but it’s an infectious disease. Job interviews now dictate standard questions and formats. Deviate and it’s “unfair”. Yes, but can’t I ask why they answered that question in that way?

    The pressures to perform and generate acceptable metrics are also inducing very undesirable behaviours. More trainees are tempted to shortcut their experiments, more researchers are tempted to juice their data and more funders are tempted to streamline their review processes. The end seemingly justifies the means (I’ll get a job!) and we’ll be left with cookie-cutter researchers who think and work alike and possibly reduced scruples and rigour.

  3. Jerome says:

    I agree it is up to scientists themselves. But as an early-career researcher, I put the majority of the blame and responsibility on the shoulders of the tenured. This, after all, is why tenure exists.

    Unfortunately, getting tenure means you are pretty wedded to them that brought you – impact factors and publication counts.

    • Stephen says:

      Indeed. The tricky thing is that the current crop of senior researchers (or at least those of my vintage — very late 40s) have played the game and won. For some, the attitude may be ‘it worked for me, so why not for others?’, but I think that needs to be challenged.

      As is clear from Vale’s piece, he lays the blame for the current situation squarely at the feet of the scientific community. I like that he encourages us to embrace the notion of ‘stewardship’ – as outlined in the paragraph that I quoted.

  4. Yes, an excellent article, Stephen. I’d also point people to Peter Lawrence’s eloquent and powerful expositions on these issues and problems, e.g.

    The politics of publication (2003)
    http://www.mrc-lmb.cam.ac.uk/PAL/pdf/politics.pdf

    The mismeasurement of science (2007)
    http://www.mrc-lmb.cam.ac.uk/PAL/pdf/mism_science.pdf

    The heart of research is sick (2011)
    http://www.labtimes.org/labtimes/issues/lt2011/lt02/lt_2011_02_24_31.pdf

    He ends the 2003 article with:

    “It is we older, well-established scientists who have to act to change things. We should make these points on committees for grants and jobs, and should not be so desperate to push our papers into the leading journals. We cannot expect younger scientists to endanger their future by making sacrifices for the common good, at least not before we do.”

    • Stephen says:

      Indeed yes – Vale is ploughing a very similar furrow here to one that has long been worked by Lawrence.

      I’ve referred to Lawrence’s critiques of the current state of research management in earlier posts (including in ‘Sick of Impact Factors’) but it’s good to have the links collected here. I warmly recommend them all to anyone who’s not come across Lawrence before.

    • What about us older, embarrassingly UN-successful scientists….?

      But then, nobody listens to us, either.

      • Stephen says:

        I’d like to think (though it may be wishful thinking) that Vale’s piece might help to redress the balance — or should I say imbalance — of shifts in funding policies of the Wellcome Trust and the views of people like Paul Nurse. Vale himself is a highly successful scientist, awarded the prestigious Lasker prize only yesterday (!). So perhaps he will start to dine at higher tables and be able to promulgate his views.

      • Grant says:

        (Excuse this for being slightly off-topic.)

        I once wrote a piece taking my lead from Lawrence, though on another topic (authorship credits), back in 2002 or thereabouts. Lawrence spoke out about several things, and eloquently, but I have to admit I best remember his remark about having to write his first grant application after 40 (?) years in scientific research!

        Good to see the links gathered too. Must try to find time to read them – I suspect I’ve missed one or two of those listed.

  5. Steve Caplan says:

    Stephen,

    While I still think there is significance in having a hierarchy of journals, I agree with everything else you say, including the invalid use of IF for evaluating career progression. Just a note that while scientists strive for new ways of quantifying and measuring EVERYTHING in a so-called objective manner, I see a discouraging trend of using poor metrics for a wide variety of issues that cannot be simply quantified. In cell biology, there is a growing and generally positive trend to measure and quantify microscopic events rather than depict representative images. Overall this is the right way to go. But any cell biologist knows that there are many situations of subtle changes within the cell that simply cannot be measured in pixels and graphed — that does not, of course, make their existence insignificant.

    • Stephen says:

      I appreciate that you have felt able to put the counter-view on JIFs here, Steve. And that, though we may not see eye to eye on that particular issue, there is clearly much else on which we agree.

  6. DrugMonkey says:

    “representative images” are not science, full stop, end of story, Katie bar the door.

    Central tendency, indication of variability. Period.

    Everything else is a pilot study that helps you do the real one.

  7. Very glad to read the section on community and education. I do a lot of outreach and education, and definitely feel that it’s valued by everyone except employers (universities, labs, institutes, observatories). And it’s certainly not taken into account during the job selection process!

    I had actually written a blog piece on this subject just before coming across this post and Vale’s article. Definitely required reading (Vale’s article, I mean, although you’re welcome to read my thoughts on the matter too!)
