This post is a transcript of my opening remarks at a Great Debate held earlier today at the European Geosciences Union 2019 meeting in Vienna. The debate asked us to consider the question: What value should we place on contributions that cannot be easily measured?
Update (13/04/2019): A video of the whole debate is now available online. My opening remarks start at 20:43 but if you have time I would recommend listening to the whole session.
As scientists, measurement is what we do. It is how we have built our disciplines and won the admiration and respect of our peers and the public for the many wondrous ways in which we have illuminated the world.
It is only natural therefore that we would seek to turn our rulers and compasses on ourselves.
But while numbers have done so much to help us understand the natural world, they are far more difficult to apply to the more complex world of human affairs, science included.
So we are here today to debate this question because of the stresses and strains that an over-enthusiastic and ill-considered application of numbers – or metrics – in research evaluation has brought to the academy.
Now… when trying to figure my way out of a problematic situation, I like to think about death. And I would encourage you all to do the same.
This quotation is the first of the top five regrets of people who have reached the end of their lives (according to a book by the former palliative care nurse Bronnie Ware). I suspect it resonates, perhaps a little uncomfortably, with many of us in this room.
The difficulty we face every day is how to be true to ourselves. How do we cling to our ideals and highest aspirations in a world that seems daily to distract us from them? This problem is perhaps particularly acute for academics because we need to forge a reputation for ourselves if we are to succeed in our careers.
And the problem with that, according to Thomas Paine, is that a reputation is what others think of us – how they evaluate us. It does not seem to be enough to be true to oneself. We have to convince others of our scientific worth.
And the problem with that is that our system of research evaluation as a whole seems to have made us prisoners of numbers.
We have been captured by citation counts, journal impact factors (JIFs), and h-indices. And while these indicators are not entirely devoid of information that might be of some utility in research evaluation, they have taken over to an extent that is dangerous to the health of scientists and to the health of science.
In brief, the problems with metrics are:
- reduced productivity (as people chase JIFs in rounds of submission and resubmission)
- displacement of attention from other important academic activities
- focus on the individual that undervalues the role of teams
- positive bias in the literature – the reproducibility (or reliability) crisis
- focus on academic rather than real world impacts
- hyper-competition that preserves the status quo (what chance diversity?)
As Rutger Bregman writes: “Governing by numbers is the last resort of a country that no longer knows what it wants, a country with no vision of utopia.” We might say the same of academia.
Now it’s easy to be critical of the misuse of metrics… it’s much harder to come up with solutions.
Bregman’s prescription for change is radical. He exhorts us to be “unrealistic, unreasonable, and impossible”.
And I completely agree. But I also think that to change the world we have to be realistic, reasonable and think about what’s possible.
(What can I tell you? I’m a mess of contradictions.)
So it’s easy to be a critic. It’s harder to think through the problem of evaluation creatively and constructively. But that’s precisely what we need to do if we are to take a properly holistic approach.
There are already plenty of bright minds thinking about this.
The UMC Utrecht is one of a number of institutions that has reformed its hiring and promotion procedures to create practical space for qualitative elements in assessments. Applicants have to write a short structured essay addressing their contributions on several fronts:
- teaching and mentorship
- departmental citizenship
- clinical practice
- entrepreneurship and public engagement
This gives reviewers information in a structured, consistent and concise form that embraces quantitative and qualitative aspects of academic achievement. It is an honest and practical embrace of the principle that ‘science quality is hard to define and harder to measure’.
We are never going to escape the problem of research assessment – a world in which resources are finite demands it of us (to say nothing of public accountability). But while we will always be tempted by metrics – by the simplifying power that they seem to offer – we have to resist this. As Gawande writes: “We yearn for frictionless, technological solutions. But people talking to people is still how the world’s standards change.”
So I am glad that we are talking about this today. It is only through discussion – in good faith – of what really matters to us, through negotiation and through the recognition of the importance of good judgement that we will come to a better definition of what success in science looks like.
I look forward to continuing that discussion in the Q&A.