Thinking globally about research evaluation – LIS-Bibliometrics talk

Last Tuesday I attended the 2019 LIS-Bibliometrics meeting, which focused on open metrics and measuring openness. I was part of a panel that discussed the topic “Thinking globally about research evaluation: common challenges, common solutions”. Chaired by Lizzie Gadd from Loughborough University, the panel also included Ian Rowlands (King’s College) and Kate Williams (Harvard University/Cambridge University).

[Photo: LisBib19 panel discussion]

To break down the topic, each of us was asked four questions. Below are the answers to these questions that I prepared for my opening remarks. I’ve left in bold the points that I wanted to emphasise.

(Ideally I would attempt here to synthesise the discussion but I’m afraid I haven’t had the time to do so. The central problem remains of constructing processes that can reasonably and fairly incorporate quantitative and qualitative information in research evaluation.)

In your view, what are the main research evaluation challenges we face?

  • Biggest challenge: How do we temper the tendency towards competition (and indeed hyper-competition) with kindness to people?
  • Next biggest challenge: The pervasive and insidious effects of metrics or indicators. Indicators provide some interesting information, but the utility of that information is hard to evaluate. We need to be mindful of keeping the numbers in their place, remembering that complex human activity cannot be boiled down to numbers. And that is genuinely hard to do.
  • But I believe it can be done by rooting our procedures for assessment in our values. We need to keep asking ourselves: what, for us, are the most valuable attributes of research and researchers? Along with many others, I would like to see those values broadened out.
  • Yes, we want exciting and innovative research. No question.
  • But we also want to incentivise the sharing and re-use of that research through open access (OA) and through sharing data and code.
  • And we want to reward not simply novelty but rigorous novelty – so I would encourage open peer review as a way to achieve that. I’m not sure it can be done with metrics (a focus on counting citations doesn’t do it).
  • We want to reward real-world impact, where that arises (while also acknowledging that it won’t happen immediately or in all cases). But as we know from the REF, impact is very hard to capture with metrics.
  • Final challenge: We also need to remember the people – the researchers. Too much focus on counting outputs and inputs makes us liable to forget about the importance of collaboration and of team players (in research groups AND in the teams needed to keep departments running), and to forget about quality-of-life issues (among which I would include equality, diversity and inclusion). There are some indicators that can capture aspects of quality of life and EDI, but I don’t see them being used widely enough.


To what extent are these challenges global and to what extent local?

  • They are both.
  • As Assistant Provost for Equality, Diversity and Inclusion (EDI) at Imperial College I am mindful of the significant variation in attitudes to all sorts of topics (including EDI but also research assessment) within an institution – between different departments, never mind between institutions or different countries…
  • Culture is very granular – and the only way to explore it is by talking to people. As Atul Gawande wrote: “People yearn for frictionless technical solutions, but people talking to people is still the way the world’s standards and norms change.” So local discussions with researchers are vital.
  • But on their own they are not enough. It doesn’t matter if only a handful of organisations, stimulated by DORA or the Leiden Manifesto or whatever, establish a principled stand on responsible metrics and develop robust, holistic and evidence-based procedures for research assessment that are free from bias. This has to be an international project because research and researchers are internationally mobile.


Where might an international approach help solve these challenges?

  • As Chair of DORA, obviously I think the organisation has a key role to play. We are the only independent, international organisation that is focused on the reform of research assessment and that is actively campaigning for change.
  • We are doing lots of things:
    • promoting the idea and recruiting signatories
    • collecting and disseminating good practice
    • running workshops to critique and develop good practice in research assessment
    • establishing an international advisory board
  • But there is lots more that we can do, and that we would like to do:
    • critically evaluate what works (we need to provide tools that work, or people will fall back on the Journal Impact Factor)
    • build an international alliance and presence in all parts of the world
    • reach out more to arts, humanities and social science scholars


How can bibliometrics practitioners work together to help achieve this?

  • Work with researchers so that you really understand the constraints that they work under. Remember Gawande…
  • Issue health warnings about the ever-present limitations of indicators: the statement “complex human activity cannot be boiled down to numbers” should be attached to every paper on bibliometrics, every bibliometrics database, and every league table…
  • I’m not sure if this is your bag, but: be louder in your criticisms of university league tables that aggregate scores into a single number; as presently constructed, most tables are deeply flawed. Anyone who celebrates their institutional ranking is admitting to an alarming degree of intellectual vacuity.
  • Do more to examine and expose the biases present in the numbers.
  • Work with DORA!


A final message for all of us:

Let’s get away from the “dodge culture”. We’re all good at signing up to principles and describing the problem. But we are also all good at wringing our hands and dodging the consequences, so that we don’t have to think constructively about workable solutions. I’ve seen that in the debates on Plan S (which has a strong research assessment component) – there are lots of complaints but not nearly enough constructive engagement.

But I’ve gone on for long enough so we can talk about that if it comes up in the questions…

