First impressions: the DORA-HHMI meeting on research assessment reform

My feet have hardly touched the ground since I got back from the DORA-HHMI meeting on “Driving Institutional Change for Research Assessment Reform” in Washington DC last week, but I wanted to log a few first impressions. I can’t hope to do it justice, but please be reassured that the videos of the plenary sessions will soon be available, and there will no doubt be wiser and more considered ruminations on the conference in due course.

Economist, Paula Stephan

From Monday evening until Wednesday lunchtime (21-23 Oct), we gathered in the plush surroundings of the Howard Hughes Medical Institute headquarters to work through the present challenges in research assessment. I was there in my capacity as chair of the San Francisco Declaration on Research Assessment (DORA). It was a good size of meeting – about 60 participants; small enough that you could clock the faces of most of the people there over the three days, but large enough to have a good mix of stakeholders, including vice-presidents for research, faculty deans, researchers, librarians, representatives of learned societies and funders, and the odd economist and behavioural designer.

At meetings on the topic of research assessment, there is a tendency for the discussions to orbit around the same old problems – the pernicious effects of journal impact factors and the chase for position in university rankings. But I am pleased to report that there was a conscious effort here to direct attention to solutions. This effort was nevertheless rooted in a clearly articulated determination to recall the values that are the foundation of scholarly research, which date back at least to the work of Robert Merton, but are also subject to the questions raised by the open science movement and the increased attention being paid to issues of equality, diversity and inclusion in the academy. These values recall not only our duties within the academy as scholars who subscribe to communitarian principles, but also our responsibilities as mentors, leaders, and managers, and as professionals who started out with a keen sense of wanting to make the world a better place.

The problem with values, as became apparent in our deliberations, is that they can too easily disappear amid the everyday pressures to get the job done. An evaluation system preoccupied with impact factors, citations, grant income and league tables soon distracts people and universities from their higher aspirations. How best to keep sight of our larger purpose?


Me, trying to keep up with Frank Miedema…

There was a clear-sighted recognition at the meeting that reform of research evaluation has to be thought through in very practical terms. Values on their own are not enough. For one thing, we need to ensure that university leaders are aware of the problems associated with over-reliance on simple metrics. There is too much of a “Yes, but…” culture that pays lip-service to the problems without tackling the root causes.

As we heard, tackling the root causes requires institutions to have honest conversations about their values – perhaps best conducted at a more granular – departmental? – level within the organisation, where everyone can be involved. This is key to empowering researchers, from PhD student to professor, to take ownership of the problem of research assessment.

But more than that, we need workable alternatives to the traditional metrics of ‘excellence’: assessment mechanisms that take a broader, deeper view of the aspects of scholarship that matter. These should embrace not only the diversity of important outcomes from research (the particular quality of the work, as well as associated outputs such as data, software, trained researchers, etc.), but all the other important activities that academics undertake, such as teaching and mentoring, departmental and disciplinary service, and public and policy engagement. Here there were a number of interesting examples of good practice, including the use of biosketches to capture a more narrative-based summary of individual accomplishments, and structured interview techniques that try to probe beyond the desire to generate papers, for example by testing candidates’ commitment to diversity or their abilities as team leaders. Beyond that we had fascinating and provocative discussions on the economics of science, and the importance of deliberate and systemic approaches to organisational culture change.


Dinner and discussion at the DORA-HHMI meeting

The meeting was intense and invigorating, and many of the discussions were tough. At times they were challenging and uncomfortable – an important path to learning (though everyone still seemed to get along in the evening socials in the bar!). Personally, I am grateful for the insights from the vice-presidents for research I spoke to about the perverse and damaging, but inescapable, pressures they come under from university rankings.

The realities of research assessment are harsh and it is clear that, despite lots of interesting and innovative experimentation in this area, we are still figuring out what works best. One of the most important lessons of the meeting is that that’s OK. For sure, we will have to evaluate what reforms work best, but it is good that people are already experimenting and innovating. It will take time to find out how best to escape the perverse effects of over-metricisation, but it is already an important signal to the research community and to society at large that we are trying to do better.

I hope that, like me, most of the meeting participants have returned home not just energised by interactions with people committed to change, but equipped with an expanding repertoire of ideas about how to make change happen where it matters: on the ground.

If I have seemed lax in crediting ideas to individuals, it is because much of the meeting was conducted under the Chatham House Rule, which prevents such attribution. But I am more than happy to give credit to those whose written work in this area resonated strongly in the meeting. I list a few of them below as suggested further reading on values and practical approaches to research(er) assessment. I will be happy to add further suggestions – leave a comment or ping me an email.
