University rankings are fake news. How do we fix them?

This post is based on a short presentation I gave as part of a panel at a meeting today on Understanding Global University Rankings: Their Data and Influence, organised by HESPA (Higher Education Strategic Planners Association).

HESPA University Rankings Panel, May 2017. Yes, it’s a ‘manel’ (from the left: me, Johnny Rich, Rob Carthy). In our defence, Sally Turnbull, who was chairing, sat off to one side, and two participants (one male and one female) had to withdraw at short notice. Photo by @UKHESPA (with permission).

The big news on the release of the Times Higher Education World University Rankings for 2017 was that Oxford, not Caltech, is now the No. 1 university in the world.

According to the BBC news website, “Oxford University has come top of the Times Higher Education world university rankings – a first for a UK university. Oxford knocks California Institute of Technology, the top performer for the past five years, into second place.”

Ladies and gentlemen, this is what is widely known as ‘fake news’. There is no story here because it depends on a presumption of precision that is simply not in the data. Oxford earned 1st place by scoring 95.0 points, versus Caltech’s 94.3. (Languishing in a rather sorry sixth place is Harvard University, on 92.7).

The central problem here is that no-one knows exactly what these numbers mean, or how much confidence we can have in their precision. The aggregate scores are arbitrarily weighted estimates of proxies for the quality of research, education, industrial contacts and international outlook. And they include numbers based on opinions about institutional reputation.

In all likelihood these aggregate scores are accurate to a precision of about plus or minus 10% (as I have argued elsewhere). But the Times Higher (and most other rankers – I don’t really mean to single them out) don’t publish error estimates or confidence intervals with their data. People wouldn’t understand them, I have been told. But I doubt it. That strikes me rather as an excuse to preserve a false precision that drives the stories of year-on-year shifts in rank, even though those shifts are, for the most part, not significant.
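To give a rough sense of how little that 0.7-point gap means under an assumed uncertainty of plus or minus 10% – my estimate, not anything the rankers publish – here is a minimal simulation sketch in Python (the scores are from the 2017 table; everything else is illustrative):

import random

# Illustrative only: assume each aggregate score carries roughly +/-10%
# uncertainty. This figure is the author's estimate; the rankers publish
# no error bars of their own.
OXFORD, CALTECH = 95.0, 94.3
REL_ERROR = 0.10          # assumed relative uncertainty
TRIALS = 100_000

flips = 0
for _ in range(TRIALS):
    # Draw plausible "true" scores uniformly within the assumed error band.
    ox = OXFORD * (1 + random.uniform(-REL_ERROR, REL_ERROR))
    ct = CALTECH * (1 + random.uniform(-REL_ERROR, REL_ERROR))
    if ct > ox:
        flips += 1

print(f"Caltech outscores Oxford in {100 * flips / TRIALS:.0f}% of trials")
# Under these assumptions the order flips in roughly half of the trials,
# i.e. the 0.7-point gap tells us next to nothing about which is 'better'.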

Now Phil Baty, the editor of the Times Higher Rankings (and someone who, to give him his due, is always happy to debate these issues) is stout in his defence of what the Times Higher is about. A couple of months ago he wrote in an editorial criticising the critics of university rankings:

“beneath the often tedious, torturous ad infinitum hand wringing about the methodological limitations and the challenges of any attempt to reduce complex universities to a series of numbers, the single most important aspect of THE’s global rankings is often lost: the fact that we are building the world’s largest, richest database of the world’s very best universities.”

But who can define ‘best’? What is the quantitative measure of the quality of a university? Phil implicitly acknowledges this by conceding that “there is no such thing as a perfect university ranking.” I would ask, is there one that is good? Further, if the point is to assemble a database, why do the numbers in the different categories have to be weighted and aggregated, and then ranked? Just show us the data.

The problem, as is well known, is that these rankings have tremendous power. They are absorbed by university managers as institutional aims. Manchester University’s goal, for example, stated right at the very top of their strategic plan, is “to be one of the top 25 research universities in the world”.* How else is that target to be judged except by someone’s league table? In setting such a goal, one presumes they have broken down the way that the marks are totted up to see how best they might maximise their score. But how much is missed as a result? Why not be guided by your own lights as to what is the best way to create a productive and healthy community of scholars? Surely that is the mark of true leadership?

Such an approach would enable institutions to take a more holistic view of what they see as their missions as universities, and to include things that are not yet counted in league tables: commitment to equality and diversity, to good mental health, or – in these troubled times when we are beset on all sides by fake news – to scholarship that upholds the value of truth.

A couple of years ago, a friend of mine, Jenny Martin, who is a Professor at Griffith University in Australia, suggested some additional metrics to help universities complete the picture. For example:

How fair is your institution – what’s your gender balance?
How representative are your staff and student bodies of the population that you serve?
How much of their allocated leave do your staff actually take?
How well do you support staff with children?
And… How many of your professors are assholes?

Now, Jenny may have had her tongue in her cheek for some of these but there is a serious point here for us to discuss today. How often do rankers think about the impact on the people who work in the universities that they are grading?

I would argue that those who create university league tables cannot stand idly by (as bibliometricians used to do), claiming that they are just purveyors of data. It is not enough for them to wring their hands as universities ‘mis-use’ the information they provide.

It is time for rankers to take some responsibility. So, I call for providers to get together and create a set of principles that governs what they do. A manifesto, if you will, very much in the same vein as the Leiden manifesto introduced in 2015 by the bibliometrics community.

To give you a flavour, the preface to the Leiden manifesto reads:

“Data are increasingly used to govern science. Research evaluations that were once bespoke and performed by peers are now routine and reliant on metrics. The problem is that evaluation is now led by the data rather than by judgement. Metrics have proliferated: usually well intentioned, not always well informed, often ill applied. We risk damaging the system with the very tools designed to improve it, as evaluation is increasingly implemented by organizations without knowledge of, or advice on, good practice and interpretation.”

What is true of bibliometrics is true of university rankings. Therefore I call on this community here today to take action and come up with its own manifesto. Since we are in London, we could even call it the London manifesto. (After Brexit, we’re about to become the centre of nowhere and nothing, so it would be nice to have something for people to remember us by!)

I stand ready to help with its formulation. I urge you to consider this seriously and quickly. Because if providers won’t do it, maybe some of us will do it for you.

Thank you.

 

A couple of afterthoughts on the meeting:

It was noticeable that the rankings provider who spoke after the panel addressed more of the technical shortcomings and cultural issues of university league tables than those who presented earlier in the day. It is important to keep the debate on rankings and university evaluation alive.

I was surprised that there were relatively few questions from the audience after each talk, even though the audience consisted mostly of people involved in strategic planning at various universities. I hope that doesn’t indicate a certain degree of resignation to the agenda-setting power of rankers and, as a result, a reluctance to consider the broader impacts. But I remain concerned. In answer to my question about why one of the providers had bemoaned the fact that some university leaders rely too heavily on rankings, I was told – candidly – that in some cases he felt it was a matter of poor leadership.

I was struck by an example mentioned by my co-panellist, Rob Carthy, from Northumbria University, which pointed out one of the perverse effects of rankings. His university works hard to select and recruit Cypriot students even though they often take only one A level (a feature of the school system). In doing so, however, the average A level tariff of their intake drops, which, on some league table measures, will reduce their score. Rankings therefore disincentivise searches for student talent that look beyond mere grades. I suspect they may also be reducing the motivation of some universities to widen participation.

 

*To be fair to Manchester, on this web-page the phrase appears to have been edited to read: “Our vision is for The University of Manchester to be one of the leading universities in the world by 2020.”


2 Responses to University rankings are fake news. How do we fix them?

  1. Flurin says:

    Does the lowest salary on your salary grid allow university staff to live in the neighbourhood of your university? How far is it, in percentage terms, from the minimum wage? How big is the gap between the maximum and minimum salaries at the university? Do researchers have time for their research? How would another university grade your students’ exams?
