Higher Education Through the Looking Glass

I feel as if the Higher Education sector has somehow stepped through Alice’s mirror. Everything is topsy-turvy and has been for some time. It is hard to know where the next attack will come from. Labour peer Lord Adonis kicked off the political football, with accusations about Vice Chancellors’ high pay and a suggestion that academics don’t work over the summer (I have already attempted to debunk that latter myth). But more recently everyone seems to want to attack the sector, and particularly Oxbridge, from left and right alike, in ways that are not always evidence-based, to put it charitably.

David Lammy attacks us for not admitting enough black students. As I argued when he made the same criticism seven years ago, the very means by which he seeks to attack us will only act as a further deterrent, overwhelming all the hard work every single Oxbridge College puts into its outreach work in schools by sending the message that any BAME candidate who does apply won’t fit in; so why should they apply? My point of view was reinforced by Oxford’s African and Caribbean Society this week in the Telegraph. WonkHE’s own analysis shows that the (publicly available, no FoI required) statistics do not bear out the claim of bias against such applicants, and demonstrates that many universities do proportionately worse.

To make matters even more unpleasant, this week in Cambridge CUSU’s Women’s Officer Lola Olufemi became the target of vicious trolling after the Telegraph (again) spun her actions, totally inaccurately, as ‘forcing a climbdown’ by the dons after her involvement with a letter, signed by many, requesting that the English Faculty’s curriculum be broadened to include more non-white authors. The misogynoir abuse she suffered – as a very visible black woman in Cambridge – is hardly likely to help Lammy’s ‘cause’ either, although I didn’t see him speaking out about it. It would have been appropriate. Plenty of people did, including Cambridge’s new Vice Chancellor. The Telegraph published a rather small apology, having run the original story splashed across its front page.

Jo Johnson

Jo Johnson, meanwhile, is saying that universities have to do more about free speech, and will be asking the new Office for Students (OfS) to ensure this happens. As the Government website says:

‘the Universities Minister has asked the OfS to focus on ensuring institutions recognise the importance of freedom of speech and the role it plays in ensuring open debate.’

But his Conservative whip colleague seems intent on making this harder, by writing to all Vice Chancellors asking them to tell him who is lecturing about Brexit, and with what syllabus. Leaving aside the amusing spoof syllabi that floated around the internet by way of reply, Jo Johnson himself indicated that Chris Heaton-Harris’s letter ‘should probably not have been sent’. Quite. On the whole I think we collectively work hard to permit open dialogue and free speech, and it is surprising to be warned we may not be doing enough.

As it happened, the story about Heaton-Harris’s inappropriate request broke the morning after I had given a talk on Brexit as part of the University’s Festival of Ideas. Entitled ‘UK research in troubled political times’ and honouring one of my predecessors as Master of Churchill College, Sir Hermann Bondi (a committed European), it focussed on the consequences of Brexit for the research workforce, and the funding implications as Brexit gets ever closer. A brief write-up can be found on the University website. It felt distinctly sinister to see the Guardian running the story the next morning, with its explicitly McCarthyist headline.

However, I needn’t have worried. In this topsy-turvy world we now live in, the emphasis the THE reporter, present in the audience, gave when reporting my talk was entirely focussed on the couple of slides I included about changes to the UK’s research council structure as UKRI comes into being. Furthermore, although she no doubt quoted me correctly in what she did report, she didn’t even mention the issue I feel particularly anxious about in the UKRI context, namely what is going to happen to interdisciplinary science. Supporting such work was originally given as one of the motivations for creating a super Research Council in the Nurse Review, yet there is as yet no clarity over how it will be funded (or assessed). Neither the Global Challenges Research Fund nor the Industrial Strategy Challenge Fund, both of which we now have some information about, covers the fundamental kind of interdisciplinary science that is the focus of HEFCE’s Interdisciplinary Advisory Panel for REF2021, which I chair; this gap was the source of the uncertainty I expressed in my talk about the new UKRI landscape.

So the Daily Mail won’t have picked up from the THE story that I am a dangerous Leftie who talks about Brexit (although they might have spotted the latter if they read the University’s own website; I am not, though, nor have I ever been, a paid-up member of any political party). Consequently, to the hilarity of my fellow heads of house – several of whom have remarked upon this – I did not appear on their hit list. Nor have I been identified as ruining a graduation dinner by mentioning the dreaded B word (although I undoubtedly did, both last year and this), as Downing’s Master was. But the Daily Mail isn’t, let us admit, always entirely accurate or consistent, a point noted by one of the said wicked Leftie Masters who, two years ago, was fingered by the Daily Mail for being too right wing. Jackie Ashley’s riposte to some of this furore struck a chord with many of us.

Higher Education may well have a larger proportion of Remainers than some parts of the country, such as the Fens, the Welsh valleys or other former industrial heartlands, but is that sufficient to explain why politicians and journalists from left and right have decided we are the target of choice for their hate? This is a sector which has usually been regarded with pride for competing successfully with the US giants of Harvard, MIT and Yale; a sector which brings billions of income to the economy through direct and indirect means; a sector which educates (I refuse to say ‘trains’) the next generation – black, white, male, female, as well as those who reject such binary divisions. We appear to be being targeted because we think for ourselves and believe in looking at the evidence. We are even prepared to change our minds if the evidence warrants it. Perhaps that is why we are feared – and so attacked – in these days of bigotry and closed-minded politics and journalism.

This entry was posted in Research, Science Funding.

16 Responses to Higher Education Through the Looking Glass

  1. Yes, I agree with most of that. But I do think that universities, almost all of them, have been remiss in failing to do proper randomised tests of their admission methods. It would be easy, for example, to see whether interviews introduce bias. Why hasn’t it been done? The scientific method seems to go out of the window when academics leave the lab.

    • NQ says:

      I love this idea! And surely it wouldn’t take too much funding to try? Some Senior Tutors or similar could club together to analyse results? But then I guess it’s further bad press in the Daily Express or its ilk as soon as you try to talk about them – whatever the results show.

  2. Joseph Conlon says:

    I think admissions data is pored over in great detail.

    Not speaking for Cambridge (although I presume something similar holds), but at Oxford we (physics) receive at admissions time reports on previous admissions rounds, showing the correlations between data such as admissions-test results and interview marks and students’ subsequent performance in university exams, all broken down in various ways.

    After interview marks are in, and before decisions are made, we also get the cross-correlations based on the same candidates being interviewed at different colleges, to identify whether there are colleagues who systematically give ‘hard’ or ‘soft’ interview marks.

    Everything can always be improved, but a lot of thought and quantitative treatment has gone into a process that is both fair and rigorous. It irritates me when politicians (or others) unfavourably compare the Oxbridge admissions system – in which large numbers of world-class academics spend a week on an academically rigorous selection process – to that at places like Harvard, where there are bonus points at admissions if Daddy (or Mummy) went to Harvard, and even more bonus points if Daddy (or Mummy) then gave a million dollars to Harvard.

    • No, I meant properly designed experiments, not just post hoc rationalisation.
      For example, something like this.

      Stratify applicants by A level results, and accept people at random from each band. Do interviews: use them as normal for randomly selected applicants, but ignore the interview results for the others. Check the degree results (and subsequent careers) for each group.

      This would make it possible to discover the effect of school and family background, and to provide a proper check for bias in interviews.
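      In code, the design might look something like this – a toy sketch, where the bands, pool size and assignment labels are all invented for illustration and are not a real protocol:

```python
import random

random.seed(42)  # reproducible illustration

# Invented applicant pool: (applicant id, A-level band); bands are hypothetical
applicants = [(i, random.choice(["AAA", "A*AA", "A*A*A"])) for i in range(300)]

# 1. Stratify by A-level band
by_band = {}
for app_id, band in applicants:
    by_band.setdefault(band, []).append(app_id)

# 2. Within each band, randomly assign whether the interview mark
#    counts towards the decision or is recorded but ignored
assignment = {}
for band, ids in by_band.items():
    random.shuffle(ids)
    half = len(ids) // 2
    for app_id in ids[:half]:
        assignment[app_id] = "interview_counts"
    for app_id in ids[half:]:
        assignment[app_id] = "interview_ignored"

# 3. Years later: compare degree results (and careers) between the two
#    arms within each band, to estimate the interview's causal effect
```

      The point of the randomisation step is that any later difference in degree results between the two arms can then be attributed to the use of the interview itself, rather than to school or family background.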

      A bit of hard evidence would be far better than a ton of training courses. Until universities have the courage to test their admission methods, their claims to be serious about the problems won’t cut much ice. I just can’t understand why they won’t do it. But until such time as they do, they will continue to be criticised, and I fear that the criticisms are justified.

  3. I was disappointed to get no responses to my suggestion that admission methods should be subjected to empirical testing.

    • Joseph Conlon says:

      If that’s aimed at me, then in brief I think your suggestion is a non-starter, as it involves taking several years’ worth of 18-year-olds and asking them to participate in an experiment with a five-year feedback loop between how admissions operates and degree results.

      Interview performance correlates with exam performance. This is known very robustly. They do not measure everything (what does?) but they are measuring part of what goes into a successful degree.

      • I agree with Joseph that this simply isn’t a viable thing to do. Over five years we would almost certainly see a change in some aspect of the curriculum or exam system, so we couldn’t even do such a longitudinal study, and I don’t think it is reasonable to do experiments on students. Currently Cambridge is adjusting to the loss of UMS scores at AS level and the consequent introduction of admissions tests. One year’s results indicate that the admissions tests in nearly every subject aligned with A level results (as UMS scores did before).

        RCTs are the least worst option when you have no other information. But we do, as above. I don’t agree with Joseph that interviews are always a good predictor. Various biases undoubtedly enter into the interviewer’s perceptions, and not all interviewers are good at ignoring these (e.g. affirmation bias when someone looks, sounds or has a background like yours). My college pays far more heed to firm metrics than to interviews.

  4. @Joseph Conlon
    No, it wasn’t aimed specifically at you, but at anyone who’s interested in the question.
    Yes, it would take five years or more. That sort of timescale is common in medicine. I can’t see that that’s a serious objection.

    All I am saying is that, unless and until universities show themselves willing to get some hard data, they shouldn’t be surprised if the rest of the world suspects them of special pleading. And sadly the rest of the world will have a point.

  5. I fear that the responses that experiments are either impossible or unnecessary do sound a bit like the reaction of old-fashioned medical consultants.

    • Then you need to tell me why you disagree with my assessment not simply that you disagree.

      • NQ says:

        I’d be inclined to agree with Prof Donald, mainly because life is never so simple. Anything that would affect students’ chances of getting into a university (not sure whether that’s what you’re suggesting) sounds unethical. And samples cannot be randomised well at all, since the set of students who put Cambridge as their top choice (in particular, but any other university too) is already self-selected.

      • The point of my suggestions was based on the idea that an ounce of hard evidence is worth a pound of correlation data. This isn’t the place for a description of how to get good evidence in the social sciences, but luckily there’s a very clear description by Haynes, Service, Goldacre and Torgerson, written in 2013 for the Cabinet Office Behavioural Insights Team. It can be downloaded at http://www.behaviouralinsights.co.uk/publications/test-learn-adapt-developing-public-policy-with-randomised-controlled-trials/

        I think that it’s a really good essay. Unsurprisingly it hasn’t had a huge effect on government, but I’d have expected that universities would have acted on it.

        I’d be very interested to hear the reaction of anyone who’s commented here when they’ve had time to look at it.

        • You haven’t answered me as to why my objections are not valid. (Remember, students are not pawns.) You are also ignoring the fact that colleges use plenty of evidence – even if you want to call it correlation – just not via the route you favour. As I say, RCTs are ideal if you have no other information. Every college in Cambridge does things in slightly different ways, and the evidence of how their different processes deliver is of course known; there are plenty of variables there to worry about too. However, the ultimate problem for all of them is that the pool of candidates who apply is not as diverse as anyone would like.

          • I suspect that our different views about what constitutes good evidence may stem from the fact that causality is not a major problem in physics (it wasn’t a big problem for single-channel biophysics either).
            But in much of medicine and most of social science it is the major problem. Post hoc ergo propter hoc arguments abound, and they are often seriously misleading. The point of RCTs is that they are the only way to establish causality. I hope that by now you’ll have had time to look at Haynes et al. It is a really good account of why randomisation is essential in the social sciences.

        • Joseph Conlon says:

          In case you are still interested, I’ve now had a look at this document and don’t find it as valuable as you do. Many of its aspects don’t seem to generalise to university admissions (e.g. the rapid feedback and the large numbers that one can find in areas of the criminal justice system).

          I think about the admissions I know most about (to read physics at Oxford). We have about 200 students a year. So even if we divided things so that half were admitted by test alone (say) and half were admitted by interviews alone (say), we only have one hundred in each group. This means that the statistics on performance will never be that good, as the numbers aren’t large enough.
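          The power worry can be made concrete with a quick simulation – the effect size, noise level and test procedure below are my own assumptions for illustration, not admissions data:

```python
import random
import statistics

random.seed(0)  # reproducible illustration

def detected(n=100, effect=0.2, trials=2000):
    """Fraction of simulated cohorts in which a two-sample z-test at the
    5% level detects a true difference of `effect` standard deviations
    between two arms of n students each."""
    hits = 0
    for _ in range(trials):
        a = [random.gauss(0.0, 1.0) for _ in range(n)]     # arm 1 outcomes
        b = [random.gauss(effect, 1.0) for _ in range(n)]  # arm 2 outcomes
        se = (statistics.pvariance(a) / n + statistics.pvariance(b) / n) ** 0.5
        z = (statistics.mean(b) - statistics.mean(a)) / se
        if abs(z) > 1.96:
            hits += 1
    return hits / trials

power = detected()
```

          With a modest true effect of 0.2 standard deviations and 100 students per arm, the detection rate in this toy model comes out well below the conventional 80% power target, so a single cohort would miss many real effects.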

          Now we have to wait a few years to find out how each group has done – during which we have to treat several years’ worth of 18-year-olds as experimental pawns. And even supposing we find out that those selected on tests did much better than those selected on interviews – then what?

          Well even this isn’t so helpful, because by now five years have moved on and a new education secretary has changed the secondary school system and exam system, and all that hard-won information is now out of date.

          And even if *this* didn’t hold – we already know that in some years test or interview scores are much better predictors of degree success than in other years, so that result might have been a feature of that year alone. And the reason we know this is that, contrary to what you suggest, there is lots of data about how the admissions system works, even if most of it is not public.

          Science departments are full of people who do data analysis for a living and care about ensuring we admit the best students. There may have been a time when admissions was done by an old man in a tweed jacket tossing a rugby ball at a candidate as they entered the room, but I doubt it was in my lifetime.

  6. @NQ It’s precisely because the problem is so complicated that randomisation is essential to disentangle what’s going on. Please take a look at the document at http://www.behaviouralinsights.co.uk/publications/test-learn-adapt-developing-public-policy-with-randomised-controlled-trials/
    That gives a clear account of the principles for non-statisticians.
