Recently a website calling itself UKRI Observatory published two blogposts analysing information it had obtained under FoI regarding assessments of EPSRC Centres for Doctoral Training. The point the blogposts were making was that, at first sight, many of these highly competitive and extremely financially valuable centres appeared not to be performing particularly well. Analysis of what the call-backs, mid-term reviews etc really mean is not easy given the information published. Perhaps EPSRC wanted to do some benchmarking rather than reprimand those running the centres? Or perhaps there genuinely were all kinds of problems which their processes were uncovering. Who knows? But the reality is that there is a huge investment of money and resources in each centre and, for the cohort of students enrolled in each, much hangs on their success.
However, the issue I want to highlight is slightly different and it is one that has been bothering me for a long time. The emergence of UKRI slightly shifts the framework but does not in the least alter my fundamental question. Who is keeping an eye on the spread of topics and the geographical distribution of the centres? We are producing a large number of students well-trained in, for instance, quantum technologies, a subject which was a particular focus of the 2013 call for centres. In this area three centres, each training ca 50 students over the full five-year cycle, were established. I would like to think that someone has worked out that we really will need 150 students so trained for the workforce of ca 2020, and that the fact that these centres – UCL, IC and Bristol – are all in the south of England gives no one any pause for thought. Excellence of the original bids is all that matters, I presume.
But is it? I do worry – and other topics could be chosen that equally are going to produce significant numbers of students in disciplinary and geographically tight areas: quantum technologies is just one specific example from the physics arena – that no one is keeping an eye on the big picture. The comparatively new focus on the quality of training, important though that of course is, means that we have ended up – in the EPSRC remit anyhow – with a very patchy landscape. Is that optimum? And who is considering it?
What about those CDTs with significant industrial input? How do they fare? In times past, CASE awards with industry – awarded by various different mechanisms – supported a single student working on an identified project agreed between a supervisor and a company, which provided a named industrial supervisor. I supervised many such projects. If the relationship worked well, sometimes a string of two or even three projects with the same company might be set up over a period of years. The industrial end of things got to see results early and influence the direction of travel, usually guided by some agreement that gave appropriate publication and IP protection etc. However, as I understand it, for many of the CDTs where multiple companies contribute to funding of the over-arching cohort, that specific relationship may no longer exist. Instead, there is a general area covered – say quantum technologies again – but companies do not forge a relationship with ‘their own’ identified student. That is a very different sort of interaction which may work well in some cases but certainly not in all. It does not provide the close-knit relationship that in my own experience often worked excellently (although occasionally it got grumpy and frustrating, particularly when companies got sensitive about whether results could or could not be published in the open literature, or when they were unhappy about the speed of progress in solving a specific industrial problem).
It’s not just the EPSRC; other research councils have similar programmes. In a few cases there are even cross-council CDTs. I would like to think that someone – and logically that someone might sit in UKRI, although it’s early days yet for them to have done so – is keeping a watchful eye on the totality of training of the scientists of tomorrow. Not on whether each centre is value for money, or whether the training is broad (or narrow) enough to equip students for the jobs that hopefully await them in the wider world (not all of them will stay in academia, after all), important though those factors are. My concern is that the centres are so large that in between them – whether by discipline or by location – there are great deserts of nothingness. This may also mean – even in these brave new days of UKRI – that interdisciplinary topics are missed out too. After all, this requires people to join up dots, which may be hard to do; yes, there are specific calls that cut across research council boundaries, but there are far more cross-cutting topics than these calls have so far covered. Who can make judgements on unanticipated applications that do so? Often the emphasis is put on the areas that are already known to be exciting (or, more damningly, fashionable), whether mono-, multi- or inter-disciplinary, and this leaves little scope for serendipity and small beginnings. How do we know, with such long cycles and such large numbers of students involved, that we are not missing the nimbleness that used to be possible: to open up new areas in a low-key kind of way and to let a supervisor’s sense of excitement and nose for potential gold occasionally break into unexpected territories?
I was lucky enough to benefit massively from much easier routes to individual students in years gone by. In particular, in the days of BBSRC’s predecessor organisation, the Agriculture and Food Research Council, there were studentships competitively awarded each year by each of its standing committees based on – if I remember rightly – a couple of pages of text describing the project. (EPSRC’s predecessor SERC may have done the same thing, but my memory is less clear on this front.) If a CASE award was sought, this had to include the industrial link and financial contribution. As a young academic I could dream up some new departure, sketch out a research plan, and keep my fingers crossed. Regularly I was fortunate in this competition and could therefore head off in some new direction. It was a delightfully light touch method. Inconceivable in these days of accountability, not least because no one ever checked what happened to the project. If it careered off in some totally different direction there was, as I recall, complete unsanctioned freedom to do so. For me, this was how I got started in much of the work on food, notably starch. This was research that for many years played such a central part in my research programme and ultimately propelled me from food aspects of starch to working with plant biochemists and further into biological physics. If I had had to prove at the outset that I had a track record in starch, that it contained excellent physics (it did, but that might have been hard for some of my peers to spot – as they tended to tell me) and that I knew my direction of travel, this could never have happened. My generation, in this respect at least, had it easy.
But it is not the ease per se I am regretting, so much as the opportunity to make small scale forays into the unknown. A decent project could be carried out even if it was not part of some massive, large-scale initiative, although there was no one vetting the quality of the education and training. I am worried that, in trying to rectify what was clearly a potential failing on this last point, a lot else has been lost. I don’t see who is stepping back to look at the lumpiness of the CDT landscape that has evolved over the recent past and considering whether or not this is optimum for the UK as we set our sights on research expenditure of 2.4% of GDP.