What Do We Know about the Research Ecosystem?

Funders make decisions about where their money goes; PhD students decide what to study, with whom and where, before wandering along the career maze; publishers decide whom to publish; and universities around the world make decisions based, at least in part, on those same editorial decisions. But who looks at the Big Picture? We need a better understanding of which decisions are taken where, and by whom, in the research ecosystem, and of what the implications of those decisions are as they ripple through higher education and far beyond. A new research institute – the Research on Research Institute, or RoRI for short – was launched this week at the Wellcome building in London (Wellcome being a key partner), with a wealth of snappily short talks to illustrate the range of issues RoRI might elect to study. Meta-research is facilitated by new tools ('scraping' – of data – was a word frequently used), but many other painstaking longitudinal studies of people, places and impact will probably need a rather more old-fashioned approach to gathering data too.

[Photograph from the RoRI launch]

Topics for the day included Priorities, Careers (in which I had the unenviable task of summing up, and being 'provocative' about, the other contributions; the panel is shown in the photograph), Culture (featuring fellow OT blogger Stephen Curry), Decisions and Partnerships. Each is, as you can tell, a massive topic with many strands to delve into. Of course the next challenge is to work out just which parts of this vast array of possibilities RoRI will concentrate on – something I think the partners are currently thrashing out.

In the Careers session, we heard James Evans (University of Chicago) consider the connectivity of researchers and its impact on research outputs. He described the research system as 'deeply complex' with 'emergent phenomena'. One result of his study of the networks researchers form, and of the groups that act as nodes within those networks, was that as fields grow, new work becomes exponentially less likely to impact the field radically. In other words, just having more people working in any given area is more likely to lead to incremental research than to breakthroughs. As he put it, 'social connection leads to cultural collapse', and he felt that 'aliens' from other disciplines were crucial for new insights. Funders should definitely take note.

Sally Hancock from the University of York talked about data on career pathways following the PhD. (Her co-authored working paper can be found on the RoRI website.) She particularly highlighted how different disciplines have different patterns, and how there are quite distinct differences between the genders. But the data is incomplete. It is hard to find out what happens to those who leave research completely, or to analyse more nuanced issues that may impact on individuals. These could of course range from a disastrous relationship with a supervisor to an equally disastrous relationship with a piece of equipment. Continuing the theme of PhD students, Megan MacGarvie (Boston University) and Kolja Briedis (DZHW, Germany) gave international perspectives on the careers question, with an emphasis, in the former case, on international students who travel to the US to study and then may – or may not – return to their home countries thereafter.

My task was, in six short minutes, to bring together these different threads and join them into a coherent whole. Since this session came immediately after one on Priorities, I pointed out how funders, policy makers and indeed, to some extent, the taxpayer have big decisions to make about choosing which student(ship)s to fund. I have written before about the danger of the 'lumpiness' of the UK's research council (and specifically EPSRC's) programme for doctoral training. Bearing in mind James Evans's comments about radical new ideas tending not to come from large fields (he referred to people sometimes as living in a bubble chamber), I think it is worth considering whether such lumpiness is even good for getting the science we need done. We should ask what we are training students for, and whom we are training. I certainly strongly believe that those who leave research completely (be it academic or industrial) should not be regarded, as they so often are, as failures. We need people educated to think critically, to solve problems and to be aware of context to enter many different professions. These people are not failures.

Perhaps inevitably I picked up on the gender disparities Sally Hancock identified. If the THE is to be believed, I said that female students are 'forced out' by a hostile environment. I don't remember saying that but, since my comments had to be produced on the hoof, it is quite possible that I did. I certainly said I thought it would be useful to carry out exit interviews with students as they complete their PhDs, to find out what went right and what went wrong. Such qualitative data would be invaluable in helping to interpret the quantitative data that is so much easier to collect.

Later in the day, we heard from Molly King (Santa Clara University) on data extracted from JSTOR articles, which showed an appalling mismatch between the proportion of women as last (and so presumed senior) authors and their expected benchmark figure. Building on an article Melinda Duer and I wrote earlier this year, I questioned the role journals play in this, by virtue of bias (no doubt predominantly unconscious) in their editorial processes and decisions. I am delighted to say this was picked up by some of the Nature team present, and I hope to follow it up with them in due course. Our concern, as expressed in that earlier article, is that insufficient scrutiny is given to factors such as relative time in review for men and women, or whether referees' reports ever contain clear evidence of bias. There are many places where the editorial process collectively shows bias, as has been demonstrated for economics but not, as far as I can tell, for other disciplines.

I can only give you a sample of the fascinating talks we heard on this broad and burgeoning topic. If I've whetted your appetite, here are some further links to fill you in on the intentions of this institute (I may not have a complete set here).

Overview by James Wilsdon, Director of RoRI, in WonkHE.

Commentary in the THE.

Blogpost by Richard Jones, Associate Director, on his own talk at the launch.


3 Responses to What Do We Know about the Research Ecosystem?

  1. Brigitte Nerlich says:

    The research on research agenda has somehow filled me with fear rather than hope. We know that the current research ecosystem is over-competitive and lacks kindness – see Wellcome Trust. Having more data might help, but how? When trying to find a Nature editorial that James Wilsdon mentioned in a tweet this morning but didn't link to, I found this in Nature Genetics, on a new initiative on meta-research at Stanford called, of all things, METRICS: "METRICS will serve as the home base for the new field of meta-research, which will benefit from a bird's-eye view of trends across a wide range of research fields to identify what works and what does not. These trends can then be used to inform new models and policies for data sharing and the standardization of research practices." https://www.nature.com/articles/ng.2972
    This somehow fills me with dread! Perhaps it shouldn't, as the same article says that Nature will try to increase accessibility and transparency. So there might be some good coming out of all this. We shall see… For research on research to 'succeed', everything that it does should be 'measured' against one criterion: will it lead to more kindness?

    • Brigitte, I hope you’re wrong! Of course any data can be used for good or ill (‘lies, damned lies and statistics’ etc.), but asking some of these questions is in itself good. What funders, HEIs and so on do with the data is another matter. My own discussions with the Nature team at the launch on gender issues seemed very positive. There is no doubt that without better data any decision can be claimed to be about excellence, based on merit and so on; it is only when data proves – for instance – that minorities are disadvantaged that anyone is likely to act. It is important thereafter to check which interventions actually have the desired result. I was interested to hear both Nature and eLife editors talk on Monday about interventions they’d tried which in some cases not only didn’t work but produced perverse consequences.
      So I have my fingers crossed that useful data and insight will be forthcoming from this initiative, leading to a more equitable – and rewarding – environment. Time will tell…

      • Brigitte Nerlich says:

        Yes, I also thought Nature seemed to be rather receptive to using data for good, as you have pointed out, and I expect the same of eLife, I have to say. I only hope that this ‘data scraping’ initiative will not be used to constrain research (in the sense of squeezing the life out of it) in ways that other data-based (metric-based) approaches have done. I am, as always, a pessimist 🙂 One that is frequently proven wrong, I have to say 🙂
