Moving Beyond a Silo Mentality

Wherever I turn at the moment I seem to come up against the questions that assessing interdisciplinary research throws up. Nature recently ran a special issue highlighting some of the challenges and rewards, albeit with a very broad brush. Its editorial, headed ‘Why interdisciplinary research matters’, rather implied that the discerning reader might never have considered its importance, which I very much doubt. Collectively, researchers are in no doubt about the significance of interdisciplinary projects, whether or not they personally choose to pursue that line.

I am interested in encouraging specific actions that may level the playing field between monodisciplinary projects and those which reach out beyond traditional boundaries. I will take it as read that a large number of topics (say, sustainability or energy solutions) require multiple disciplines to be involved. I am surprised by those – including funders – who behave as if it were still necessary to encourage people to dare to dip a toe into the waters of inter-/multi-/pan-disciplinary research (I am not going to worry about these linguistic distinctions here). Of course that doesn’t mean that every project needs to be, or could be, like this; many topics fit neatly within standard labels and as such are well suited to traditional decision-making panels. I am interested in the fate of projects which can’t so easily be pigeon-holed, and in what systemic changes might facilitate their success. I think there are a variety of distinct issues involved which often get conflated.

Some schemes require sole PIs: the ERC standard grants are like this. The ERC’s own data show that proposals which fall within more than one panel’s remit do not fare as well as those that don’t, which is a focus of concern. Different approaches have been tried over the years to overcome this problem, and further discussions about new possibilities are underway. So much for thinking that a single research council would cause fewer problems for such proposals, as I once naively did. (As an important aside, I hope the Nurse Review will explore how the UK can do more to stop proposals falling down the cracks between research councils, something that indubitably occurs at present.)

However, for anything other than large conglomerate proposals (which tend to arise in specific calls rather than in responsive mode) there will usually be at most 2-3 PIs, and what follows applies as much to these proposals as to single-PI grants. Let me take a specific topic to illustrate the challenges: driverless cars. One can imagine a project which requires some engineering (be it algorithms or something to do with manufacturing) as well as some social science, such as establishing what the public might be willing to accept. Logically one can see what outline shape such a project might have. One or other aspect might be regarded as cutting edge, or the novelty might lie in the synergy rather than in either facet independently. How many times have you heard a panel member say ‘Topic A is really mundane, topic B ditto, and yet the whole is highly innovative and amounts to frontier research’? Unfortunately that really isn’t how ‘experts’ tend to behave, and yet it may absolutely be the right answer. I fear they are more likely to say ‘Topic A is mundane and I haven’t a clue about B, so I’ll give this a low score’. The other only-too-common complaint from referees and panels, particularly where there is a single PI, is that the project is too ambitious because it transcends boundaries. This comment is likely to be heard even when it is clearly stated who the PI(s) will work with to cover some of the more remote bases.

It has been suggested (I have heard this in the context of the UK Research Councils) that what one really needs to judge such proposals is a panel (and referees) consisting solely of people with a proven track record of performing interdisciplinary research themselves – although I don’t think any of the councils have gone down that particular route (charities may have). I have a lot of sympathy with this view, because I think such people would be less prone to say that one particular facet of the proposal is pedestrian and therefore the project overall is flawed. This sort of refereeing comment seems to happen with disappointing regularity. In my own field of biological physics it is a well-worn complaint of applicants, one which the research councils have as yet failed to address, or even, I believe, to attempt to address in any meaningful way.

Another argument I have heard time and time again, from various different funders, is that a home will always be found for any proposal and that applicants should simply write the best grant they can. I’m sorry, but this response strikes me as completely inadequate (I have been reminded of it just recently in a context outwith the research council system). A ‘home’ means simply that the funder is prepared to put the application in front of a panel, not that the panel is well configured to deal with it on equal terms with a monodisciplinary application. Having someone judge a proposal is simply not the same thing as having someone judge it competently or fairly.

So, as I head off to yet another group that will be discussing this problem, having spent an evening last week over a glass of wine arguing the same point with yet another funding agency, it seems to me the problem persists in the face of the oft-stated desire of scientists, funders and politicians (in the context of societal benefit) alike to encourage such cross-cutting research. The problem should be broken down into several stages, none of which amounts to needing to encourage more people to get out of their silos. I think there are plenty already out there wanting to do just this, although (given the challenges) early career researchers may well feel nervous about making the attempt. Anyone trying the interdisciplinary venture may simply become frustrated by the funding landscape and end up deliberately modifying their proposal to obscure its inherent multidisciplinarity.

Here is a list of the challenges I can readily identify at different stages of the process. Others may want to add in more:

  • Referees do not explicitly identify the parts of the proposal they don’t feel competent to judge. They may make sweeping judgements based on only partial understanding.
  • Likewise they may fail to recognize that novelty sits in bringing two quite standard approaches together to create something new.
  • Panels can find it hard to reject or nuance such flawed referees’ reports.
  • Panels can be very conservative and risk averse, preferring ‘safe’ but possibly incremental research which has received enthusiastic referees’ comments; where one project fits centrally into a remit and another is peripheral, such conservatism (I tend to think of it as ‘regression towards the mean’) is liable to favour the former over the latter.
  • When external panel members are brought in from a different panel in some ad hoc way (I know BBSRC used to do this and may still), their views tend to be downplayed: I have seen internal evidence to support this statement.
  • In some cases proposals may be passed between funders, or (where two or more funders co-fund) one party may have right of veto despite only appreciating a portion of a proposal.

The Nature special issue I referred to at the start of this post remarked in passing that industry does not have a similar problem. Nor, from my previous experience, do research institutes when using their own internal funds. When, over a decade ago, I was involved with the governance of the Institute of Food Research in Norwich, it was very clear how, driven by projects with an over-arching aim (where nutritional and textural aspects might mingle, or the genetic modification of a crop might affect downstream processing), they could bring teams together in a fluid way; all that mattered was the outcome. Maybe the textural characterisation was entirely pedestrian, but if such characterisation was necessary to achieve better nutrition (e.g. a lower fat content in a food) concomitant with acceptable mouthfeel, then so be it.

We collectively need to find ways of moving to problem-based rather than discipline-based judgement, so that all researchers have the same opportunity to get beyond traditional (sub-)disciplinary boundaries and solve the problems that need solving, not the ones that fit some old-fashioned idea of what physics, or engineering, or biology looks like.



This entry was posted in Interdisciplinary Science, Science Funding.

6 Responses to Moving Beyond a Silo Mentality

  1. Robert Insall says:

    (Speaking as someone who holds a certain number of multidisciplinary grants, and judges many others.) You miss a real problem here. In many cases, probably the majority, topic A is mundane, topic B is mundane, and the synthesis is also totally mundane. Many “multidisciplinary” proposals are nothing more than a mixture of buzzwords from one field applied to another. In others the synthesis is far better than the parts, in exactly the way you describe.
    The trouble is it’s very, very difficult to tell the good ones from the bad. You need referees and panellists who can really see the spark of creativity – in other words, exactly the problem that all other peer review has – and being a senior professor and RC panellist is no guarantee of being gifted at this.
    Your piece paints a picture of excellent crossdisciplinary work often being proposed and not funded, which I’m sure is sometimes true, but I think there’s just as much of a problem with people claiming to be crossdisciplinary and being given the benefit of the doubt by funders who want to be convinced.

    Practical solutions? Mine would be to emphasize the training of young, genuinely multidisciplinary postdocs and PIs. A collaboration between two established bigshots might be brilliant, but probably won’t be, and I can’t tell the difference – but someone with a physics degree and a cell biology postdoc will stand a much better chance, in the future. I think panels need to be conservative and risk-averse with established scientists proposing huge projects, but warm and generous to postdocs, new PIs and other startups.

    PS I totally agree with the last para of your piece…

  2. Ben Hall says:

    I think that the synthesis issue raised by Insall is real, though I’m not sure I see it the same way. There will always be issues around the jargon used in multiple disciplines, and this can add to problems assessing ID work. Making allowances for one discipline by breaking down one set of concepts takes time and space. In turn this can lead to the grant author playing “guess the reviewer” games which come unstuck when put in front of an expert who already knows it. ID work does however need to be more than a pair of applications from alternative fields joined together with a plan to share data. There needs to be evidence of feedback between the domains for the work to make sense.
    I think it’s worth noting too though that ID work exists on a spectrum. Lots of people use computational tools in environments which didn’t previously require them, but their workflow is dominated by approaches traditional to the field. Similarly, very few people achieve outputs that are relevant to multiple distinct domains. The harder work can also be a harder pitch to make, and may be less likely to be supported.
    Overall though I believe that this is not necessarily something which can easily be changed at the level of the councils, but rather through a change in reviewer culture. I’ve heard representatives state that (for example) the EPSRC supports and wants to receive ID applications, but that ultimately success comes down to reviewer judgement. Alongside a perception in the community that (in this example) the funding body won’t support research related to cancer, this suggests that the reviewers may be enforcing the divisions more strongly than the councils themselves. If a reviewer doesn’t like the work there’s ultimately not much that can be done, but for the others, explicit instructions to place value on ID work would help support it.

  3. Monica Gonzalez-Marquez says:

    Another issue, which affects not only interdisciplinary grants but is particularly damaging in this context, is incorrect assignment to reviewers. It is often a problem with panels that those assigned to distribute proposals for review don’t have enough knowledge of the issues in question to send proposals to the correct specialists. Two examples off the top of my head are basic vision science being sent to ophthalmologists for review, and one I recently suffered through myself: an interdisciplinary project on spatial cognition, localization reflex, sex differences in cognition and non-linguistic effects of bilingualism being sent to a gender studies specialist. The project was, of course, rejected. The grounds were that we did not address gender as a social construct. In this case, much as the reviewer is likely guilty of hubris for failing to withdraw, the person truly at fault is the one who made the assignment in the first place. There was a second reviewer with an appropriate background, and they recommended funding. I suppose that should give us some solace.

  4. I’m really pleased to read that other people also see this problem.
    There is an important point not touched by this post, however. Multidisciplinary researchers (not only projects) like me have to face the proverbial “lack of a track record in the area” feedback accompanying a grant rejection.
    I’ve got grant applications rejected on such grounds.
    I’ve carried out such projects regardless, but without funding and with heavy teaching (i.e. no funding = more teaching) it can take ages, especially when working outside one’s “core” area.
    It would be great to have a grant judged on its quality rather than on the researcher’s publication track record, when the necessary methodology/expertise can easily be developed/attained.

  5. Chris S says:

    I now work in Research Development at a UK research-intensive university, after time in industry and previously as a post-doc in an interdisciplinary area (between Chemistry and Physics). Supporting interdisciplinary, challenge-led research proposals is a key part of my role, and what always strikes me is the huge amount of effort required to pull such proposals together (from all sides). This effort doesn’t seem to be balanced by the potential reward when we consider the range of structural and procedural issues highlighted in the article around reviewing and selecting which proposals to fund (particularly with the Research Councils, and particularly for responsive-mode applications).

    It is disingenuous for funders to say that a home will eventually be found for interdisciplinary research given that, to be successful in today’s highly competitive environment, applications now need to be written to appeal to the particular funder, likely reviewers and panel members targeted. For example, a proposal within BBSRC’s remit but with an EPSRC ‘spin’ (i.e. one that has been transferred from EPSRC) is likely to be criticised by someone within the BBSRC process for that very fact. As funders increasingly adopt a ‘one shot’ policy for applications, this clearly runs the risk of missing great science because it came in the wrong coloured box.

    My time in industrial R&D (working with engineers of all hues, geologists, atmospheric scientists, computer scientists, mathematicians, materials scientists, and so on) has convinced me that if you want to fund interdisciplinary research you need a multidisciplinary panel made up of people who are used to working across disciplines. This is the norm in industry and it means that people a) understand some of the jargon and underlying principles of other disciplines and the value they bring and b), more importantly, are more aware of the boundaries of their own expertise.

    I think the issue of apparent ‘pedestrianism’ in elements of interdisciplinary research is a tricky one. It is tricky because there is a very fine line between expanding how techniques in one discipline can be applied to solve problems in another (what I would consider research) and merely providing a service to support research in other disciplines (still important, but needing to be packaged in a different way to research activity in applications). For example, I may need chemical analysis to provide data for a particular interdisciplinary project, but it will be using “mundane” and established techniques with no need to modify them – I probably would want to include a recognised analysis service provider, rather than a CoI who specialises in developing new analysis techniques.

  6. Ben Hall says:

    I’m curious about people’s views on where engineering (software or otherwise) fits into ID work. Algorithmic discovery is a CS research task, so fits well into ID work. It’s always been a bit harder for me to see how programming driven projects are inherently ID, even if the developed software is intended for biological simulation (for example).
