This is a repost of an article that was originally published on the Research on Research Institute website. Comments welcome!
It is a truth universally acknowledged that scientists who take greater risks are more likely to make important discoveries.
Actually, I’m not sure it is a truth, and I don’t know if it is universally acknowledged – I haven’t looked closely enough at the evidence – but it is a long-standing and widely held assumption. However, research funders struggle to operationalise this assumption, and the question of why that is the case is the focus of a stimulating recent paper in PLoS Biology from Kevin Gross and Carl Bergstrom on “Rationalizing risk aversion in science: Why incentives to work hard clash with incentives to take risks”.
Gross and Bergstrom bring an economic perspective to the question that provides an insightful framing for some of the wider issues that it raises. Here’s their abstract (with my emphasis and annotations):
Scientific research requires taking risks, as the most cautious approaches are unlikely to lead to the most rapid progress. Yet, much funded scientific research plays it safe and funding agencies bemoan the difficulty of attracting high-risk, high-return research projects. Why don’t the incentives for scientific discovery adequately impel researchers toward such projects? Here, we adapt an economic contracting model to explore how the unobservability of risk and effort discourages risky research. The model considers a hidden-action problem, in which the scientific community must reward discoveries in a way that encourages effort and risk-taking while simultaneously protecting researchers’ livelihoods against the vicissitudes of scientific chance. Its challenge when doing so is that incentives to motivate effort clash with incentives to motivate risk-taking, because a failed project may be evidence of a risky undertaking but could also be the result of simple sloth (I would add “or incompetence”). As a result, the incentives needed to encourage effort actively discourage risk-taking. Scientists respond by working on safe projects that generate evidence of effort but that don’t move science forward as rapidly as riskier projects would. A social planner who prizes scientific productivity above researchers’ well-being could remedy the problem by rewarding major discoveries richly enough to induce high-risk research, but in doing so would expose scientists to a degree of livelihood risk that ultimately leaves them worse off. Because the scientific community is approximately self-governing and constructs its own reward schedule, the incentives that researchers are willing to impose on themselves are inadequate to motivate the scientific risks that would best expedite scientific progress.
The authors’ analysis relies on a mathematical model which I confess I did not completely understand*, so I’ll spare you the details (those who are more mathematically challenged will still get a lot from the paper if they confine themselves to the Introduction, the box on Hidden-action models and the Discussion). What I did understand of the model – its codification of risk, reward, scientific value, effort, resources and utility, and its explanations of simplifying assumptions – seemed reasonable, and, notwithstanding the obvious risks of confirmation bias, the two key conclusions that emerged from it resonated with my sense of how research decision-making works:
“[…] scientists seem either unable or unwilling to devise institutions that motivate investigators to embrace the scientific risks that would lead to the most rapid progress. Our analysis here suggests that this state of affairs can be explained at least in part by the interaction between two key structural elements in science: the unobservability of risk and effort on the one hand, and the self-organized nature of science on the other.”
The “unobservability of risk and effort” is more or less self-explanatory and accounts for the reliance on outputs such as publications as markers of achievement, which are to some degree beyond the control of the researcher. Demand for these outputs, in the absence of an assessable record of the intelligence and invention brought to any research effort, is what leads many researchers to opt for safer, less risky projects that are more likely to result in a paper, albeit one that reports a more incremental finding**.
The role of “the self-organized nature of science” needs a little more unpacking for those who haven’t read the whole paper. What Gross and Bergstrom mean here, I think, is that because scientific endeavours are so highly specialised, the assessment of funding proposals relies heavily on peer reviewers, and so the key decision-makers have a strong internal sense of the risks attending project failure. They are therefore less willing than a hypothetical social planner charged with maximising scientific productivity to subject applicants to a reward regime that more punitively disfavours incrementalist approaches. The authors argue further that such hypothetical social planners cannot emerge in the first place because they would have to depend on researchers to determine how to value outcomes and would effectively morph into conduits for the collective view of the scientific community.
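To make the incentive clash concrete, here is a toy numerical sketch – my own illustrative parameters and payoff structure, not the authors’ actual model – of what happens when rewards can only be conditioned on observed success or failure. A risk-averse researcher chooses between a safe project and a risky one (the latter having the higher expected scientific value), and between working hard and shirking; a grid search over reward schedules shows that, in this toy setup, no success/failure payment scheme can induce risky-but-diligent research, because any reward spread big enough to make effort worthwhile also makes the safer project the more attractive choice.

```python
from itertools import product
import math

# Two project types: success probability with effort, while shirking, and the
# scientific value of a success (all numbers invented for illustration).
PROJECTS = {
    "safe":  {"p_work": 0.9, "p_shirk": 0.2,  "value": 1.0},
    "risky": {"p_work": 0.3, "p_shirk": 0.05, "value": 5.0},
}
EFFORT_COST = 1.0  # disutility of working hard; shirking costs nothing


def utility(wage: float) -> float:
    """Concave utility of income: the researcher is risk-averse about livelihood."""
    return math.sqrt(wage)


def best_response(w_success: float, w_failure: float):
    """The (project, effort) pair that maximises the researcher's expected utility
    when the reward depends only on whether the project succeeds."""
    options = {}
    for name, proj in PROJECTS.items():
        for effort in ("work", "shirk"):
            p = proj["p_work"] if effort == "work" else proj["p_shirk"]
            cost = EFFORT_COST if effort == "work" else 0.0
            options[(name, effort)] = (
                p * utility(w_success) + (1 - p) * utility(w_failure) - cost
            )
    return max(options, key=options.get)


# Expected scientific value per project under full effort: the risky project wins.
for name, proj in PROJECTS.items():
    print(f"{name}: expected scientific value with effort = {proj['p_work'] * proj['value']:.2f}")

# Which behaviours can *some* success/failure reward schedule induce?
inducible = {best_response(float(w_s), float(w_f))
             for w_s, w_f in product(range(0, 101, 5), repeat=2)}
print("Inducible behaviours:", inducible)
# With these parameters ("risky", "work") never appears: any reward spread big
# enough to make effort worthwhile also makes the safe project, which fails far
# less often, the more attractive choice.
```

In this toy setting, letting the reward scale with the size of the discovery can restore risky effort, but only by widening the income spread the researcher has to bear – which is essentially the livelihood-risk trade-off the authors attribute to their social planner.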
Several thoughts and questions occurred to me in the immediate wake of reading the paper. They are not fully formed, so I offer them only in the interest of provoking further discussion.
Any researcher who has been on the receiving end of a paper or grant rejection – which is pretty much every researcher – would be forgiven for asking themselves how much more punishment they deserve at the hands of Gross and Bergstrom’s social planner. Could it be that the current balance of risk and reward achieves the maximal level of scientific productivity that is commensurate with the desire to accord researchers a reasonable work-life balance? Personally, I don’t think I ever achieved any kind of balance during my time as a jobbing academic trying to carve out a career in research. Current efforts to incorporate research culture and the quality of the lived research process as part of assessment exercises spring from long-standing concerns about the risks and stresses imposed on researchers. One cost not discussed in any detail in Gross and Bergstrom’s analysis is the human cost (though I appreciate that they deliberately narrowed the parameters of their model to answer a specific technical question, and that the human cost factors into the appetite for risk).
Also, is it so difficult to uncover hidden effort? Better line management within research-performing organisations could track effort, and perhaps even reward it directly if sufficient intramural funds were available. Researchers would also feel less of a sting from failed grant applications if funders were more open about the uncertainties in decision-making processes that ultimately rely on human judgement; feedback that clearly flags applications assessed as being of fundable quality but for which funds were not available could provide some measure of career protection for researchers back at their home institution.
Although Gross and Bergstrom dismissed their hypothetical social planner, Daniel Sarewitz has argued powerfully and provocatively for a more managerial approach to the organisation of research, in part to tackle what he sees as the perverse inefficiencies arising because science is permitted undue freedoms to self-organise.
Elsewhere, Michael Nielsen and Kanjun Qiu’s long but very worthwhile essay “A Vision for Metascience” casts Gross and Bergstrom’s social planners as risk-taking “metascience entrepreneurs” empowered to achieve “scalable change in the social process of science”. Their ideas for incentivising risk include (among many other interesting but as yet untested suggestions) funding by variance, where grant applications are funded not for scoring highly among reviewers but for polarising opinion; or failure audits, where grant programme managers are fired if the failure rate of their funded projects drops (yes, drops) below 50%.
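As a concrete (and entirely made-up) illustration of the funding-by-variance idea, the sketch below ranks a handful of hypothetical proposals first by mean reviewer score, as a conventional panel might, and then by the variance of those scores; the polarising proposal jumps from mid-table to the top of the second ranking. The scores are invented and are not drawn from Nielsen and Qiu’s essay.

```python
# Hypothetical reviewer scores (0-10) for four imaginary proposals; none of these
# numbers come from Nielsen and Qiu's essay.
from statistics import mean, variance

proposals = {
    "A (solid, incremental)": [7, 7, 8, 7, 7],
    "B (polarising, risky)":  [9, 2, 10, 1, 9],
    "C (weak)":               [3, 4, 2, 3, 3],
    "D (strong consensus)":   [9, 9, 8, 9, 9],
}

print("Ranked by mean score (conventional panel):")
for name, scores in sorted(proposals.items(), key=lambda kv: mean(kv[1]), reverse=True):
    print(f"  {name}: mean = {mean(scores):.1f}")

print("Ranked by variance of scores (funding by variance):")
for name, scores in sorted(proposals.items(), key=lambda kv: variance(kv[1]), reverse=True):
    print(f"  {name}: variance = {variance(scores):.1f}")
```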
Conceivably, the UK’s recently established Advanced Research and Invention Agency (ARIA) also embodies an alternative form of the social planner. Modelled on similar Advanced Research Projects Agencies in the US (e.g. DARPA, ARPA-E, ARPA-H), ARIA is essentially run by programme directors in a range of topic areas who have the authority to select and fund projects they believe will result in the most significant breakthroughs. I’m not aware of any study that has quantified the scientific productivity of the longer-established American agencies in comparison to more traditional modes of research funding, but their anecdotal (?) reputation was sufficient to persuade the UK government to bet on ARIA. I share the view that this is a reasonable bet for the UK to take, even if the funding agency has yet to develop robust criteria to demonstrate its own worth.
A last thought on productivity to throw into the mix: a hidden assumption of the Gross-Bergstrom model is that all research funding is awarded competitively. But what would be the impact on productivity of an ecosystem where researchers were provided with a basic or background level of funding, enough for a single postdoc or research technician, guaranteed for 10 years, in recognition of their hard work? As I’ve argued elsewhere (and some time ago), such a regime could boost the productivity of the research funding ecosystem by reducing the wasted effort of submitting grant applications to funding systems that have chronically low success rates. Ghent University’s introduction of a form of universal basic research funding is a tentative, small-scale step in this direction.
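For a sense of scale, here is a back-of-envelope calculation with purely illustrative numbers (neither figure comes from this post or from any funder’s data): at a 15% success rate, each funded grant implies several unsuccessful applications, each of which consumed weeks of researcher time.

```python
# Back-of-envelope sketch; both inputs are invented for illustration only.
success_rate = 0.15          # hypothetical funder success rate
days_per_application = 20    # hypothetical researcher-days to prepare one proposal

applications_per_award = 1 / success_rate
unfunded_days_per_award = (applications_per_award - 1) * days_per_application
print(f"~{applications_per_award:.1f} applications submitted per award, "
      f"~{unfunded_days_per_award:.0f} researcher-days spent on unfunded proposals per award")
```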
There are further, broader questions raised by Gross and Bergstrom’s paper. For example, is an analysis centred on individual economic actors weighing the balance of risk and reward in deciding which research projects to undertake the best way to explore questions of scientific productivity? What does it have to say about the impact of different institutional models, which include not only the ARPA/ARIA approaches mentioned above but also experiments in Focused Research Organisations (FROs), innovative academic-industrial fusions such as Altos Labs, and new types of research institution, such as Arcadia Science or Astera?
Finally, I’m not sure how useful it is to talk about maximising or optimising the productivity of science, given the immense diversity and complexity of its processes, outputs and impacts. That’s not to say that we need to figure out how to optimise scientific productivity before any policy decisions can be made. Dare I suggest that more incrementalist approaches to improvement represent a more realistic and promising way forward? We need to start somewhere – some already have! – and policy makers, no less than scientists, should be prepared to take risks. We will just have to work out the evaluation methods as we go.
I am grateful to James Wilsdon for a critical reading of a first draft of this blogpost.
Interesting stuff. I had completely missed Ghent’s “universal basic research funding”, which seems a promising idea for lots of reasons.
Have you come across Adam Mastroianni’s ideas in this area? He is provocative and I think constructive. See for example https://www.experimental-history.com/p/lets-build-a-fleet-and-change-the