The fortified ivory tower

Part of a white tower with angular features
Photo by Bas van Honk

This year sees the English higher education sector writhing in the throes of a huge academic quality-assessment exercise. Known by the suitably bureaucratic-sounding title of “Research Excellence Framework” (REF), the process involves most researchers from publicly funded universities submitting their best work to be pored over and evaluated by a large panel of experts. This is not a meaningless exercise—far from it. The final ratings assigned to the research output of each university will determine how large a slice of a multibillion-pound pie it will get.

According to the Higher Education Statistics Agency, the body responsible for collating the information, this “pie” totalled more than £1.9bn in 2012–2013, but even so, it constituted only 29% of universities’ research-related income. Aside from these recurrent grants, which are divvied up based on the results of the REF, the rest of universities’ research funding comes from grants for particular projects, which are generally allocated in a competitive process before the research has begun. This kind of funding amounts to a hefty £4.7bn, about half of which ultimately comes from the UK government, with other notable contributions from the EU and from charities.

Individual research projects typically cost between a few tens of thousands and a few million pounds, depending on their scale and the resources required. Academics bid against each other for this money, often spending months assembling a research plan and a strong case to support it, which is then scrutinised by peer review and ranked against other proposals. Senior professors often spend more time writing grant proposals than on anything else, and grant income is a standard measure of academic performance.

And yet, acceptance rates for UK research council grants hover around 30%. They would be even lower were it not for the councils’ recent drive to manage demand by punishing investigators who submit many unsuccessful proposals. That means that 70% of the very considerable effort that goes into this process—not just from academics themselves but also from all the administrative support that goes along with it—is essentially wasted.

In theory, the upside of this arrangement is that it forces academics to plan and structure their work carefully, provides additional accountability for researchers and funding organisations, and pre-emptively cuts off mediocre research before it has begun. But academic research is dynamic and operates within a constantly evolving context, so even the most carefully planned project is unlikely to progress exactly as the grant-writer foresees. By the same token, an eye-catching proposal doesn’t guarantee high-quality output: there is a clear incentive to talk up the potential of a project to make it stand out, and thoroughness and rigour are not very sexy.

With faculty members tied up writing about possible future research, the more immediate task of carrying out the job in hand falls to PhD students and untenured post-docs, who tend to provide a great deal of intellectual input to the implementation of the project but receive relatively little credit for it. Their currency is papers, preferably in prestigious high-impact journals like Nature and Science, which aid them in their own competitive struggle for the small number of tenured positions available. If that quest is ultimately successful, they too will have the honour of ploughing most of their time into writing grants.

Another problem with the competitive grant process, which focusses almost all credit on a single principal investigator, is that it is fundamentally at odds with how research is actually carried out. Modern science, at least, almost always involves collaborators with different specialisms. There is just too much relevant knowledge for one person to be familiar with it all. Good science is therefore almost always the work of a team, but the senior members of that team are simultaneously attempting to carve out their own little fiefdoms. I am convinced that this tension is both inherent in the current system and damaging to the quality of research output.

Moreover, there are warning signs from academic high achievers of the past: Nobel laureates declaring that they would not have survived in the current system, and credible arguments that few really important innovations are emerging from modern science. High productivity, as it is typically measured in contemporary academia, does not entail a stream of genuinely important innovations or discoveries—and in some cases the opposite can be true.

Peter Higgs in front of a laptop
Peter Higgs (centre), winner of the 2013 Nobel Prize in Physics, has stated that he would not be able to get an academic job now, due to lack of productivity. Photo © European Union 2012 — European Parliament (used under licence)

Finally, there is evidence that the competitive grant-getting process helps to entrench gender inequality in academic success: women, who are often less aggressive at self-promotion, can find themselves at a disadvantage for reasons unrelated to their level of ability or dedication.

So here’s a suggestion: why not abandon competitive government-funded research grants altogether, and link all funding to research output instead? The government could simply say: “Here’s your allocation; go away and produce the best research you can, as a group, over the next five or so years, and then we’ll assess that to decide your next allocation”.

This would cause a shift in the pattern of incentives. With grant-writing success no longer an end in itself, and external oversight focussed on assessing output quality, investigators would have a clear motivation to assemble the best team of collaborators available to actually produce good science, and not just talk about it. Assessment would be primarily at the university or department level, thus creating a more favourable environment for productive team science. And strong practical research skills would regain their value alongside, not secondary to, the ability to communicate the importance of that same research.

Clearly, getting research output assessment right is crucial in this arrangement—and perhaps the REF and its ilk are not the best way to do that. But that’s a discussion for another time.

Of course, policymakers would lose their main channel for focussing research effort, and the ability to quickly get an overview of what research is going on. But there are other, far less costly ways to compensate for these losses: research “bounties”, for example, might be a positive way to incentivise focus on areas of interest.

To be clear, I fully support transparency in academia, but I also think it can be achieved much less wastefully. Oversight is important, but it does not guarantee that final outputs will be of the highest quality, and it is influenced by politics at various levels. Does the “risk” that the odd blue-skies idea may come to nothing really outweigh the certainty that 70% of grant-writing effort will be wasted? Is competition more important than collaboration?

I want to inspire the junior researchers who work with me. I don’t want to feel like I’m exploiting them to pursue my future success in a zero-sum game. Of course, some administrative duties are bound to come with promotion in any modern institution, but a potential future full of academic politics and intellectual empire-building is a turn-off for many PhD students and post-docs. So let’s reconsider what’s important, and how to really get the best bang for the taxpayer’s buck.