Blue-sci research

Resolving the THE debate requires research into blue-sky research

Last night, the Times Higher Education ‘Blue skies ahead?’ debate brought together government science minister Lord Drayson and a panel of young scientists, including Oxford Physics’s own Suzie Sheehy, and the Twittersphere, to discuss the future of UK research funding.

The discussion was fairly unfocussed and more than a little ranty, as a handful of disgruntled scientists and teachers proceeded to lambast STFC, the new ‘impact’ assessment integral to grant applications, and science education and outreach. One couldn’t help but feel a little sorry for Drayson, who seemed rather to have been the victim of an ambush.

The biggest and most interesting question up for debate was that of how we should go about allocating money to scientific research. However, though there was plenty of unnerved squirming over research grants drying up, no-one addressed the big questions of how to justify the overall size of the science budget, and how we should divide it between disciplines.

The biggest target of ire was (probably) the new-fangled necessity to justify the ‘impact’ of your research as part of the grant application process. Drayson justified this by saying that the statements provided helped fight the corner of researchers: ‘Impact assessment,’ he said, ‘is needed to help defend the science budget against those who would rather spend the money on something else.’ The question, of course, is how many of the hundreds of thousands of words of impact assessments written will actually make it into a given parliamentary debate—or, less cynically, how we can condense the reams of qualitative information provided into a useful measure of the benefits of our aggregate research strategy.

Many of the comments from scientists deploring the introduction of ‘impact’ assessment seemed to come at it from the perspective of the persecuted: the implication seemed to be that this new criterion would see their research being cut. Firstly, this confuses me: does every scientist think that they are doing research with abstract and unquantifiable benefits? Is there a crack army of buzzword-tastic, short-term-impactful, applied researchers waiting in the wings to snatch all the funding from beneath the highly theoretical old guard’s noses? Since there is no accompanying overall cut in research funding—other than, with unfortunate timing, the ones which were coming anyway—why is everyone expecting that it’s their research which will be dropped?

It seems to me that the most likely outcome won’t be a significant restructuring of the research landscape. Surely, if you have the expertise to propose a research programme and the lab to back it up, writing a couple of pages about why your research may have ‘impact’ isn’t much of a challenge. And, given that these statements will be peer-reviewed by sympathetic scientists, explaining that your research is fundamental and hard to quantify will probably elicit a degree of sympathy: scientists understand that basic research is inherently unpredictable.

So, then, if this isn’t a big deal, the question is why we’re bothering at all. The vocal part of the science community, in this debate at least, wants evidence that this ‘impact’ thing will help science; Drayson hits back that he wants evidence it will harm it; and scientists retort that you can’t prove a negative.

What we need, if we’re to answer the big question of how to assess methods of allocating research money, is some kind of metric. Against the view popular amongst scientists that some research outcomes are ‘priceless’, or at least totally unquantifiable, we must set the pragmatic need to assess how much funding science should receive overall versus defence, health, education and, ultimately, private expenditure as moderated through taxation; and then, how that pot should be split between physics, chemistry and biology, obviously-applied and possibly-useless, and so on. We need a way to measure the benefits of research—with evidence-based, probably-enormous, non-Gaussian error bars. If such an exercise is totally futile, let us find out by the scientific method, and not simply make hysterical objections to a well-intentioned, if possibly ill-founded, government initiative. We need to be able to make an objective assessment of impact statements versus the current system versus putting all the grant applications in a big spreadsheet and throwing darts at it…and so on. Without some numerical evidence, the debate degenerates into status quo bias and soundbites.

If such an assessment does indeed turn out to be impossible—and it’s certainly not inconceivable that it would be—then we need to ask ourselves the complex ethical question of what society is morally obliged to do when we don’t know what to do.

On a less intellectually grand note, I was also a little confused by all the comments regarding outreach—no-one seems to be able to get funding for their ‘out-of-the-box’ youth inspiration schemes. Call me woefully inside-the-box, but I don’t think it’s practical to take every A-level student over to CERN, and I can’t see many ways of engaging young people which aren’t basically talks, leaflets or posters. And, anecdotally, our talks-and-leaflets, explosions-and-beachballs science show Accelerate! got some dosh from none other than the squeezed STFC.

And finally, to finish with a dash of cynicism, Lord Drayson: though I am falling foul of my own strict criterion of requiring evidence, might I suggest that to be taken seriously by scientists, phrases like ‘a more flexible framework for assessing excellence’ should be purged from the lexicon at all costs!


  1. Martyn says (23:00 01/12/2009)

    I think the 'outreach' stuff was an important point. The 'playstation' generation just don't get the stimulation within the classroom environment... Leaflets and posters are not the answer. Demonstrations are a start, but nothing compares to seeing science in the real world. I agree CERN trips ain't practical, but local alternatives exist: everything from analysis labs to observatories and university facilities. If you want the next generation of scientists to be the best, they must be enthusiastic about the subject; this is where I think 'outreach' programmes will play a key role.


  2. Philip Moriarty says (00:45 02/12/2009)

    Hi, Andrew.

    Responding to your tweet: "@PMoriarty_ So why are you anti-HEFCE-impact? Leave a comment on the article! :) ":

    I could leave a very long comment here but I'd simply be rehashing arguments that have been put forward time and time again re. the HEFCE/RCUK impact agenda (which can be traced back to the 2004 "Next Steps" Treasury Science and Innovation document). Leslie Ann Goldberg at Liverpool has put together a very helpful webpage with many of the key papers and arguments - see .

    My particular concerns relate primarily to the corporatisation of university research and the associated erosion of core academic principles. See, for example, the papers listed below (and references therein).

    As you may have noticed at the debate on Monday (I was the mouthy Irish bloke at the back of the theatre), I get intensely wound-up by the continued accusation that those who criticise the impact agenda are ivory tower academics who don't care about return on taxpayers' investment. I care immensely about return on taxpayers' investment. What RCUK and HEFCE are implementing, however, goes hand-in-hand with Mandelson's vision of a business-led academy and will ultimately be to the detriment of publicly-funded science.

    - "Reclaiming academia from post academia", Nature Nanotech. 3 60 (2008); Free version available at

    - "The Economic Impact Fallacy", Physics World June 2009 -

    - "Mandelson fails to understand how science is done", Letters, The Independent Nov. 23;

    - "Public Science: A Public Good?", Nanotechnology Perceptions 4 101 (2008);

    Best wishes,


  3. Philip Moriarty says (00:50 02/12/2009)

    P.S. Entirely agree with your plea that certain empty phrases be purged from the lexicon. Is there any more nebulous a term than "excellence"? How many universities have mission statements committing them to maintaining excellence in research and teaching? What's the alternative? We strive for mediocrity (going forward)...?

    Best wishes,


  4. Philip Moriarty says (01:22 02/12/2009)

    P.P.S. And on the question of accountability to the taxpayer and corporate ethics, Dr. Drayson hardly has an unblemished record. See


    This is not an ad hominem attack (although I'll admit it might seem like it at first glance!). The key point is that our Science Minister has practically zero background in science but has reached his position in government due to his entrepreneurial skills. Drayson's history has an important bearing on his inability to appreciate just why so many academics are opposed to the imposition of impact criteria in the REF and in peer review. (> 13,500 academics have signed the UCU petition against the REF impact proposals. To put this in context, ~ 50,000 academics were submitted for the last RAE.)

    Moreover, there are important economic (and ethical) arguments as to why the state shouldn't be in the business of funding near-market research for multinational corporations. There has been a marked increase in Research Council "Strategic Partnerships" with many of these companies, driven by the focus on short-term economic impact. Some of these companies don't have a particularly good track record (..cough...) in terms of accountability or ethical behaviour (I'm sure you can list the key offenders just as well as I can. See, for example, ).

    Every now and again, I poke my head out of the ivory tower...

    Best wishes,

