Measuring academic impact: discussion with my new colleagues at the LSE (joining in January, but not leaving Oxfam)

September 26, 2014

By Duncan Green

From the New Year, the London School of Economics International Development Department has roped me in to doing a few hours a week as a ‘Professor in Practice’ (PiP), in an effort to establish better links between its massive cohort of 300 Masters students (no undergrads) and ‘practitioners’ in thinktanks, NGOs etc. So with some newbie trepidation, I headed off this week to meet my new colleagues. Some impressions:

Potential role model?

Overall, an academic awayday is not that different from the NGO variety. No-one stays on topic or on time; lots of passionate arguments. But at least they’re quirky, eccentric and pretty funny (sample: ‘here you are, a PiP surrounded by lemons’, c/o David Keen). And dead clever of course.

Interesting range of approaches, from quants crunching big datasets to philosophical types musing over the origins of mass delusion.

The introduction of fees has contributed to some pretty fierce competitive pressure and greater accountability to students (a good thing too). The results of student surveys are compared with rival universities, with lots of agonizing when you fall short (eg on quality of feedback to students). Research rankings and, of course, student applications also get a lot of scrutiny.

Academia is under huge pressure, just like the rest of us, and many of the pressures will be familiar to any aid wonk – notably the endless skirmishing over whether to measure the impact of academics’ work, and if so, how. Measurement is fiercely resisted by some, who argue that pushing academics into areas where they can demonstrate impact, typically on government policy, risks turning them into glorified consultants.

But it is welcomed by others and in any case, is probably unstoppable – the UK government allocates 20% of its research funding to universities according to evidence of impact (and that is likely to rise in the next funding round, scheduled for 2020), and the vast majority of other funders make similar demands. My advice, drawing on similar discussions in the aid business, is that rather than simply being a refusenik, it is smarter to engage with the measurement agenda and push it towards measuring what matters, not just what’s easy to measure.

In practice, it is the established academics who find it easiest to demonstrate impact: they have had time to establish networks and reputations. Feeding the impact machine could easily end up being something the old guard takes on (albeit with lots of grumbling), in order to let the young bloods get on with building their careers.

If only it was that easy

Attribution is a headache: academics are forced to chase down ‘testimonials’ from officials saying how useful their research has been (DFID has a policy of refusing to provide such bits of paper), but these are seen as a last resort compared to citations, quotes in speeches and documents by governments or other targets.

To help assess impact, several pretty dodgy (and conflicting) metrics are being bandied about, adding up individual academics’ publications, downloads, citations and even (love this) ‘esteem indicators’. If the various scores are ever taken seriously, they are crying out to be gamed (I’ll cite your paper if you cite mine).

It was all a little redolent of ‘paradigm maintenance’, a term coined by Robert Wade, who was in the room. Robert decried ‘the emphasis on utilitarian knowledge to strengthen existing power structures. Instead of critiquing the status quo, we have to help them do their jobs better!’

This got me thinking about Matt Andrews’ view of institutional reform, which he sees as passing through an initial stage of ‘deinstitutionalization’, in which critics should ‘encourage the growing discussion on the problems of the current model’. That may be a vital contribution of research, but which government official is going to credit your broadsides and ridicule as the reason for a change in policy? The net result could be to tilt the balance towards impact that is ‘chummy, immediate and easily attributable.’

But despite the lengthy arguments on impact, it was stressed that the priorities remain a) students and b) quality research – impact is third (reassuring, I think). And if it encourages people who are ‘naturally bookish’ to get out and spread their knowledge, that has to be a good thing, right?

Overall, there were a lot of similarities with the results/value for money debates in aid land, except that in aid there is no equivalent of ‘pure research’, and here the counterarguments are backed up by scary levels of erudition – Foucault et al. Glad I don’t have to persuade them.

All in all, a reassuring day – looking forward to January.
