A second instalment in Matthew Lockwood’s series of valedictory boat-rocking blogs (his first was on fossil fuel subsidies) as he leaves the IDS Climate Change team for a new role in the UK energy sector. This time, he asks why the results agenda often stops short of being applied to the big picture stuff like the MDGs.

One of the interesting things about having come back to the international development field after some years away is the greatly increased emphasis on results, across all areas of activity, including not only projects and programmes but also policy making, research, and advocacy. Many people and organisations are invested in the results agenda, including the big foundations such as Gates, influential bloggers like Owen Barder, my boss Lawrence Haddad, and DFID’s Secretary of State, Andrew Mitchell. In his first big speech in office, in Washington in June 2010, Mitchell said “we’re also fundamentally redesigning our aid programmes so that they build in rigorous evaluation processes from day one.”

Like many others, I think aspects of the results agenda are important, reasonable and politically wise, although there are also some interesting critiques of the approach. But I also think that, if you really take it seriously, it throws up some challenges and dilemmas. For me, this is clearest in the case of development’s big frameworks and policy directions. One prime example is the UN Millennium Development Goals (MDGs) and their proposed replacement with a new set of development goals after 2015.

As most readers will know, the MDGs are a set of human development goals, with subsidiary targets and indicators, formally adopted by the UN in 2000. There is pretty broad agreement that progress towards meeting the MDGs is partial and uneven – some of the goals have been met, or look very likely to be met, in some countries, while others (such as the target reduction in maternal mortality) may not be. Asia, especially East Asia, has done better than Sub-Saharan Africa.

However, applying the results agenda to the MDGs is not simply a matter of asking whether the goals will be met. Rather, it is about asking whether the goals have been met as the result of the MDGs having been adopted. The purpose of having high level goals, including any that come after the current MDGs, is to create political will, mobilise resources, and drive policy change and delivery, all of which should bring about a positive change relative to what would have happened in their absence.

Many in the aid world would say that, of course, the MDGs have had a major impact, and that it is absurd to even raise the question. However, a rigorous assessment of the evidence suggests that it is actually quite hard to make a strong case.

First, the evidence that the MDGs may have made a difference is, at best, mixed. The most comprehensive and rigorous independent assessment is by Andy Sumner and Charles Kenny for the Center for Global Development. They look for significant differences in outcomes and impacts before and after 2000, when the MDGs were adopted. The clearest effects were on aid levels (which are not an ultimate impact but an intermediate outcome). Compared with the previous decade, official aid increased in the post-2000 period, but not as a proportion of rich country GDP. More aid went to the poorest countries, including in Africa. There was a small shift in the share of aid going to the social sectors, on which the MDGs tend to focus, and this happened soon after 2000.
There is plenty of evidence of the influence of the MDGs on policy discourse, if this is measured by mentions of the goals or their presence in donor policy documents, PRSPs and developing country government goals. However, the effects on actual policy change are less clear. Sumner and Kenny find it “hard to detect a trend” in low income country government spending on health and education. They also find no trend in the quality of developing country policy making, as measured by the World Bank’s Country Policy and Institutional Assessment ratings.

On the actual impact indicators themselves – income poverty, malnutrition and mortality rates, educational enrolment and so on – Sumner and Kenny’s most relevant assessment is whether progress was faster pre- or post-MDGs, and whether progress post-MDGs has been faster than would have been expected on the basis of past trends. Again, the results are inconclusive. The data “suggest that in no case is there an obvious sign of a significant break towards faster progress since 2000. Nonetheless there has been somewhat faster global progress on income, primary completion rates, child and maternal mortality over the post-Declaration period”. A study by Fukuda-Parr and Greenstein of country level data gives a similarly mixed picture. The comparison with predicted rates of progress based on historical analysis implies slightly better than expected outcomes post-MDGs on primary education and gender equality in education, but worse on maternal mortality.

Second, there is the problem of attribution. As Sumner and Kenny put it, “even ignoring the very limited evidence of faster progress since 2000 in the average (unweighted) developing country, it is a considerable step from ‘more rapid progress’ to ‘the MDGs caused more rapid progress’”. In other words, bilateral aid may have increased somewhat and some indicators have improved, but how do we know that these changes are due to the MDGs and not to some other factor? It is not possible to know what would have happened in their absence; this is not a case where you can run randomised controlled trials across a number of interventions. And as Richard Manning points out, it is hard to separate out the potential effects of the MDGs from the environment that produced them.

In some areas, such as vaccination or primary education enrolment in sub-Saharan Africa, the links between the MDGs, the mobilisation and focusing of additional aid, and subsequent impacts seem convincingly close. But in others, the links seem less plausible, especially where there are good alternative candidates that may explain changes in indicators better than the effect of the MDGs. Poverty reduction in Asia, for example, is more likely to have been driven by the extraordinary period of sustained economic growth in China than by a set of UN targets. It is also plausible that China’s growth pulled along a number of other countries in its wake, including commodity exporters in Africa. The rapid reduction of poverty in Brazil is due in part to the development of social safety nets such as the Bolsa Familia. When I recently asked Romulo Paes de Sousa, Brazil’s former Deputy Minister for Social Development, who was closely involved in the design of the Bolsa, whether it was the result of the MDGs, he dismissed the idea immediately, saying it was the outcome of a domestic debate that emerged from the minimum wage.
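As an aside, for readers who like to see the mechanics: the kind of comparison Sumner and Kenny make – did an indicator improve faster after 2000 than its pre-2000 trend would predict? – can be sketched very simply. The snippet below is a purely illustrative toy with invented numbers for a hypothetical country, not a reproduction of their methodology; and, as the attribution discussion above makes clear, even a genuine break in trend would not by itself show that the MDGs caused it.

```python
# Toy illustration (invented data): compare post-2000 progress on an
# indicator with what the pre-2000 trend alone would have predicted.
import numpy as np

years = np.arange(1990, 2011)
# Hypothetical primary completion rate (%) for an imaginary country
completion = np.array([60, 61, 62, 63, 64, 65, 66, 67, 68, 69,                   # 1990-1999
                       70, 71.5, 73, 74.5, 76, 77.5, 79, 80.5, 82, 83.5, 85])    # 2000-2010

pre = years < 2000
post = ~pre

# Fit a linear trend to the pre-2000 data and project it forward
slope, intercept = np.polyfit(years[pre], completion[pre], 1)
projected = slope * years[post] + intercept

# Positive gaps mean progress faster than the old trend would have predicted
gaps = completion[post] - projected
print(f"Pre-2000 trend: {slope:.2f} percentage points per year")
print(f"Mean deviation from that trend after 2000: {gaps.mean():.2f} points")
```

Even where a calculation like this shows faster-than-trend progress, it says nothing about why – which is precisely the attribution problem.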
Yet despite the lack of clear, strong evidence of the impact of the Goals, and the difficulties of attribution, the MDGs are routinely hailed as a success. Most importantly, this success is asserted in the context of discussion about a new set of post-2015 development goals. When it was announced that David Cameron would be co-chair of the UN High Level Panel on post-2015 goals, Andrew Mitchell hailed the “huge progress that has been made through the Millennium Development Goals” and “the successes of the current goals”.

When challenged with the point that attribution is often difficult in cases such as these, and that you can’t compare counterfactuals, many proponents of the results agenda recognise the problem. However, their argument is that, in such circumstances, it is the duty of those proposing any particular approach to be explicit about their “theory of change” – that is, to be explicit about the full chain of causal linkages you think is going to run from your intervention (here, adopting international goals) to the impacts you hope for. Identify your assumptions. Assess the evidence for and against those assumptions, and weigh up the risks.

If done properly, this wouldn’t be just about ticking a box. The point of such an analysis should be to help understand how to make such goals more effective. It should look at why some goals were easier to meet than others (gender equity in education as opposed to access to clean water or reductions in maternal mortality), and in some countries than in others. It should look in a systematic and rigorous way at how the goals were used (or not used), and, where there is evidence that they failed to lead to a result, explore alternative, potentially more effective “pathways to impact”.

The point here is not that the MDGs are somehow a bad thing, or that there should not be a new set of goals. In any case, it is not seriously in question that there will be further goals of some form after 2015. Too much political capital has been invested in them for that to be in doubt, regardless of the ambiguity of the evidence base. The results revolution will not change the reality that policies and initiatives are often driven by more than evidence, and that politics plays a major role. Nor am I advocating a view that we should not try to measure impact or wrestle with the problem of attribution. What I am saying is that the example shows that really, really applying the agenda of results and evidence-based policy consistently and rigorously can be more difficult than the current discourse acknowledges.

Matthew Lockwood is a Research Fellow at the Institute of Development Studies at the University of Sussex. From October 2012 he starts work on a four year project on innovation and governance in the UK energy sector.