Theories of change = logframes on steroids? A discussion with DFID

May 14, 2012

By Duncan Green

‘Theories of Change is just the latest attempt to shine a light on what lies behind, what makes everything work or fail. We constantly reach for new tools, but we keep alighting on small islands and losing the big picture.’ Jake Allen, Christian Aid

I recently spoke at a half-day DFID seminar discussing a draft paper by Isabel Vogel – ‘Review of the Use of Theories of Change in international development’. The draft is here (keep clicking) – Isabel wants comments by this Friday 18 May, either on the blog, or emailed directly to info[at]isabelvogel.co.uk. She is particularly looking for examples of documented theories of change (ToCs) originating in developing countries (as opposed to donor-funded programmes).

The level of interest was impressive – 40 DFIDistas in the room, plus 7 country teams via videocon and sundry NGOs and consultanty types. My overall impression was that Monitoring, Evaluation and Learning (MEL) is driving the ToCs discussion in DFID, and not always in a good way.

So in my allotted 5 minutes, I stressed that ToCs should not become a ‘logframe on steroids’ (a phrase nicked from Alfredo Ortiz), and the importance of power analysis and of ToCs as a permanent aspect of the planning cycle – not just for programmes but for policy and campaigns work. Plus their usefulness (albeit in different ways) in all four quadrants of the Cynefin framework (Simple, Complicated, Complex, Chaotic), rather than just in the simple/complicated quadrants preferred by development types. I also said we should throw away those horribly complicated ToC diagrams once we’ve finished them (lest they terrify those who follow).

The discussion confirmed these concerns. Lots of people (including many of the measurers) are fully aware of the risk and want to avoid it, but are struggling against powerful incentive structures that make it happen anyway (principally the results agenda, but also the difficulty of using non-linear ToCs in practice). Hivos, a wonderfully cerebral-but-practical Dutch NGO that has done a lot of thinking on this, talks about a broader range of ‘ToC thinking’ as a useful way to prevent it all being turned into just another toolkit (‘ticking the ToCs box?’). Rick Davies recalled that the logical framework approach was originally a separate exercise from filling in the logframe table, but it collapsed into the table thanks to the structure and working practices of the aid business. Might the same fate await ToCs?

What of the benefits? In addition to those discussed in previous posts, Joanna Monaghan of Comic Relief (a funder) sees ToCs as making explicit the hypotheses underlying funding decisions – ‘the rules of thumb we all carry around in our heads’. That allows partners to challenge them, if they think the funder has (gasp!) got it wrong. People also saw ToCs as making people look at the evidence and identify what is known/unknown (that rather alarmed me – what were they doing before?), but also as helping programmes adapt more quickly as new evidence emerges. From the MEL end, an explicit ToC also allows a discussion with beneficiaries on what indicators to measure progress against (rather than the funder just imposing them from outside).
ToC challenges

When it came to the challenges of implementing ToCs, the big headache is how to balance donor accountability (reflected in the pressure for measurement and results, and for holding partners to account against pre-agreed plans) with the ability to use ToCs intelligently to learn and adapt to changing environments. ToCs are about people engaging intelligently with the complexity and nuance of context and process. But how do you rigorously assess the quality of people’s thought? The development community usually focuses on process and outcomes, whereas ToCs may demand something more like academic assessment of how deeply people are thinking about things. ‘Accountability has to be about trying hard enough. We never ask questions about critical thinking, only about delivery on a set of results which 5 years ago we thought we would be able to achieve.’ Stand by for quasi-professorial marks for project proposals (‘beta minus, must try harder’).

The more practical types worried over how you can balance constantly revisiting/revising a ToC with the need to get on and actually, you know, do something. One answer: pre-agree circuit-breaker reviews at, say, one and two years into the project, when everyone knows the ToC is up for grabs. Another: test (and fund) a series of ToCs in a pilot stage before deciding on a final ToC – a bit like the DFID-funded research programme consortia, which include (and finance) an ‘inception phase’ during which the recipient is allowed to test and finalise their research plans for the subsequent years. Perhaps there also needs to be a clear process for designated people to have access to a ‘red button’ change of direction in response to major contextual shifts that require a rapid revision of the plan (‘Mugabe dies’).

If failure is indeed a source of ideas, we need to create a safe environment in which to recognise, communicate and learn from it. That requires a shift in culture and incentives – e.g. circuit-breaker reviews must include a convincing discussion of failure and what we’ve learned. If a project can’t demonstrate failure as well as learn from it, it probably isn’t trying hard enough.

Another plea from the practical peeps: can we separate out communities of practice from communities of theory? Otherwise the practitioners are cowed into silence by the theory wallahs sounding off (who could they be thinking of?).

One final random thought: is this (i.e. funding projects with plural ToCs, a greater appetite for the risk of failure etc) a suitable role for philanthropic foundations, which are more able to take risks on failure than publicly funded donors?
