What is different about how INGOs do Adaptive Management/Doing Development Differently?

February 8, 2019

By Duncan Green

Earlier this week I chaired a fascinating discussion on the findings of a new paper on an adaptive management (AM) experiment by Christian Aid Ireland (CAI). The paper really adds to our knowledge of AM/Doing Development Differently:

  • It looks at the work of an INGO, whereas most formally identified AM practice and research involves big bilaterals like DFID
  • Linked to that, the focus is on how AM interacts with the partnership model that lies at the heart of CAI’s work – especially partnership with local NGOs. Most AM is either working directly with the state, or has a more top-down relationship with NGOs.
  • The CAI programme targets seven conflict- and violence-affected states – most AM work has so far taken place in relatively stable settings
  • It is one of the first attempts to apply AM to an organizationally complicated programme – most previous AM projects have involved a single project with a single management structure. This one spans several countries, is multi-tiered, and covers a complex set of relationships and topics – governance, human rights, peace-building, gender equality.

According to Gráinne Kilcullen of CAI, four innovations characterize the programme:

  1. A Theory of Change was developed with each partner in each country, and at country programme level. The ToCs are built on the answers to an admirably simple set of questions: What change do you want to see? What are your assumptions about how that change will happen? What initial strategies will you try out? How will you know whether they have worked?
  2. Strategy Testing, taken from the work of The Asia Foundation: a process of reflecting on and adapting the theory of change every six or twelve months. Are these assumptions still true? Are our strategies helping? CAI supports this process in country, along with ‘critical friends’ from outside the programme. The discussion also helps CAI feed the results machine.
  3. To bring in evidence, the programme uses Outcome Harvesting to capture results as they occur.
  4. There is a focus on bringing in community perspectives: discussions with programme participants ask what changes are happening, why, and what could be improved.

Lessons from the previous five-year programme under Irish Aid showed that partners found target setting too restrictive, given the complex issues and contexts. So CAI asked Irish Aid to try something different in the next programme phase (2017-2021). Irish Aid agreed, but wanted to keep a traditional matrix of targets and a results framework. The compromise was that the CAI core team in Dublin took on the reporting burden, drawing on information from the outcome harvesting and strategy testing processes, thereby trying to free partners from the linearity and pressures of traditional aid.

The discussion on the role of partners was one of the most interesting aspects. In some places, eg in Israel and the Occupied Palestinian Territory according to Christian Aid’s Alicia Malouf, the reaction was ‘this is exactly what we need. Before it was all results and numbers – but this allows us to capture how we respond to, eg, unpredicted actions by the Israeli government.’ Alicia added, ‘Partners like outcome harvesting. It allows them to step back, analyse their contribution and communicate to other donors as well as us. When we are not sure if we contributed to some changes, outcome harvesting allows us to say we may have helped – it produces greater richness and understanding of partners and the challenges they face. Partners are slowly realizing that we are funding them to sit, think, reflect, not just report on targets. It is an opportunity that few others encourage.’ Some are asking for guidance on doing strategy testing for themselves and with their own partner organizations.

In other cases, partners struggle: they may have started out as instinctive ‘dancers with the system’, but decades of traditional aid have to some extent rewired them to work and think linearly, especially at field level. If they are single-issue NGOs, it is very hard for them to say ‘we specialise in budget monitoring, but that doesn’t seem to be working, so what do we do instead?’ In addition, partners typically rely on multiple donors, and if the rest are all doing traditional aid, it is hard to be truly adaptive in just one part of your work.

Three final thoughts from me:

  1. Is AM designed for a high trust or low trust environment? To empower good, dynamic partners by giving them the flexibility they need, or to introduce accountability and control mechanisms for when recipients turn out to be a bit rubbish? In practice, there will always be a mix of both kinds, and AM design needs to combine the two approaches, but people tend to argue on the basis of one or the other. Not helpful.
  2. If programmes like these really do better than traditional linear programmes, then the pressure for results and value for money (VFM) should be our friend. But people experience the opposite – why is that? One speaker thought it was because, in DFID’s 4E model for VFM, economy and efficiency in practice take precedence over the other two Es – effectiveness and equity. Chuck in a donor preference for predictability and ‘no surprises’ and AM is likely to struggle. But I still think the answer is to try to win the battle on results and VFM, not reject them.
  3. Once again, the question of delivery models came up. Aid is starting to resemble a supermarket supply chain of connected players: donors, fund managers, implementers, NGOs, civil society organizations and communities. If AM is to prosper we need to understand the drivers and blockers in that chain – how each of them reacts to it, how it interacts with their incentives.

Great stuff – please read the report if you can.
