I’m off to Papua New Guinea in a couple of weeks in the role of ‘critical friend’ (more on that weird job description in due course) to a big Aussie-funded aid program (the A$87m Building Community Engagement in PNG Program) run by DT Global (as Cardno is now called). They’ve just published an excellent guidance note on Adaptive Management, written by Jane Lonsdale, with a focus on how to make it practical. It also has the coolest cover I’ve come across in the AM literature.
I would recommend reading the whole paper, which is commendably clear and free of waffle, but for this post I’ll pick out a couple of highlights. First, it stresses that AM is not always the answer:
‘Adaptive management is not applicable in all programs and should not be undertaken lightly. It requires significant strategic and day-to-day management and therefore can be resource intensive’.
There’s some interesting stuff on applying AM in big programs like the one in PNG.
‘Adaptive management has mainly been used as a whole program approach, with varying success, often on medium-sized programs up to around A$25m (US$15.8m). However, it could be feasible to apply adaptive management to one or more components of a large-scale program if a complex, contentious and/or political component requires it, and the program leadership can create sufficient autonomy and flexibility of systems.
A desire for fully adaptive management across a whole program is usually a decision made at the design point, which then needs to be followed through during the contract negotiation stage to ensure that the program’s technical team works closely with the corporate team to negotiate the flexibility and autonomy required to deliver the design. Steps towards adaptability can be helpful to program effectiveness even when a fully adaptive approach isn’t appropriate. The intent is to support programs to be clear about their desired adaptive level, and able to articulate and move towards that.
Programs can begin by assessing how far adaptive management is needed and then working through what steps are required to achieve this, in agreement with the client. Many prominent adaptive management examples come from the governance field, which inevitably work with complex political problems. Gender, education, water, and waste management programs have all succeeded in managing adaptively. Good examples within DT Global include the Centre for Good Governance in Myanmar (concluded 2021), Vanuatu Health Program, ASEAN-Australia Counter Trafficking program and Balance of Power in the Pacific.’
The paper identifies four essential elements of Adaptive Management:
‘Flexibility is our capacity to adjust (for example strategies, plans and resources) in response to contextual change and learning, plus the absence of constraints that force programs to stick to predetermined plans. Generally, this approach requires support from our client and delegated authority to manage adaptively.
Responsiveness is about engaging deeply with context consistently throughout the program, proposing and testing what we think could work and ensuring that our plans reflect and are shaped by our understanding of local politics and power.
Purposive learning is about review and reflection, including testing and updating the program rationale and assumptions, which leads to strategy and programming decisions and adjustments as our understanding and influencing position evolves.
Culture is about having a team that feels confident and empowered to work with these approaches, which includes having supportive ways of working in place with the client and trust across key relationships.’
It provides a useful spectrum showing how these apply across non-adaptive, minimally adaptive and highly adaptive programs:
It then digs deeper into each of these, with a super-helpful table comparing ‘not adaptive’, ‘minimally adaptive’ and ‘highly adaptive’ approaches across the four elements of Adaptive Management. For reasons of space, here is just one of them – the Purposive Learning bit:
| | Not Adaptive | Minimally Adaptive | Highly Adaptive |
|---|---|---|---|
| Theory of change | Prescribed through the design process and expected to be fixed throughout the program. | Developed during the design process, usually only updated at the approach level during periodic reviews. | Developed over time, becoming more grounded and sophisticated; assumptions and hypotheses updated systematically during learning reviews. Nested mini TOCs for each component / pilot / activity. |
| Learning approach | Annual or less frequent learning reviews; may have little consequence for approach / activities. Primarily proving success. Led by externally contracted, objective, outside reviewers. | Conduct multi-stakeholder learning events once or twice a year and adjust work plans based on this. May be externally or internally led and facilitated. | Sophisticated 3- or 6-monthly reviews of project TOC, nested TOCs and decisions on what to Drop, Adapt, Keep, Improve (DAKI). Emphasis on experimentation, learning as we go, requiring acceptance of pilot / activity failure. Internally led. |
| Monitoring | Guided by reporting on results-based MEL framework / against original design. | Guided by reporting on results-based MEL framework with intentional space to capture deviations. | Guided by the need to make frequent DAKI decisions. Measures incremental change. Measuring multiple pilot / pathway changes simultaneously. |
| Evaluation | Guided by logframe outcomes and indicators to demonstrate attribution / contribution. External evaluation, generally only at mid-point and end. | Guided by program logic with some space to reflect on deviations. | Identifies system-level outcomes and focuses on contribution analysis. |
Cracking stuff, and I will of course be reporting back from PNG in due course.