Rethinking monitoring, evaluation and learning in complex systems

June 8, 2022

By Duncan Green

Two interesting recent posts on Adaptive Management, complexity, and related topics, which the authors have kindly allowed me to repost here. First up is Søren Vester Haldrup, from UNDP’s Strategic Innovation Unit, wrestling with the issue of measurement and learning. Original post here. Tomorrow Tom Aston provides a great overview of where we’ve got to on adaptive management.

Two years ago, we launched a series of strategic investments focused on helping UNDP and our partners figure out how to better understand and tackle complex systems challenges. We did this with generous support from the Danish Ministry of Foreign Affairs.

We call these investments Deep Demonstrations, as they are meant to show (to both UNDP and our partners) that it is possible to do development differently when facing complex problems and high levels of uncertainty. In these Demonstrations we have supported colleagues in countries across the world to apply systems thinking and portfolio approaches to challenges such as the green transition and the future of work.

In this process, we have learned that using systems and portfolio approaches requires very different ways of doing monitoring, evaluation, and results measurement. This is rooted in the realization that we cannot hope to transform complex systems through business-as-usual approaches founded on linear project-based modes of planning, management, and evaluation. UNDP’s new Strategic Plan emphasizes this and also calls for a shift in how we do M&E.

We therefore need to develop new ways of doing monitoring and evaluation (M&E) that are coherent with the complex nature of the challenges facing the world today. When it comes to M&E, I think at least four issues are worth noting:

  • Need to learn and adapt: Because we don’t know up front how to best help solve complex problems such as transitioning our economy to a circular model, we need to continuously learn and adapt what we do based on learning.
  • Adopt longer time-horizons: We need to better deal with the fact that it takes a long time for substantive change (higher-level results) to materialize, and that we do not necessarily know up-front what such change will look like. This makes it difficult to know if we are on track and whether we should do anything differently.
  • Capture impact in the aggregate: We cannot evaluate individual interventions in isolation because we usually tackle systems challenges through portfolios of interconnected interventions.
  • Focus on contribution over attribution: We should focus on capturing our contribution to bigger change processes rather than seek to directly attribute change to our own work. Coe and Schlangen explain this well: 

In reality, contribution is not singular and additive; instead multiple interacting causes make an effect more likely. Recognizing this, we should shift the lens from the “amount” of contribution a single actor makes to an understanding of the typologies of the different actors and how they combine to contribute to change.

Over the past 12 months I have spoken with many people and organizations across the world who are also trying to tackle or measure complex systems challenges and it turns out that most of them are grappling with these same challenges.

Some are much further ahead than UNDP while others are just beginning their journey.

For instance, the Bill & Melinda Gates Foundation is investing in building an evidence base around how to document systems change in areas such as food and agriculture, and Co-Impact has developed a learning, measurement, and evaluation guidebook for systems change.

Meanwhile, the Cynefin Centre and Climate-KIC offer useful thoughts on developing transformative theories of change for complex systems, while the Small Foundation has developed a framework for measuring and managing impact networks.

Blue Marble Evaluation is rethinking the role and approach of evaluators when it comes to global systems change, and organizations such as UNFPA and the Open Government Partnership are deploying developmental evaluation approaches to help them continuously learn and adapt in the face of complexity.

Furthermore, the adaptive management community has built a solid evidence base and practice, while a variety of publications such as CEDIL’s Methods Briefs and work by Aston and Colnar discuss complexity appropriate evaluation methods. Lastly, outfits such as the Transformative Innovation Policy Consortium, the Rockwool Foundation’s System Innovation Initiative, and FSG’s Water of Systems Change work offer useful conceptual frameworks for thinking about what to measure when documenting systems change.

Despite encouraging progress within and beyond UNDP, our experience over the past two years has made us painfully aware that our current processes, incentives, and ways of working go against the grain of a stronger focus on systems, learning, and adaptation.

We hear this again and again from colleagues in countries such as Bolivia, Ghana, Palestine, and Serbia who are leading systems and portfolio focused work in our Deep Demonstrations.

For instance, M&E is often seen primarily as an accountability (compliance and reporting) function rather than also being about generating timely and useful learning (Toby Lowe from the Centre for Public Impact discusses this challenge in his piece “Made to measure”).

Similarly, our project management systems and procedures tend to be rigid and lock in programming rather than make it flexible.

Furthermore, structural and organizational incentives in UNDP (and in other organizations) discourage learning and higher-level results measurement in the aggregate (across projects in a UNDP country office), because financing and M&E resourcing is usually project-based (each with a separate donor or source of funding). This tends to lead to activity and output-level reporting within projects rather than a view on how combinations of projects collectively contribute to change (how they add up to more than the sum of the parts).

Join us in the Sandbox

UNDP has set up an “M&E Sandbox” to nurture and learn from innovative M&E efforts that we hope can help address these challenges.

The Sandbox was originally intended as an internal (corporate) space for experimentation to support M&E innovations already emerging across UNDP. However, having experienced the strong appetite among partners for this type of community, we decided to progressively open up the space for others to join.

In the M&E Sandbox we collectively explore approaches that privilege continuous learning and adaptation, see learning as a result in itself, take a systems view, and measure progress and aggregate results across a portfolio.

The Sandbox is a structured process of exploration where UNDP teams and a variety of partners can get support to collaboratively explore, design, and test approaches better suited to a portfolio and systems programming logic. Under this initiative we will explore questions such as:

  1. How do we document / measure change in complex real-world systems?
  2. How do we measure progress and results in the aggregate across a portfolio of our own interventions or across what many different actors are doing? How do we know if change is really occurring and whether we are impacting the intended beneficiaries?
  3. What capabilities, tools and approaches are useful for this type of work? What counts as good (enough) evidence and data?
  4. How do we design and implement M&E frameworks and practices that allow us and others to continuously learn from, adapt, and accelerate our efforts to transform complex real-world systems? How do we track how frequently our portfolios adapt and (re)generate interventions?
  5. How do we design and implement accountability frameworks, metrics, or practices that allow for emergence, flexibility, and adaptation, and that shift the focus from individual interventions towards portfolios as the primary unit for accountability? How do we ensure these recognize learning as a result in itself?

This global initiative is open for everyone interested in pushing the boundaries on M&E and we are actively looking for allies, collaborators, and partners.

Organizations, groups, and individuals are free to join the initiative with varying degrees of involvement depending on their needs, bandwidth, and interests.

A range of organizations have already expressed interest in being part of this initiative, including UN agencies, bilateral donor agencies, philanthropic foundations, as well as civil society and private sector outfits.

In the coming months we will launch a series of events to kick off this effort. Please reach out to me if you would like to learn more or if you are interested in getting involved!



  1. It’s always heartening to see from the literature you share how many people are realising (once again) that simple linear approaches do not meet the complexity of life in the way that an adaptive learning/innovating approach may. You know as well as I do that a lot of this is (good) old hat, although dressed in evolving language. It seems like important wheels do need to be reinvented by new generations of practitioners, so that’s OK. However, I find that how to actually do “adaptive management” through rigorous and disciplined approaches to team-based practice learning (e.g. how to actually facilitate action learning/research, tell/write stories of practice, conduct deep learning dialogues, create learning cultures etc.) is almost completely absent from the same literature, to the extent of not even being named, and I wonder whether these are somewhere else or if people just take them for granted as easy enough to do. This resonates with my general experience of the development sector: there is no shortage of innovative policies, strategies and tools, but not enough understanding of change practice itself, of the real work of facilitating processes of emergence. If we are going to let go of simplistic project approaches then we have to invest in this.

  2. It is worthwhile to get these occasional reminders about the need to “develop new ways”; yet it is frustrating that all such advice has been floating around the realm of development for decades, usually from seasoned field practitioners. Academic research in collaboration with field practitioners has been reasonably good; but a stumbling block I have found has been with three sources:
    1) Government bureaucrats with no field experience overseeing donor projects;
    2) Executing agencies which are for-profit firms having high overheads and little desire or patience for admitting obviously untenable efforts, let alone aborting unsuccessful projects;
    3) A persistent disconnect between the “ideal” and the necessary funding to achieve that “ideal” result and impact.

  3. All these new ways of looking at problems, solving them, and learning from them are welcome. But what do they mean for the people whose lives they impact? Probably very little. How those people view their problems and their solutions has not changed much since all the efforts to change them through development projects began. But our ways of reaching them keep undergoing a bewildering amount of change.

  4. Thanks very much for such an insightful article. I do concur with your assertion that M&E has become nothing more than compliance & monitoring, especially in government. Change is needed & articles like this will certainly help.

  5. Very much agree. Adopting the mindset means adopting a logic that is still counter to most of the principal-agent models underpinning results-based management approaches. We can’t just ‘add adapt and stir’; it requires very conscious efforts to invest resources (time and money) in shifting traditional MEL power relationships.
