Can impact diaries help us analyse our impact when working in complex environments?

July 8, 2013

     By Duncan Green     

One of the problems with working in a complex system is that not only do you never know what is going to happen, but you aren’t sure what developments, information, feedback etc. will turn out (with hindsight) to be important. In these results-obsessed times, what does that mean for monitoring and evaluation?

One answer is to keep what I call an ‘impact diary’, where you dump any relevant info as it passes across your screen/life, so that you can reconstruct a plausible story of impact at a later date.

What might such a diary (more like a folder) contain?

  • Business cards (one former head of our Pan Africa programme used number of business cards collected by staff as a performance indicator, increasing the pressure on them to get out and network)
  • Feedback (emails, press coverage, conversations)
  • Feelings at the time – apprehensions, enthusiasms
  • Critical Junctures: big decisions, why, when, what
  • List of things that people say that need more thought (so they don’t get lost)

I ran this past some of our monitoring and evaluation wallahs. Claire Hutchings added some general guidelines on what to record, focussing on how to make it intelligible to people like her who come in later and have to evaluate a campaign that can often be heavy on jargon and wonky detail:

  • Describe who changed, what changed in their behaviour, relationships, activities or actions, when, and where, and explain why the outcome is important. The challenge is to contextualise the outcome so that a reader who does not have country and topical expertise will be able to appreciate why this change in a social actor is significant. How do you know that the outcome was a result (partially or totally, directly or indirectly, intentionally or not) of our activities?
  • Watershed moments should also be recorded, along with some contextual information (the key factors, key actors etc.) that will help to explain them in the future.

Our campaigns MELista, Kimberly Bowman, focused on the how rather than the what. If you set up an elaborate system that requires people to enter loads of data, you can be pretty sure it won’t happen. So the key is to redirect existing flows of information, chatter and analysis into a ‘bucket’ that evaluators or campaigners can analyse later, when things aren’t so frenetic. The simplest approach is to add a bucket email address to all the existing group emails, and do something similar for Twitter and blogs.

Kimberly also suggested some new software (at which point, this post crosses my IT frontier and moves off into the outer darkness):

‘For the World Bank land freeze campaign (Oct-April this year), we used ‘Evernote’ to collect all of the team’s internal emails. Evernote has a nice webclipping tool, apps for phones, group data-sharing options, tagging, and optical character recognition for searching PDFs and Word docs, so it seemed like an interesting thing to test.’

Any other suggestions, either on the what or the how?

[Cartoon: correlation v causation]
