Payment by Results: What is the Evidence from the First Decade?

October 30, 2018

By Duncan Green

Paul Clist, who actually seems to enjoy reading project documents, introduces his new paper on Payment by Results, a popular new aid mechanism (see also his 2016 post on the same topic).

In a new paper, I argue that despite its public support for the idea, DFID hasn’t really tried Payment by Results, at least not in the way its proponents would define it. Of course, DFID does have several projects that look a lot like PbR. They have even more where PbR is something of an added extra. But amongst the available evidence, I couldn’t find a single full PbR project.

So it might not be surprising that my overall finding is that there’s “no evidence that PbR leads to fundamentally more innovation or autonomy, with the overall range of success and failure broadly similar to other aid projects”. Part of the battle over PbR in the coming years will be about whose fault this is. Is the problem with donors, for failing to implement a good idea? Or is the problem the idea itself? Regardless, we’ve already learned some useful lessons along the way.

Lots of Views, Little Evidence

‘Payment by Results’ is a phrase which provokes some pretty strong reactions on this blog. For some (e.g. the Center for Global Development), Payment by Results (PbR) is a fantastic idea with the potential to transform aid. Freed from the need to track inputs, aid recipients have the freedom to innovate. Donors will reward those that deliver results, making stretched aid budgets go further. For others (e.g. Robert Chambers), PbR is symptomatic of a perverse shift from participation to petty control. Donors set targets that have little relation to ‘real results’, and PbR merely helps sell aid to a sceptical public.

Stepping back, this debate is in an odd place. The two sides don’t just seem to disagree about PbR, they seem to be talking about completely different ideas. Does PbR mean a reduction in red tape, or is it yet another bureaucratic hurdle? Does it provide freedom or a strait-jacket? Does it deliver concrete results, or a mirage? There are sensible reasons why the two sides are so far apart: there is very little good evidence. Such divergent views can only coexist among committed and sensible people because the evidence cannot currently decide between them.

Not only is the evidence thin, it’s fragmented. Two people arguing about PbR may not recognise each other’s working definition of it. My own view is heavily influenced by the first such project I evaluated: a Results Based Aid project, in which the UK government agreed to pay the Rwandan government for any improvements in the numbers of students sitting key exams, over three years. Other people hear PbR and think of contracts with small NGOs, or health workers, or multilateral organisations. Instead of three-year contracts with three rounds of results, they may think of longer-term contracts with many rounds of results. PbR means different things to different people.

With thin and fragmented evidence, people are bound to hear more from their own sector. If you’re in the WASH sector you may be very aware of a certain kind of PbR, and blissfully ignorant of an entirely different set of acronyms, practice and results.

Another PbR paper, this time with added evidence

My latest paper (here) tries to help in two ways. One, by discussing new evidence. Two, by being more systematic in the identification of that evidence. With input from DFID staff, I went through all the PbR projects that DFID fully or partially funded. Of those 20 projects, I was able to identify useful evidence from 8. I included everything that gave sufficient information on what was actually achieved. (If you’re interested in the underlying evidence, see appendix 1 of the accompanying DFID document.)

This meant I excluded some promising projects that only had ‘lessons learnt’ type documents. I also couldn’t include one that was cancelled ‘due to suspicion of fraud’. What’s left are the boring but representative projects that make for bad gossip. Relevant evidence that isn’t DFID-funded, or that wasn’t available at the time, is also excluded. At each stage, I invited comments from the relevant DFID staff on any material I was missing. This approach means your favourite project might be missing, but hopefully it also means the evidence is more representative.

With these 8 projects in hand, I then dusted off my MAP framework. Every PbR agreement has a Measure, an Agent and a Principal: in other words, an agreement that determines the payment, and the parties that receive and give the money. This framework allows fairly disparate projects to be compared using common theoretical insights.
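For readers who think better in concrete terms, here is a minimal sketch (mine, not the paper’s notation) of what a MAP description of an agreement might look like; the Rwanda values come from the example above, and the structure itself is purely illustrative.

```python
from dataclasses import dataclass


@dataclass
class MAPAgreement:
    """Illustrative MAP description of a PbR agreement (an assumption, not the paper's notation)."""
    measure: str     # the agreed indicator that determines payment
    agent: str       # the party that receives the money
    principal: str   # the party that gives the money
    years: int       # contract length
    rounds: int      # number of results/payment rounds


# The Results Based Aid example described above, expressed in MAP terms
rba_rwanda = MAPAgreement(
    measure="improvements in the numbers of students sitting key exams",
    agent="Government of Rwanda",
    principal="UK government (DFID)",
    years=3,
    rounds=3,
)
```

Putting quite different projects (a government-to-government deal, a contract with a small NGO, a health-worker incentive scheme) into the same three slots is what makes them comparable.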

What does the evidence say?

The headline is that DFID hasn’t yet done the kinds of PbR contract that proponents envisaged. The idea was to set recipients free to achieve an agreed goal. They would have enough time, space, and incentive to find the best way to achieve this. Regular feedback against the measure would be the dose of reality required. Of all the projects I had access to, I couldn’t identify any that really matched this. Most often, they were too short-term. Sometimes there was little feedback. One thing I hadn’t expected was how often a measure could be too complicated to get the attention of the relevant NGO staff, government minister or health worker.

A lot of the evidence is a kind of negative evidence. Like a cake that’s missing eggs, we see projects that didn’t deliver because they only had 80% of the required ingredients. Unfortunately, you don’t get 80% of the results in this case. If donors want to go for ‘Cash on Delivery’ aid in future (what I call ‘big PbR’), they need all the ingredients.

There is a temptation (which I think proponents have mostly avoided, to their credit) to blame donors for not implementing the idea well. Personally, I think we should be a bit more pragmatic than that: donors are constrained, and PbR is only a good idea if it is practically possible to implement given real constraints. So far, it hasn’t been.

So, is it all bad?

No. In the review, we see evidence of very successful PbR projects. These are all ‘small PbR’, where a larger project had an element involving a PbR contract. Where this targeted the right thing, it seems to have worked. However, knowing whether you’ve targeted the right thing is difficult. Some recipients are incentivised, pay more attention and achieve impressive results.

For example, in a Ugandan Results-Based Financing (RBF) project in the health sector, areas that were paid according to PbR outperformed a control group by 2.5 times. But others are incentivised, pay more attention and miss the target completely. For example, in a health project in the DRC, workers lost a third of their take-home pay because their extra effort didn’t translate into extra results.

Will the real PbR please stand up?

There is enough evidence in the paper to provide ammunition for both sceptics and enthusiasts: see the paper if you’re interested.

With ‘small PbR’, we should expect to see more examples along both of these lines – PbR will not fundamentally change the range of project success or failure; rather, it will act as an added extra. Sometimes the benefits will outweigh the added costs, and in other cases they won’t. As the evidence base grows, we may learn more specifics that increase the likelihood of success (e.g. by identifying good measures), but the upside is relatively limited.

It is of course possible that ‘big PbR’ works, but just hasn’t been tried yet. Whether it is even possible to implement remains to be seen.
