Development economist Paul Clist discusses some of the ideas from his new paper (link to paywalled article version, link to free draft version)
Payment by Results (PbR) is a fairly new idea in aid, where a donor decides how much money to disburse on the basis of how much a recipient has achieved against a target. For example, a donor could pay an NGO for the number of wells it installs, or a government on the basis of its vaccination rate. I’ve got a new paper that puts some meat on the bones of the standard development/social science points that this isn’t a silver bullet and context matters, so read it if you want to know when we should expect PbR agreements to work. To give you a flavour, it looks at three parts of a PbR agreement: the measure, the agent (i.e. the recipient) and the principal (i.e. the donor). You may not like these terms, but they do give us a nice acronym: MAP.
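To make the basic mechanism concrete, here is a minimal sketch of a disbursement rule in code. This is my illustration rather than anything from the paper, and real contracts vary widely: the idea is simply a pre-agreed price per verified unit of results, capped at an agreed target.

```python
def pbr_disbursement(verified_results: float, price_per_unit: float, target: float) -> float:
    """Stylised PbR payment rule: pay a pre-agreed price for each verified
    unit of results, capped at the agreed target. Illustrative only."""
    return price_per_unit * min(verified_results, target)

# e.g. a donor paying 2,000 per well, up to a target of 50 wells
print(pbr_disbursement(verified_results=38, price_per_unit=2_000, target=50))  # 76000
```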
Rather than just summarising the paper, I’ll pick out two themes that are new to my thinking on PbR.
PbR will ‘honestly’ mislead
When people first hear about PbR, a common concern is fraud: that the recipient will simply lie or cheat the measure. I think these concerns tend to be well understood, and so aren’t much of a problem. What I hadn’t fully appreciated was the capacity of PbR to ‘honestly’ mislead, in other words for the measures to report success when there is none. The paper gives three examples. In one case, national governments received payments for improvements in vaccination rates that didn’t actually happen, because of subtle incentives in data reporting. In another, PbR seemed to have successfully changed health outcomes in Indonesia, but a return visit showed it had simply changed when the improvements occurred, with no discernible impact in the long run. In a third case, PbR was used to incentivise school feeding programmes in China as a way to fight anaemia. While it had the desired effect on the feeding programmes, kids in these schools actually did worse in their exams.
In each of these cases the goal of the donor was not met, but PbR reported success and the donor disbursed aid. I don’t think there was a deliberate, dishonest attempt to mislead donors, but rather a more complicated and subtle process. The obvious lesson for those wishing to use PbR is that the quality of the measure is the biggest determinant of whether it will work.
There are more subtle points here too. I called the new paper ‘fool’s gold’ because I’m worried that not everything that looks like evidence of PbR’s success is genuine. The ‘success’ in the examples above was false, but we only know this because there was other data to investigate. Normally, that won’t be the case. Morten Jerven calls this ‘policy-based evidence’: PbR reduces the quality of the data in a way that favours the policy in question. If we had lots of lovely data that would be fine, but we don’t. If the data we do have become less accurate because they are used as indicators, PbR will not only look more useful than it is, it will reduce our ability to learn about all sorts of things.
Complexity and Markets: Time to leave those models behind
One of the intellectual attractions of PbR is that it appears to offer sensible solutions in a complex world, where donors let recipients discover the best way of doing things and simply focus on the results. Another mental model that seems to attract people to PbR is that it looks a bit like a market: if you’ve some grasp of economics, PbR looks a lot like paying a recipient to produce development outcomes.
I don’t think either of these mental models is particularly helpful. On complexity, we can think of a case where a single indicator (e.g. the number of soap flakes, to use Tim Harford’s example from Adapt) allows someone to discover the best way of operating, by trying random variations of different nozzles. In this example we have a complex process, lots of feedback loops and iteration towards an optimal solution. While this is an interesting mental model, most PbR projects are a) much more complex than this and b) have a fraction of the feedback. In their book on PbR, long-time advocates the Center for Global Development set out a coherent model of PbR. However, while CGD recommended that PbR projects run over a minimum of 5-10 years, this simply isn’t happening; whether it is even possible, given institutional constraints, remains to be seen. The kinds of outcomes PbR is trying to disburse against are also incredibly complex, and genuine improvements may take longer than a few years even to start showing signs of working.
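To see how much the amount of feedback matters, here is a toy version of the nozzle story (my illustration; nothing this crude appears in the paper): random variations on a design are kept only when a single clear indicator improves, and the search only gets close to the optimum when there are many feedback rounds.

```python
import random

def iterate_design(feedback_rounds: int, variations_per_round: int = 10, seed: int = 1) -> float:
    """Toy 'nozzle' search: each round, try random variations of the current
    design and keep the best, judged by a single clear indicator.
    Quality runs from 0 (starting design) to 1 (the optimum)."""
    rng = random.Random(seed)
    quality = 0.0
    for _ in range(feedback_rounds):
        candidates = [min(1.0, quality + rng.uniform(-0.05, 0.1))
                      for _ in range(variations_per_round)]
        quality = max(quality, max(candidates))  # keep a variation only if it improves
    return quality

# Lots of feedback (the soap-flake setting) vs. a handful of rounds
# (roughly what a short PbR contract allows).
print(round(iterate_design(feedback_rounds=100), 2))  # close to the optimum of 1.0
print(round(iterate_design(feedback_rounds=3), 2))    # still far from it
```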
The paper also briefly discusses the latest research on innovation, which recommends not punishing failure in the early stages but giving handsome rewards later in the contract, over a sustained period. That is a long way from a three-year contract for a complex outcome with limited feedback.
On markets, while at first glance PbR does resemble a market, economists tend to be quite sceptical. The paper applies the multitask model to PbR, a model discussed at length by the recent Nobel prize winner Bengt Holmström. The main message is simply that a good measure is much harder to find than you’d think, with the Chinese school feeding example above an almost perfect demonstration of the multitask problem.
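For readers who want the intuition in symbols, here is a stripped-down two-task sketch of that logic (my notation, not the paper’s): the recipient divides costly effort between a measured task and an unmeasured one, and is paid only on the measure.

```latex
% Stylised two-task sketch of the multitask logic (my notation, not the
% paper's). e_1 is effort on the measured task (e.g. running feeding
% programmes); e_2 is effort on the unmeasured task (e.g. everything else
% that produces learning).
\[
\max_{e_1,\, e_2 \,\ge\, 0} \;\; \alpha + \beta e_1 - C(e_1 + e_2),
\qquad \text{while the donor values } B(e_1, e_2).
\]
% Because effort is costly and only e_1 is rewarded, the recipient supplies
% no unmeasured effort (e_2 = 0), however much the donor cares about it.
% The standard prescription is either to mute the incentive (a small beta)
% or to find a measure that captures nearly everything the donor values.
```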
Let’s not hype
My main concern with PbR is not that it will always fail: far from it. Rather, genuine evidence of success will be difficult to distinguish from the kinds of fool’s gold effects discussed above, especially when PbR is used in unsuitable contexts. If PbR is used in cases where it does more harm than good, this will ultimately lead to a backlash where PbR is used too little.