It’s the (political) economy, stupid

Thanks to the colleague who sent me the link to this interesting blog post from the Center for Global Development: ‘an object lesson in the gritty difficulties of translating evidence into policy’. In summary: the fact that deworming kids had been rigorously and gold-standardly (yes – an RCT!) proven to be really useful is, apparently and unfortunately, not enough to stop it getting squeezed by Kenya’s failure to satisfy DFID that it can spend its education money effectively (super-summary: evidence crushed in the political-economic machine).

So, surprisingly enough, whether an intervention works or not, whether it has evidence behind it or not, and – most importantly – whether it is scalable or not comes down to boring old political economy (or governance – or sometimes just management). Put another way, all the evidence in the world will only get you so far if no one in a position to make decisions is listening or interested, or if the delivery chain for the intervention is broken: rigorously analysed pilot interventions carried out at micro-scale in development ‘labs’ by well-financed, well-managed and motivated (I)NGOs are not enough to tell us whether those interventions will fly in the real world of DPs, Governments and ‘smoke-filled room’ decision making.

Let’s be clear. Evidence is important. But if impact is what you are after, then evidence is only as good as the political economy in which it operates. Or, put another way, evidence will only lead to impact where the political economy/governance/management (call it what you will) is good enough to demand it, understand it, and act on it. Indeed, I believe that lack of demand for evidence is in itself a symptom of poor governance.

Much of the work that I am involved in with IRC revolves around trying to bring about sector change that leads to an environment where evidence is demanded and appreciated.

Thinking about this, it strikes me that the size and relative importance of what we could call the ‘RCT space’ (the set of conditions where RCTs are an appropriate and useful approach to evidence building) within the wider political economy is governed by a combination of the relative simplicity of the approach being tested and the ‘strength’ of the political economy. Which is what this little diagram attempts to show: the conditions under which RCTs are likely to be (cost) effective, and where less so.

[Figure: RCT space – intervention simplicity plotted against the strength of the political economy]

What sort of intervention would inhabit the top left hand quadrant – the heart of RCT space? Vaccines are an excellent example, especially vaccines that only have to be given once. A one-shot intervention that impacts directly on the individual is much less open to the vagaries of weak or nonexistent government than any sort of service. Vaccines can be usefully delivered by determined non-governmental actors even where governance is lousy. So, once we know that a vaccine works (using an RCT), it is possible to develop mechanisms to deliver it, even in the middle of a war! And you don’t even need to get full coverage to have the desired effect.

Now let’s look at the other end of the scale – the bottom right hand ‘other approaches needed’ space: trying to pilot a ‘complex’ intervention in a weak political economy. Think about domestic water supply in the rapidly expanding fringes of large cities. A domestic water supply, to be effective, has to work every day – forever. It’s not like a vaccine, where if you can catch 70% of the kids once in their lifetime it is more or less done. It has to reach every single kid, every day of their life. It has to reach the poorest of the poor – somehow. It relies on relatively complex treatment and delivery systems incorporating multiple actors, and on management and management systems. It requires regulation and – in the end – strong institutions such as courts and auditors if lines of accountability are to be maintained. In other words, delivering water is about service delivery (which delivering vaccines is not – or only partially). And if there’s one thing weak political economies don’t do, it’s deliver services effectively. In a setting like this, your evidence-supported intervention – be it a funky new pump or a community-based management model – can be backed by all the proof in the world and still never get out of the starting blocks. So, the bottom right hand quadrant is out.

Which leaves us with the other two. Starting with the top right quadrant – complex interventions in strong states. As data becomes cheaper and increasingly omnipresent in the developed world, more and more relatively complex social and technical interventions become amenable to testing using RCTs (and other data-heavy approaches). However, as this recent article in The Economist points out (‘the costly war on cancer‘), once the stakes get sufficiently high (and they don’t get higher than death from cancer), efforts at evidence-based decision making in medicine tend to come under increasing stress. Or indeed, look at Germany’s decision to dump nuclear power – despite a clear evidence base that it is the safest and cleanest technology realistically available to us (see how ‘evidence’ can become less convincing if you really don’t agree in your gut?!).

Finally, there is the bottom left quadrant – relatively simple interventions in the relatively weak political economies more typical of many developing country contexts. This is the quadrant where we’d locate the example that I started this blog with – a relatively simple intervention being implemented in one of Africa’s wealthier states. Yet, in the end, a very simple, very cheap and very effective approach became hostage to political-economic fortune. Read the blog for the gory details.

Where does all this leave us? This and a previous blog have both taken pot shots at evidence based decision making generally and RCTs in particular. Primarily because the glee with which a section of the development world is embracing them brings out my inner sceptic. Partially because RCTs are an inherently reductionist approach, and because I’m convinced that most of the really difficult problems of development resist reduction to the sort of simple binary choices that RCTs require.

In the final analysis, I’ve spent a lot of my professional life trying to find ways to create an environment in which evidence would be demanded, welcomed and used. So I do believe in evidence-based approaches! And I even believe that RCTs are a useful part of the evidential armoury. It’s just that they are not a gold standard, and much less ‘the hottest thing in the fight against poverty’ (more like the hottest thing in development academia).

The danger, and the reason that it is worth pushing back against the RCT tsunami, is that if we are not careful, absence of evidence gets twisted into evidence of absence. This leads to a situation where, just because an intervention is complex and ‘unproven’ (at the level of an RCT), it is discarded as not worth trying – let alone scaling. Over-reliance on RCTs will lead us to seek out only those interventions that work in the upper quadrant – or that are simple to the point of being trivial. After all, did we really need to link de-worming to education outcomes in an RCT to know that worms are a) bad for us and b) cheap and easy to treat?

If we want to tackle the truly difficult problems of development – the wicked problems that lock whole countries into cycles of poor governance and flatlined growth – we need additional tools and approaches. Tools like action research, or the Learning Alliances that IRC champions in much of its work. Tools that are designed around a policy of full engagement with government – however weak – and that try to spark interest in the collection and use of information and, yes, evidence. In the end it can’t be either/or. The aim has to be to contribute to the building of societies in which evidence is demanded and valued within the wider political economy. Or, as the blog that triggered this puts it: “rigorous evaluation .. and fixing the weak links in the governance chain .. are not substitutes; they’re complements. In some cases, such as the nationwide randomized trial of Mexico’s Progresa program, rigorous evaluation can be conducted at full scale, exposing the project to all the relevant institutional and political constraints from the beginning. This is ideal when it’s feasible. But in other cases it makes sense to experiment at a small scale, and allow for more trial and error. In those cases, scaling up successful pilots requires complementary intellectual efforts toward solving or circumventing the political economy failures that led to the suspension of deworming”. Indeed!

Comments
5 Responses to “It’s the (political) economy, stupid”
  1. Lee says:

    Another way of putting it is like this:

    Impact = “A good idea/intervention” x “capacity/interest in implementing good ideas”

    We need both. Benevolent rulers aren’t much good if they don’t know what to do.

  2. elisewach says:

    Patrick, I think this is a great post – I know it is a bit ‘old’ for blogs but the points you raise are obviously still relevant.
    To add even more fuel to the fire, you may want to see this article in New Scientist (Is medical science built on shaky foundations – http://www.newscientist.com/article/mg21528826.000-is-medical-science-built-on-shaky-foundations.html), and Rick Davies’ related blog post (http://mande.co.uk/2012/uncategorized/is-medical-science-built-on-shaky-foundations/). It seems that deworming might not actually be useful after all, and there are a lot of questions we should be asking about science and research…

Trackbacks
  1. […] colleague of ours, Patrick Moriarty, has started blogging on his own blog and among others he wrote this inspired post about the political economy of development. This post looks particularly at the much hyped […]

  2. […] What might such a portfolio-based approach look like? There are a number of useful approaches from academia, civil society and business strategy that can help here. These include Brenda Zimmerman’s simple-complicated-complex distinction, the Cynefin framework of Cognitive Edge, work done by Alnoor Ebrahim at Harvard University, work done by Eliot Stern on relevance of different approaches to impact assessment and finally a recent model put forward by Patrick Moriarty of IRC. […]

  3. […] as that (encapsulated elsewhere in Patrick Moriarty’s comments on the use of RCTs as evidence: “It’s the (political) economy, stupid”). They go on to explain that there must be some form of decision-making arena with enough […]


