Sunday, December 04, 2016

Scaling up Randomised Trials in Public Policy

The paper below, co-authored by many of the leading figures in the application of randomised trials to public policy, is well worth reading.
From Proof of Concept to Scalable Policies: Challenges and Solutions, with an Application 
Abhijit Banerjee, Rukmini Banerji, James Berry, Esther Duflo, Harini Kannan, Shobhini Mukerji, Marc Shotland, and Michael Walton
Abstract
The promise of randomized controlled trials (RCTs) is that evidence gathered through the evaluation of a specific program helps us—possibly after several rounds of fine-tuning and multiple replications in different contexts—to inform policy. However, critics have pointed out that a potential constraint in this agenda is that results from small, NGO-run “proof-of-concept” studies may not apply to policies that can be implemented by governments on a large scale. After discussing the potential issues, this paper describes the journey from the original concept to the design and evaluation of scalable policy. We do so by evaluating a series of strategies that aim to integrate the NGO Pratham’s “Teaching at the Right Level” methodology into elementary schools in India. The methodology consists of re-organizing instruction based on children’s actual learning levels, rather than on a prescribed syllabus, and has previously been shown to be very effective when properly implemented. We present RCT evidence on the designs that failed to produce impacts within the regular schooling system but helped shape subsequent versions of the program. As a result of this process, two versions of the programs were developed that successfully raised children’s learning levels using scalable models in government schools.
A common criticism of randomised controlled trials in economics is that the causal effects they examine are too local in nature: they apply to a particular site, at a particular time, and are specific to the details of the treatment and how it was implemented. Moreover, results that hold in trials may have different effects when scaled to a population; for example, an effective education intervention rolled out to the entire population may affect the labour market in ways that cannot be understood from the trial data. Political effects such as public backlash or corruption may also play out differently in well-designed and intensely monitored trials than in large, nationally scaled programmes. The paper goes into detail on many such problems with RCTs and is worth reading for this reason alone by anyone working on or interested in applying this approach.
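
To make the scaling concern concrete, here is a minimal toy simulation of my own (not from the paper). It assumes a hypothetical education intervention whose per-person wage return shrinks as the share of the population treated rises, a crude stand-in for labour-market general-equilibrium effects: a small pilot would estimate a large effect, while the same intervention at full national coverage would deliver much less per person.

```python
import numpy as np

rng = np.random.default_rng(0)

def wage_gain(treated_share):
    """Hypothetical per-person wage return to the intervention.

    The return is assumed to fall as coverage rises, standing in for
    general-equilibrium (labour-market) effects that a small trial
    cannot detect.
    """
    return 0.20 * (1 - 0.75 * treated_share)

def average_effect(treated_share, n=100_000):
    """Average treatment effect on log wages at a given coverage level."""
    baseline = rng.normal(0.0, 0.1, n)             # untreated log wages
    treated = baseline + wage_gain(treated_share)  # treated log wages
    return treated.mean() - baseline.mean()

# A small pilot treats a negligible share of the population...
print(f"effect in a small trial: {average_effect(treated_share=0.001):.3f}")
# ...but at national scale the per-person effect is diluted.
print(f"effect at full coverage: {average_effect(treated_share=1.0):.3f}")
```

The specific functional form and numbers are arbitrary; the point is only that an effect estimated under near-zero coverage need not be the effect that materialises at scale.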

So, given these problems, how can we go from the results of such trials to examining policy systems as a whole? The authors provide an example of a large-scale education programme in India where, working with the government, various approaches were taken to scale an education intervention to 33 million children. They describe how different interventions were developed based on evidence on child learning and then rolled out in different states, with many attempts to account for context dependence. They discuss the role of RCT data in scale-up and the administrative challenges that arise when attempting to scale up interventions shown to be successful at the trial stage. The authors draw a number of general lessons about the importance of iteration, close working relationships with administrative bodies, good process evaluation, and a range of other factors that must be in place for successful scale-up.
