Friday, April 24, 2009

Reinventing Foreign Aid Essay #3

This is the third summary and commentary on the essays in Reinventing Foreign Aid.

Use of Randomization in the Evaluation of Development Effectiveness

By: Esther Duflo and Michael Kremer


Like Banerjee and He, Duflo and Kremer advocate using randomized controlled trials to evaluate aid programs wherever possible. The reason they give is that this technique is the best choice for avoiding omitted variable bias. They compare it to other popular methods of generating the "counterfactual", or what would have happened had the program not been used, such as difference-in-differences (DD). They argue, using data from various education programs studied in Kenya, that these techniques tend to overestimate the benefit of aid programs.
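To see why a non-random comparison can overestimate benefits, here is a minimal simulation sketch (my own illustration, not the authors' data): schools have an unobserved "motivation" level that both raises test scores and makes them more likely to adopt a program with no real effect. Comparing adopters to non-adopters credits the program with the motivation gap; a coin-flip assignment breaks that link.

```python
import random

random.seed(0)

N = 10_000          # hypothetical number of schools
TRUE_EFFECT = 0.0   # the program actually does nothing

def score(motivation, treated):
    # Post-program test score: baseline + motivation + program effect + noise.
    return 50 + 10 * motivation + TRUE_EFFECT * treated + random.gauss(0, 1)

schools = [random.random() for _ in range(N)]  # unobserved motivation in [0, 1)

# Non-random placement: highly motivated schools opt in.
selected = [m > 0.5 for m in schools]
naive = (sum(score(m, 1) for m, t in zip(schools, selected) if t) / sum(selected)
         - sum(score(m, 0) for m, t in zip(schools, selected) if not t)
         / (N - sum(selected)))

# Random assignment: treatment is uncorrelated with motivation.
assigned = [random.random() < 0.5 for _ in schools]
rct = (sum(score(m, 1) for m, t in zip(schools, assigned) if t) / sum(assigned)
       - sum(score(m, 0) for m, t in zip(schools, assigned) if not t)
       / (N - sum(assigned)))

print(f"naive comparison: {naive:+.2f}")   # biased well above zero
print(f"randomized trial: {rct:+.2f}")     # close to the true effect of zero
```

The naive comparison reports a large "benefit" that is entirely omitted-variable bias, while the randomized estimate hovers near zero; this is the overestimation Duflo and Kremer warn about.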

As an example of how to properly evaluate programs, they examine the costs and benefits of three school-based interventions: teacher training, free textbooks, and deworming treatments. These interventions were assigned randomly to schools in western Kenya, and student test scores were used to evaluate the benefits. They found, counterintuitively, that the deworming treatments were both the cheapest to administer and had benefits that spread to other nearby schools, as breaking the chain of infection resulted in healthier students overall. They also cited the success of the PROGRESA program in Mexico as an example of how to use randomized trials to evaluate a pilot program.

They conclude that if the goal of a pilot program is to test scalability and cost-effectiveness, then aid programs need to be treated more like prescription drugs. For drugs, randomized trials drawn from the entire likely recipient population have proven to be the most effective means of determining both safety and effectiveness. Instead of picking recipients systematically, such as one per polity or the "most needy", distributing the program randomly across all eligible recipients guards against accidentally or intentionally biasing the evaluation through omitted variables.

The authors go on to discuss several factors required for effective randomized trials:

(1) A large pool of potential recipients, such as small villages or individual students.

(2) A politically acceptable means of randomly distributing the aid. This is often enabled by budget constraints and the need to "pilot" aid programs.

(3) An independent, or at least reputable, group with plenty of funding to perform the evaluations.

(4) Clear and believable metrics that define benefits and costs.


In other words, randomized trials are a good tool for development aid Planners, groups like the World Bank or CARE, to help them decide which new ideas they should roll out in a large way. They are not necessarily the right ones for "Searchers", to use Dr. Easterly's term, who generally aim to solve specific local problems that do not have a large potential recipient base. The authors admit one criticism that can be made against this approach: if it is so effective, why isn't it used to evaluate health and schooling programs designed to help the poor in developed countries? The answer lies partly in the implicit arrogance of development aid being something we can impose on other societies, while most forms of development and economic planning, as well as many forms of educational evaluation, are viewed askance in the developed world.

The next essay discusses the political economy of aid evaluations, which will explore this problem more deeply. Smaller aid programs with narrow scope also need some form of independent evaluation, and I think that's where a travelling pie-maker would fit in nicely. Now to work out the logistics of the Sustainable Baking Travel Blog.
