
EDITOR’S NOTE: How do we know when a development project works or not? Center for Global Development senior fellow Michael Clemens discusses three revolutions in development economics: methods, materials, and medium.
How can we know when an aid project “works”?
Nina Munk on Tuesday released a new book on the Millennium Villages Project, an intensive experimental “solution to extreme poverty” underway across rural Africa. Munk wrote her book, The Idealist, after observing the project firsthand for six years, and her account is sympathetic to its founder while deeply critical of the project itself. For Joe Nocera of the New York Times, the book makes it “tough to believe” the project is succeeding; for James Traub in the Wall Street Journal, the book shows how the project was “beset by immemorial forms of misfortune that Mr. Sachs’ team in New York hadn’t counted on.”
Right, but how do we know when a project works or doesn’t? There’s bad news and good news. The bad news is that development economists have made little progress on answering big questions like how to “solve poverty.”
The good news is that they have gotten much better at telling the difference between a project that does what it says it can do and one that does not. There is a new transparency in development economics, and in a new CGD working paper, Gabriel Demombynes and I discuss how it is changing the debate about what works.
The new transparency comes from three revolutions in development economics over the last decade: methods, materials, and medium.
The revolution in methods is that development economists have adopted a new set of scientific tools previously used mostly in psychology and medicine. These methods help analysts measure a project’s true effects more carefully and guard against confirmation bias and spurious correlation.
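The canonical example of such a tool is randomized assignment. Here is a minimal sketch of the idea in Python; the village names and group sizes are invented for illustration. When the sites that receive a project are chosen at random from the eligible pool, the treated and untreated groups start out comparable on average, so a later gap in outcomes is evidence of the project’s effect rather than of pre-existing differences.

```python
# Minimal sketch of random assignment (all names and sizes are invented).
# Suppose villages that lobby hardest for a project are already on a better
# trajectory; comparing volunteers to non-volunteers would then overstate
# the project's effect. Randomizing who gets the project breaks that link.
import random

random.seed(42)  # fixed seed so the illustration is reproducible
eligible_villages = [f"village_{i:02d}" for i in range(20)]
random.shuffle(eligible_villages)

treatment = eligible_villages[:10]  # randomly chosen to receive the intervention
control = eligible_villages[10:]    # comparable villages that do not receive it

print("Treatment group:", sorted(treatment))
print("Control group:  ", sorted(control))
```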
The revolution in materials is the spread of open-access data, which makes it much easier for scientists to check and critique each other’s work.
The revolution in medium is that research discussions now happen more often on blogs, which means rapid back-and-forth and a lasting, open-access record of the debate. Such exchanges used to happen only in slow-moving journals, opaque private correspondence, and closed seminars.
In our new paper, Gabriel and I illustrate these three revolutions with the story of the public controversy over the impact of the Millennium Villages intervention. The project claimed to have scientific evidence that improvements at the village sites were caused by the project. We explain how a simple impact evaluation tool called “difference-in-differences” revealed that some of the project’s statements about its own impact were unscientific and wildly inaccurate, leading to a scientific scandal and the stinging retraction of research findings from one of the top scientific journals. We explain how we were able to do our research only because open-access data from an independent agency were available to check claims that the project made based on its closed, confidential data. And we explain how much of this interaction happened on blogs, giving it a speed, accountability, and transparency that could not have existed in earlier years.
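To see how the tool works, here is a minimal sketch in Python; the outcome and every number below are invented for illustration and are not the project’s actual data. Difference-in-differences subtracts the change observed at comparison sites from the change observed at project sites, so that improvements happening everywhere anyway are not credited to the project.

```python
# Difference-in-differences with invented numbers (not Millennium Villages data).
# Hypothetical outcome: share of households owning a mobile phone.

village_before, village_after = 0.20, 0.55        # project site
comparison_before, comparison_after = 0.22, 0.48  # similar non-project sites

naive_estimate = village_after - village_before          # before-after change alone
background_trend = comparison_after - comparison_before  # change happening anyway
did_estimate = naive_estimate - background_trend         # attributable to the project

print(f"Naive before-after 'impact': {naive_estimate:.2f}")    # 0.35
print(f"Trend at comparison sites:   {background_trend:.2f}")  # 0.26
print(f"Difference-in-differences:   {did_estimate:.2f}")      # 0.09
```

A project that reports only the first number will overstate its impact whenever the surrounding region is improving on its own, which is precisely the kind of error at issue in the controversy.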
New methods, materials, and media profoundly shaped the debate. Our paper draws out five lessons for development research and policy.
The new transparency is good news for the ethics of aid work. In an interview with Munk, project founder Professor Jeffrey Sachs dismissed this controversy as “armchair criticism” of his project, a dilettantish “spectator sport.” But in my view the only ethical way to conduct aid interventions is to transparently show that they do what they say they can do, particularly when those projects are packaged as resting on science.
Aid money is scarce; spending it on one intervention means not spending it on something else that could do much more good. While the Millennium Villages Project burns through thousands of dollars for each household it reaches, the same money could provide a cheaper intervention, such as anti-malaria bednets, to hundreds of households rather than one. Diverting money away from alternative valuable uses is ethical only when it rests on sound, accurate evidence. Sound and transparent evaluation, far from being a spectator sport, is an ethical imperative.
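As a back-of-the-envelope illustration, with both prices assumed for the sake of the example rather than taken from the project’s budget:

```python
# Back-of-the-envelope comparison; both figures are assumptions for illustration.
mvp_cost_per_household = 5_000  # assumed: "thousands of dollars" per household
bednet_cost = 10                # assumed: rough delivered cost of one bednet

households_reached = mvp_cost_per_household // bednet_cost
print(f"One MVP household's budget could cover bednets "
      f"for roughly {households_reached} households.")
```

Under these assumed prices, the budget spent on a single project household could instead protect several hundred households, which is why the opportunity cost of an intervention matters as much as its direct effect.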
The old guard at this project and many others continues to insist that careful scientific evidence is unnecessary and that it’s unethical to experiment on people. A younger wave of development economists doesn’t accept the opacity of statements like these, which amount to “trust me.” Chris Blattman has a great recent post with a strong response to governments and organizations that discard transparent impact evaluation to avoid “experimenting” on people:
“Let me be blunt: When you give stuff to some people and not to others, you are still experimenting in the world. You are still flipping a coin to decide who you help and who you don’t, it’s just an imaginary one. You’re experimenting with your eyes closed.”
The new transparency is changing that. Change doesn’t come easily, but it’s a very good sign.
Edited for style and republished with permission from the Center for Global Development. Read the original article.