As development institutions like the World Bank look for ways to make their projects and interventions more evidence-based, development practitioners have run into a problem faced by doctors a generation ago.
While there may be a wealth of primary studies on the effectiveness of any given intervention, those studies use different data and methodologies and often come to differing, sometimes conflicting, conclusions.
In an effort to make medicine more evidence-based, medical researchers came up with “systematic reviews” of the research literature in the 1990s. The reviews are meta-analyses of data from a range of studies that have been screened for quality. Today, systematic reviews, known as “Cochrane reviews” after the nonprofit that oversees them, are seen as a gold standard for judging the effectiveness of medical interventions.
Now some major development institutions, like the U.K. Department for International Development and the Bill & Melinda Gates Foundation, along with the International Initiative for Impact Evaluation, or 3ie, an organization that funds development research, are leading the way in developing systematic reviews of development interventions, and the World Bank may soon follow their lead. Representatives from the bank discussed the advantages of the method at a panel discussion held last week by the bank’s Independent Evaluation Group, which already uses systematic reviews in some of its sectoral evaluations.
Many development practitioners, when they look at past evidence at all, may use “goal scoring” to determine what the research says about a particular type of project: four studies say it worked and three say it didn’t, so it probably worked. Or they throw up their hands, declare the evidence inconclusive, and do what they were going to do anyway, backed by selectively chosen evidence.
Systematic reviews provide a more methodologically sound way to synthesize that evidence.
“There’s no such thing as mixed evidence or mixed findings; there’s incorrectly synthesized or summarized findings,” Howard White, executive director of 3ie, said at the panel. “If you conduct a proper systematic review then you will know more conclusively whether this intervention works or not.”
White cited the use of corticosteroids to prevent the death of infants born prematurely as an example of how that kind of goal scoring led doctors to the wrong conclusion. More studies found the intervention ineffective than effective, so doctors might have abandoned the practice. But a systematic review then found that the treatment was effective in 30 to 40 percent of cases, had no negative side effects and was very inexpensive. The intervention has since been widely adopted, and has saved thousands of lives.
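The arithmetic behind that reversal is easy to sketch. The short Python snippet below, which uses entirely hypothetical study results rather than the actual corticosteroid trial data, shows how a simple fixed-effect, inverse-variance meta-analysis — the basic pooling technique behind many systematic reviews — can find a clear benefit even when most studies, counted one by one, look like null results.

```python
import math

# Hypothetical study results -- invented for illustration, NOT the
# actual corticosteroid trial data. Each entry is a log relative risk
# (negative means the treatment reduced deaths) and its standard error.
studies = [
    (-0.35, 0.30),
    (-0.20, 0.25),
    (-0.45, 0.40),
    (-0.30, 0.35),
    (-0.40, 0.20),
]

# "Goal scoring": count a study as a win only if its own 95 percent
# confidence interval excludes zero.
wins = sum(1 for y, se in studies if abs(y) > 1.96 * se)
print(f"{wins} of {len(studies)} studies are individually significant")

# Fixed-effect meta-analysis: weight each study by the inverse of its
# variance, so precise studies count for more than noisy ones.
weights = [1 / se ** 2 for _, se in studies]
pooled = sum(w * y for w, (y, _) in zip(weights, studies)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))
low, high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se

print(f"pooled log relative risk: {pooled:.2f} (95% CI {low:.2f} to {high:.2f})")
print(f"pooled relative risk: {math.exp(pooled):.2f}")
```

Counted study by study, four of the five invented trials look inconclusive; pooled, they point to a roughly 30 percent reduction in risk that no single study could establish on its own.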
The case for systematic reviews
In the world of international development, systematic reviews can also yield important and unexpected insights that go beyond whether or not something works.
White cited a systematic review of land titling projects that found they do increase investment, productivity and farm income, but do not increase access to credit. A sub-analysis found that the positive effects were not borne out in African countries.
In another example, a systematic review of studies on payment for ecological services (paying communities not to cut down a forest, for instance) found that this type of intervention is not cost-effective at preventing deforestation: it prevented deforestation on only 3.7 percent of the land being paid for. The rest of the land would not have been deforested anyway.
Qualitative studies often support the conclusions of systematic reviews, or suggest hypotheses that explain unexpected findings. In the case of land titling in Africa, for example, qualitative studies pointed out that customary title was generally strong, so formal titling may have less of an effect, and that lower levels of assets among landowners might also prevent investment.
3ie and the International Development Coordinating Group at the Campbell Collaboration, a network of researchers and organizations coordinating systematic reviews relevant for developing countries, have completed dozens of reviews in a number of sectors. Currently, sectors like health and education have a broad evidence base, while infrastructure, for instance, does not.
Still, Marie Gardner, IEG public sector evaluations manager, pointed out how little evidence is used in project design in general, citing a review her team had done of World Bank project documents.
"In no cases did the project documents refer to systematic reviews to provide evidence that what they were proposing in the design actually worked. In very few cases did they actually refer to impact evaluations," she said.
As systematic reviews become more widely accepted and utilized, that may change.