What works? A new way to bring more evidence to development

Information and accountability researchers discuss the results of their studies. Photo by: Evidence in Governance and Politics via Twitter

Since 2013, teams of researchers have set out to answer a single question across multiple countries: How does providing information to voters affect political behavior and preferences? From India to Mexico to Uganda, the answer was the same: It doesn’t. The combined data revealed no significant effects of information provided in the weeks leading up to an election.

“We were surprised to learn that providing information to voters didn't have the effects that we anticipated, but given the consistency of the results across six studies, we feel much more confident that the aggregate results aren't due to chance or bad luck,” said Susan Hyde, executive director of Evidence in Governance and Politics, or EGAP, a network of researchers and practitioners dedicated to bringing evidence to policy. “Overall, the process has reinforced my belief that collaboration between international development practitioners and researchers is more critical than ever to better understand what works and why.”

At a presentation at the University of California, Berkeley, earlier this month, Hyde displayed graphs of disaggregated results for voter turnout and incumbent vote share. They demonstrated that distributing information that differed both positively and negatively from voters’ previously held beliefs about politicians had null effects across the projects. But null results are still results, Hyde said.

The findings are the results of the first cluster of studies from the Metaketa Initiative. Metaketa is the Basque word for accumulation, a reference to the way these grantmaking rounds support coordinated research at different sites that can consolidate knowledge. EGAP launched the initiative with funding primarily from the United Kingdom’s Department for International Development, or DFID, and additional support from an anonymous donor, in one of a growing number of efforts to ensure that the proliferation of randomized controlled trials, or RCTs, in development economics actually strengthens the research foundations on which policy innovations rest.

This first Metaketa awarded six projects funding ranging from $175,000 to $300,000 to study the role of information in promoting accountability in developing countries. Upcoming rounds include studies on taxation, natural resource governance, and community policing. While the recently published results come as a disappointment to some researchers, Hyde said the findings from the first Metaketa serve as an exception to the rule that researchers do not publish negative results because they lack the incentives to do so.

“We are rewarded for statistical significance, splashy findings, being the first one to find something — not for replicating another study,” Hyde said.

These results were more robust and powerful than a null result from a single study, or even several uncoordinated studies, she said. While at first glance the data might seem to call into question the work of organizations that focus on providing information to voters ahead of elections, Hyde explained that there are different ways of interpreting the reasons behind the results. For example, it might suggest that information needs to be provided to voters earlier on, she said, adding that her takeaway is that it is difficult to make voters more informed, not that informing voters is a worthless task.

The findings confirmed what many democracy and governance practitioners have observed in their work in transitioning countries: that information in and of itself does not lead to political accountability, said Linda Stern, director of monitoring, evaluation and learning at the National Democratic Institute.

“It served to put an exclamation point on the idea that exposure to information alone is not enough to change citizens’ political preferences and behaviors,” she told Devex via email. “This has important implications for democracy assistance programs, such as voter-education, candidate debates, anti-corruption campaigns, policy advocacy etc. The findings suggest that information needs to be complemented by other program elements and understood within the broader political economy.”

Most interesting about the null findings, Stern added, were the unanswered questions that practitioners need to consider when designing and evaluating information-based interventions, such as how the timing and dosage of information come into play.

NDI was one of the first practitioner organizations to partner with EGAP on randomized controlled trials in its overseas programs. But the Metaketa Initiative takes this a step further, underscoring that findings from single experimental studies cannot be generalized to other democracy assistance programs, Stern said. She added that replicable research and cumulative learning allow groups like NDI to draw broader conclusions about their interventions.

On June 9, EGAP held an evidence summit on elections and political accountability at George Washington University in Washington, D.C. Researchers talked about the field experimental studies they carried out in coordination. The participants also discussed the potential utility of the study for policymakers and practitioners. In a follow-up interview with Devex, Eric Bjornlund, president of Democracy International, Inc., said he hopes the Metaketa Initiative will reinforce and inform a growing focus of development policymakers, funders and implementers on evidence-based programming.

“Currently, funders and implementers are not always explicit or realistic about the theory of change or development hypothesis of their programs,” he said. “Similarly, many are built on assumptions, instincts, and best guesses about what kinds of programs will make a difference.”

The more the global development community knows about what really works, the more realistic its theories of change will be, and the more effective its programs, he said. But he added that in the quest for evidence of impact, there is a danger of dismissing the value of qualitative evaluations and informed judgments, as well as letting the perfect be the enemy of the good.

Representatives of the local groups who partnered with the researchers were also optimistic about the impact of these studies, despite the results. Morrison Rwakakamba of the Agency for Transformation in Uganda said he thinks the results should inspire a culture of debate, explaining that open debates among candidates are critical in this era of misinformation, and could deepen their democracy. Mariana Niembro of Borde Politico in Mexico said the dissemination of data can be more powerful than the fake news and fear campaigns surrounding elections. Despite the persecution and harassment of staff delivering the leaflets as part of the study, she wants to work with the researchers again in the 2018 election.

On that same trip to Washington, D.C., Hyde presented at the U.S. Agency for International Development. The audience was interested in learning more about how this new grantmaking model might offer greater policy relevance than single RCTs, she said.

While RCTs — comparative and controlled experiments — are considered the gold standard in evaluating whether a particular intervention is effective, experts are starting to point to their limitations. In a recent blog post, Lant Pritchett — senior fellow at the Center for Global Development — raises the criticism that RCTs can ignore external validity, or the extent to which the results of a sample in a study can be generalized to a broader population or another context. By taking studies beyond a particular place and moment in time, the Metaketa Initiative is a response to what EGAP calls an ongoing crisis of external validity in development research.

Evidence for Policy Design, a program at Harvard University, is taking a different approach to the challenge of external validity. EPoD brings researchers and policymakers together in a smart policy design methodology that works to identify the policy problem, diagnose the causal factors, design policy innovations, implement and test those designs, and then refine them in a partnership focused on continuous policy improvement.

One of these initiatives — Building Capacity to Use Research Evidence, or BCURE — aims to create a culture of evidence in its countries of focus. DFID funded EPoD to implement a component of the BCURE program, another example of the U.K. agency’s efforts to address the disconnect between the money that goes into RCTs and the insights policymakers can take from them.

“We are pioneering new and innovative approaches to better understand how best we can end poverty and help countries stand on their own two feet, while delivering value for money for taxpayers,” a DFID spokesperson said of the Metaketa Initiative. “This is an innovative program which is in its early stages. We are working closely with partners to ensure research studies deliver rigorous results that can inform future policy decisions.”

A growing number of initiatives aim to generate rigorous evidence that is relevant to policymakers from one country to the next. For example, Innovations for Poverty Action conducted pilots of the Ultra Poor Graduation program, which is designed to graduate households from extreme poverty to a more stable state, across seven countries; and the Digital Credit Observatory, an initiative at the Center for Effective Global Action funded by the Bill & Melinda Gates Foundation, is generating a body of evidence on the impact of digital credit and the efficacy of consumer protection across a range of low-income countries. From governments to foundations, funders are working out new ways to encourage replication studies, so that researchers will produce more of them, journals will publish more of them, and policymakers will have more reliable data with which to make decisions.

As the Metaketa Initiative shares insights from its first cluster of studies and pursues its next grantmaking rounds, Stern of NDI said this model of replicating research across contexts could advance progress on democracy assistance and other targets articulated by the United Nations in the global goals.

“To achieve these aspirational targets, on the one hand practitioners need credible research on what works and what does not work in these areas so that they can be increasingly effective,” she said. “On the other hand, international agencies such as the United Nations need relevant, rigorous and reliable metrics that capture the changes promoted by the Sustainable Development Goals.”

With the Metaketa Initiative, she said, EGAP contributes to both.


About the author

Catherine Cheney

Catherine Cheney is a Senior Reporter for Devex. She covers the West Coast of the U.S., focusing on the role of technology and innovation in achieving the Sustainable Development Goals, and frequently represents Devex as a speaker and moderator. Prior to joining Devex, Catherine earned her bachelor’s and master’s degrees from Yale University, worked as a web producer for POLITICO and a reporter for World Politics Review, and helped to launch NationSwell. Catherine has reported from all over the world and freelanced for outlets including the Atlantic and the Washington Post. She is also the West Coast ambassador for the Solutions Journalism Network, a nonprofit that trains and connects journalists to cover responses to problems.