When RCTs aren't a good idea
As pressure increases on foreign aid groups to showcase the impact of their work, randomized controlled trials have become more common. And yet, evaluating impact using such control groups remains controversial — and it isn't always a smart idea. Here's why.
By Claire Luke // 18 February 2015

“Implementing organizations are increasingly being asked or being pushed by donors to measure impact, and often it doesn’t make sense for them to do so,” said Annie Duflo, executive director of Innovations for Poverty Action, one of the major nonprofits that conducts RCTs to measure the impact of development programs.

IPA will publish a toolkit later this year to help organizations decipher when they should — and shouldn’t — conduct RCTs. The “Goldilocks Project,” as it’s called, is meant to help institutions decide how to create a monitoring and evaluation plan that’s not too small or too large for a program but “just right.”

The growing focus on measuring impact is a good thing, said Delia Welsh, senior director of IPA’s M&E initiative. Sometimes, though, resources are misallocated.

“Sometimes organizations without the bandwidth are using resources for impact evaluations that aren’t going to give them accurate results and they would have been better off focusing on monitoring systems,” she said. “They shouldn’t invest in bad impact evaluation; bad methods or bad data take organizations away from the most important priorities.”

Here are 10 scenarios that should lead you to question whether complex impact evaluations are a good idea for you, courtesy of IPA:

1. If you’re small. RCTs take time and resources — precious commodities that are in short supply for many small institutions. Small organizations should be particularly careful in the project design stage: determine why a project will have an impact, how to measure that impact and what the key inputs are to achieve it, then monitor those inputs.

2. If your findings don’t answer your question. To analyze, you must have a clear research question, and your data must be purposeful and credible; “bad” data and poorly analyzed data can be worse than no data at all. Characteristics of “bad” data include a sample size that’s too small (a back-of-the-envelope power calculation below the list illustrates this), a model that can’t handle spillovers, difficulty constructing a counterfactual (because of resource constraints, for instance), data that’s too focused on macroeconomic issues like monetary policy or trade, and surveys that don’t capture need. Make sure surveyors are trained and their work is cross-checked for quality and bias, and keep the attrition of survey respondents in check.

3. If your results are not actionable. Have a plan for every reasonable outcome the data might show, and the right systems to handle the information. The evaluation should lead the organization to a decision on whether to scale up or not, and how.

4. If the cost is too much for your organization’s budget. RCTs are expensive, and the data should generate enough guidance to warrant the expense.

5. If the evaluation is not transferable. Results should add to practical knowledge and not be so specialized that the lessons can’t be applied to other programs or organizations. RCTs should share successes or failures and have the external validity to inform other programs.

6. If you do not have monitoring systems. Have a mechanism in place to provide accurate and credible data that’s sized appropriately. The data can’t be too big or too small, and it must be managed over the course of the evaluation.

7. If you want to measure outputs, not outcomes. That is, if your organization seeks to answer what you used, did or produced from inputs and activities, not how the intervention changed something compared with what would have happened if the program did not exist.

8. If the program already has an evidence base. If you have a lot of evidence already, you may want to focus on strong monitoring instead. We already know that vaccines work, for example, so there’s no need to keep running RCTs to prove it.

9. If you’re still developing your model. If a program isn’t ready for prime time, generating a lot of data could be unnecessary, counterproductive and even harmful. Resolve basic kinks first, but address M&E before a project is rolled out on a major scale.

10. If you or your partner aren’t seriously curious to know the results and committed to acting upon them. Your partner should care deeply about the research question and want to know whether or not a program worked in order to inform future interventions.

Do you agree with these guidelines — and are foreign aid donor policies aligned with them?
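On the sample-size point in scenario 2, a rough power calculation shows why an underpowered RCT can't answer its own research question. The sketch below uses the standard closed-form formula for a two-arm trial comparing means with equal allocation; the effect size, significance level, power and attrition figures are illustrative assumptions, and the function name is hypothetical — none of this comes from IPA's toolkit.

```python
# Back-of-the-envelope sample size for a two-arm RCT (illustrative only).
# Standard formula for comparing two means with equal allocation:
#   n per arm = 2 * (z_{1-alpha/2} + z_{power})^2 / d^2
# where d is the standardized effect size (difference in means / std dev).
from math import ceil

from scipy.stats import norm


def n_per_arm(effect_size: float, alpha: float = 0.05, power: float = 0.8,
              attrition: float = 0.0) -> int:
    """Minimum respondents per arm, inflated for expected attrition."""
    z_alpha = norm.ppf(1 - alpha / 2)   # critical value, two-sided test
    z_power = norm.ppf(power)           # critical value for desired power
    n = 2 * (z_alpha + z_power) ** 2 / effect_size ** 2
    return ceil(n / (1 - attrition))    # buffer for respondents who drop out


# Hypothetical program expecting a small effect (d = 0.2) and 15% dropout:
print(n_per_arm(0.2))                   # ~393 respondents per arm
print(n_per_arm(0.2, attrition=0.15))   # ~462 per arm with attrition buffer
```

The takeaway matches Welsh's point: an organization that can only afford to survey a few hundred households in total cannot detect a small effect, and would be better off investing in monitoring than in an impact evaluation that is underpowered from the start.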
Claire is a journalist passionate about all things development, with a particular interest in labor, having worked previously for the Indonesia-based International Labour Organization. She has experience reporting in Cambodia, Nicaragua and Burma, and is happy to be immersed in the action of D.C. Claire is a master's candidate in development economics at the George Washington University Elliott School of International Affairs and received her bachelor's degree in political philosophy from the College of the Holy Cross.