Hands held out under a stream of clean water. Photo by: USAID / CC BY-NC

How do we make sure our foreign aid is effective? This is not a theoretical question, but an existential one for USAID. For more than 50 years, the United States Agency for International Development, along with other donors, has been wrestling with the question of how to understand the effects of our programming.

There are a number of ways to understand whether foreign aid is or isn’t working. We need to be concerned with relevance, effectiveness, efficiency, impact, and sustainability, which happen to be the OECD/DAC evaluation criteria, themselves currently under review.

And as many development workers know, the kinds of questions you ask during evaluation are also particularly useful to consider when designing aid programs: Is what we’re doing suited to the priorities and policies of the target group, recipient, and donor? Is what we’re doing achieving its objectives? How efficient, or cost-effective, is what we’re doing compared to alternatives? What are the changes produced, whether direct or indirect, intended or unintended? How likely is it that the results being produced can be sustained once our support ends?

These criteria underscore the difficult trade-offs that are inherent in decisions about aid. And often the answers to these questions do not all point in the same direction.

Given this, I find the recent attention to a cash benchmarking experiment USAID is undertaking particularly interesting.

Cash benchmarking measures the outcomes of unconditional household cash grants and uses them as a benchmark against which to compare the cost-effectiveness of aid programs. If household grants produce better outcomes than aid programs do, then giving grants is the way to go, right? Perhaps, or perhaps not.

Answering a cost-effectiveness question alone is not good enough for USAID programs. We have to wrestle with the cost-effectiveness question at the same time as we consider:

1. Our national security objectives;
2. The priorities of our partner countries;
3. The likelihood that what we do will put countries on a path to plan, finance, and implement their own development, what we are calling the Journey to Self-Reliance;
4. The intended and unintended consequences of our programs; and
5. Whether our program results match our objectives.

So, while we are watching the USAID cash benchmarking pilots with cautious optimism, we understand the approach has clear limitations; ultimately we need to factor these many considerations into our design, management, and evaluation of programs.

The 21st-century reality of development is that we work in environments that are unstable, in transition, or consistently evolving. So we need a range of approaches for gathering evidence.

In recent years, USAID has really pushed the boundary in this space, with a number of efforts to test measurement methods and approaches and ensure our monitoring, evaluation, and learning “toolkit” is fit for purpose. Now more than ever, we have ready access to a great deal of data and evidence that others have collected; it simply isn’t feasible or responsible for us to commission research when we can learn from what others are doing.

Indeed, there are donor and evaluation practitioners who are working to compile evidence repositories so we can all benefit. Certainly, we have a real challenge in ensuring that staff who need the evidence can access it at the right time and in a way that’s accessible to them. That’s something we are working on as part of a broader transformation effort at USAID — where the new policy, resources, and performance bureau, beyond just aligning budget, strategy, and learning, will set clear agency standards around evidence and program performance.

I am encouraged by the many examples we already have of USAID staff seeking out and acting on evidence to improve our programs’ development results. Take the program in Rwanda that was the subject of the first cash benchmarking pilot: USAID Rwanda used the results to adapt its water, sanitation, and hygiene interventions, in line with emerging research findings from around the globe.

These adaptations include prioritizing better access to clean water and increasing the frequency of behavior change communications. Here at USAID, we are committed to innovative research, but more importantly to using that data to advance our mission. As others have noted, how we act on evidence is the most interesting part of the story.

The views in this opinion piece do not necessarily reflect Devex's editorial views.

About the author

  • Melissa Patsalides

    Melissa Patsalides is the acting deputy assistant administrator in USAID’s Bureau for Policy, Planning, and Learning. She is a seasoned international development professional with nearly 20 years of experience focused on program planning and effectiveness. She previously directed USAID’s Office of Learning, Evaluation, and Research.