The SDGs: How will we know if we achieved them?

By Meghann Jones 25 September 2015

The 17 sustainable development goals projected onto the United Nations headquarters. Proving that financing the SDGs is a worthwhile investment is a challenge the development community needs to overcome. Photo by: Cia Pak / U.N.

As the United Nations agrees on the sustainable development goals, attention will be increasingly focused on how to finance them, and how the development community will prove that doing so is a worthwhile investment.

The debate about development financing occurs in an environment where donor country citizens agree on the importance of development goals, but are not unanimous on the benefits and effectiveness of foreign aid (as shown in new Ipsos opinion data from 17 established and emerging donor countries). To address this, the development community will need to rigorously evaluate the impact of development programs and ensure that investments result in real and sustained change.

Impact evaluation has evolved by leaps and bounds in recent years. However, good evaluation requires good data, and the programs under the microscope — by their nature — are conducted in environments where it is often challenging to collect good data. Overcoming this challenge should be a key focus for evaluators as they attempt to demonstrate the development community’s progress toward achieving the SDGs.

Ipsos’ experience “at the coalface” of data collection in challenging environments has revealed several hard lessons that go beyond conventional wisdom to the heart of how we design impact studies in the first place:

1. Don’t try to learn everything.

Problem: Impact studies often have very long survey instruments — sometimes running three hours or more. Long interviews fatigue interviewers and respondents alike, leading to mistakes and reduced data quality.

Reason: Sometimes evaluators are not clear on program objectives (and therefore what to measure) — either because stakeholders cannot articulate the objectives or because they have competing objectives. This, combined with risk-aversion and ever-shifting goal posts, leads evaluators to an “everything including the kitchen sink” approach to survey instrument development.

Solution: Stress the importance of alignment on objectives (perhaps through a theory of change) and allow ample time to develop, test and review a survey instrument focused solely on the agreed objectives. Researchers should hold firm on instrument length — short, focused questionnaires that produce good data are better than long questionnaires that produce unusable data.

2. See it from the respondents’ point of view.

Problem: Development is frequently about improving the financial situation of beneficiaries, and impact studies often rely heavily on numeric variables (like income) that are prone to error. Interviewer training and data cleaning can only go so far toward correcting for these.

Reason: Numeracy, recall and interviewer error are key culprits of bad data, often because questions are written by research teams who see the world differently than respondents do. An example is when people exchange goods for other goods, rather than for money. Even though an income question may be semantically clear, monetary response categories may not give the respondent an opportunity to answer in a way that reflects their circumstances, rendering the data meaningless.

Solution: Observe the population you are surveying — if the people you are interviewing exchange the coffee beans they grow for grains and vegetables, a typical household income question will not work. Think about how respondents live and seek proxies for items that are difficult to measure. To gauge household wealth in this example, you might look at proxies such as ownership of various household items or the ability to afford basic expenses (see the sketch below). If you have a hard time envisioning your respondents’ context, consider some observational research in advance of developing your survey instrument.
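To make the proxy idea concrete, here is a minimal sketch of one common way to turn asset-ownership answers into a single wealth measure: scoring households on the first principal component of their asset indicators, in the spirit of the widely used Filmer-Pritchett (DHS-style) wealth index. The article does not prescribe any particular method, and the asset list and responses below are purely illustrative assumptions.

```python
# Sketch of an asset-based wealth index: instead of asking for income
# directly, score households on the first principal component of a set
# of binary asset-ownership indicators. Asset list and data are
# hypothetical; the PCA weighting follows the Filmer-Pritchett approach.
import numpy as np

assets = ["radio", "bicycle", "mobile_phone", "iron_roof", "livestock"]

# Each row is a household; each column is 1 if it owns the asset, 0 if not.
X = np.array([
    [1, 0, 1, 0, 1],
    [1, 1, 1, 1, 0],
    [0, 0, 0, 0, 1],
    [1, 1, 1, 1, 1],
    [0, 0, 1, 0, 0],
], dtype=float)

# Standardize each indicator so common and rare assets are comparable.
Xs = (X - X.mean(axis=0)) / X.std(axis=0)

# The first principal component of the correlation matrix assigns each
# asset a weight; eigh returns eigenvalues in ascending order, so the
# last eigenvector belongs to the largest eigenvalue.
eigvals, eigvecs = np.linalg.eigh(np.corrcoef(Xs, rowvar=False))
weights = eigvecs[:, -1]
if weights.sum() < 0:
    weights = -weights  # sign heuristic: more assets -> higher score

for asset, w in zip(assets, weights):
    print(f"{asset}: weight {w:+.2f}")

# A household's index score is the weighted sum of its indicators.
scores = Xs @ weights
for rank, idx in enumerate(np.argsort(scores)[::-1], start=1):
    print(f"household {idx}: wealth index {scores[idx]:+.2f} (rank {rank})")
```

In practice such indices are built from much longer asset lists and validated against known correlates of wealth; the point is simply that a handful of easy, observable yes/no questions can stand in for a numeric income question that respondents cannot meaningfully answer.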

3. Embrace reality.

Problem: Formal methodological approaches to impact evaluation cannot be implemented in the field, leading to a breakdown of the study.

Reason: Evaluation design rests on statistical theory that assumes certain things are possible in data collection — things like a known sample frame, the ability to enumerate households and the availability of a particular respondent. These assumptions do not always hold true in the challenging contexts where development programs take place.

The illustration depicts “interviewer notes” from a recent impact evaluation in a rural setting, showing how theoretical assumptions can be challenged in the field.

Solution: Consult on the evaluation design itself with experts from the location where the data will be collected. Do not expect your data collection partner to implement what is in your imagination if you do not consult with them — it is easier to mitigate expected error at the design stage than it is to fix actual error after the fact.

4. Be culture-aware.

Problem: It is sometimes difficult to get communities to participate in impact studies.

Reason: We frequently think about access in terms of sensitivities and convenience — how to make participants comfortable and conduct the fieldwork so as not to intrude on other priorities. But how we engage with communities in the first place can heavily affect our ability to execute the research. For example, in many communities there are protocols that must be observed before it is appropriate to interview people. The village chief may require the team to interview “village leaders” first, and ignore requests for random selection of respondents. When local people collect data — something Ipsos always recommends and practices — they will want to follow these norms. It may be better to build such norms into the design itself, for example by interviewing everyone in the village.

Solution: Understanding cultural and social dynamics is a critical factor in fieldwork logistics. Work with your data collection partner from the very beginning of the evaluation to ensure that these local dynamics are factored into — and even used to the advantage of — the design itself.

It is vital that organizations work with research and data collection experts embedded in contextual realities who can bring both formal expertise and local insight into the design of research. Smart research will enable a realistic assessment of the progress of the international development community toward achieving the SDGs, and enable governments and the public to make good decisions about where their money makes the most impact.

Visit the Ipsos news center to see the data and other articles in this series.

About the author

Meghann Jones

Meghann Jones is head of international research and evaluation for Ipsos Public Affairs in the United States.

