

5 tips for measuring the hard to measure

Monitoring and evaluation has become ever-more important for the development community, but the nature of aid work often makes it difficult to carry out high-quality assessment of impact. Devex hears from experts convened at the Overseas Development Institute about how to prepare for and mitigate those obstacles.

By Sophie Edwards // 05 February 2018

LONDON — While monitoring and evaluation is now seen as essential to good development — especially as the sector falls under increasing scrutiny and pressure to demonstrate impact and value for money — the nature of aid work often makes it difficult to carry out high-quality measurement.

Issues such as how to measure abstract concepts like accountability; working in fragile states, where politics can get in the way; and navigating big multipartner projects, in which multiple actors measure different things in different ways, can make evaluation a tricky and time-consuming endeavor.

With this in mind, the Overseas Development Institute, a London-based think tank, recently convened a panel of evaluation experts to share their thoughts on the question of "how to measure the hard to measure." Devex highlights five key tips from the event.

1. Focus the evaluation

When designing an evaluation, Tiina Pasanen, a research fellow at ODI, stressed the importance of "thinking through what is the most important question for your program" and working from there. Too often, people start from broad sets of evaluation criteria — such as those provided by the OECD's Development Assistance Committee in its Principles for Evaluation of Development Assistance — which can result in wasting time and resources measuring things that are not so helpful to the project, she warned. "People often use [the DAC guidelines] as a laundry list and include everything possible without thinking through whether this is really useful or feasible to do at this point," she said.

The DAC criteria should be used to "support your thinking," not dictate it, she advised. To help with this process, ODI has produced guidance on how to assess the plausibility, use and users, and feasibility of a proposed evaluation.

2. Measure for both accountability and learning

However, experts at the session also advised against making evaluations too narrow, and recommended including indicators related to learning as well as project delivery. "These can encourage the use of monitoring and evaluation to learn more about the underlying problem, context and what's working more and less well in program implementation," according to Anne Buffardi, senior research fellow in the Research and Policy in Development program at ODI.

Kate Dyer, an independent consultant who led Accountability in Tanzania (AcT), a £31 million ($44.1 million) DFID-funded program to increase the responsiveness and accountability of the Tanzanian government, agreed, saying it is important to see evaluation as a tool for accountability, but also for learning. "Usually there's a tension between 'are you doing [the evaluation] for learning purposes,' [or] 'are you doing it for accountability purposes,'" she said, but both are necessary and can be complementary.

"The learning should … force you to question the way you frame the measurement challenge, and especially when that asks you what constitutes a result and a result that has significance for whom, I think that can start continually rebalancing the program around benefits for citizens," she said.

3. Identify and mitigate the challenges

While all agreed that evaluators working on development programs encounter difficulties, these issues are distinct and can be mitigated through specific strategies, Buffardi said. "Many measurement challenges are relational and political, not necessarily technical, and do not have simple solutions," she said. At the same time, "other challenges — like the setting, concept being measured, implementing structure, and pathways of change — pose distinct threats to validity and reliability," she said.

It is therefore important to identify and clarify which specific measurement challenge or challenges the program faces in order to help mitigate them, Buffardi added.

4. Plan early, plan carefully, and keep training

"It is worth spending a lot of time … [and] being meticulous" at the evaluation planning stage, as this will save time and effort down the line, according to Samuel Addai-Boateng, a monitoring and evaluation specialist working for CARE International.

Addai-Boateng outlined some of the challenges his team encountered during a multipartner, $10 million social accountability program in Ghana funded by USAID, especially around how to accurately estimate the impact the project was having. "We got to mid-term of the project, at which point USAID was asking us for outcomes," he said, but his team was unsure whether partners were "overclaiming or underclaiming" when it came to reporting impact. The nature of what they were trying to measure — including transparency, accountability, and answerability among public officials — as well as the multiple actors involved, made the evaluation process difficult, he explained.

Fortunately, Addai-Boateng had just attended a training on "contribution tracing" — an evaluation method that can be especially useful for assessing the impact of policy, advocacy, and lobbying programs, since it looks at how to infer causality and reduce uncertainty about the contribution a program is making to observed results. Seeing the obvious applicability to the project, his team opted to introduce contribution tracing to the evaluation, which he said has drastically improved the quality of evidence being collected across all partners. He also said USAID is enthusiastic about the new reports.

"Contribution tracing has helped us to look at data from different perspectives," he said. "We realized that … some evidence has more value than others … and [so can] concentrate not on collecting all the data but only the data that increases our level of confidence in our claim." His main regret, he said, is not starting earlier; he advised evaluators to spend more time planning and thinking about the evaluation, and to put the right "systems" in place before starting.

5. Take advantage of online resources and tools

The experts said there are many useful online resources that evaluators can take advantage of to design effective evaluations, even in the trickiest environments. Some of those highlighted include BetterEvaluation, which Dyer described as the "world's biggest crib sheet" for evaluation; the Evaluation Methods Lab toolkit; the Outcome Mapping Learning Community — and even one handbook that features penguins.



About the author

Sophie Edwards

Sophie Edwards is a Devex Contributing Reporter covering global education, water and sanitation, and innovative financing, along with other topics. She has previously worked for NGOs and the World Bank, and spent a number of years as a journalist for a regional newspaper in the U.K. She has a master's degree from the Institute of Development Studies and a bachelor's from Cambridge University.
