How MEL systems provide answers for development in a warming world

By Sarah Standley, Robbie Gregorowski and Dave Wilson, 22 April 2016

Sarah Park of WorldFish facilitates a community workshop to discuss the evaluation of climate change adaptations in Batugade, Timor-Leste. As government efforts and mitigation funding pour into innovative climate adaptation and resilience programs, there is strong demand for effective monitoring, evaluation and learning systems. Photo by: WorldFish / CC BY-NC-ND

Many development practitioners will have observed the step change in the scale of global investment in climate change in recent years. This is reflected in official figures globally and nationally, with the U.K. government announcing a 50 percent increase in financial support for cleaner, greener growth and for measures to help the world’s poorest adapt to climate change before 2020.

Many donor agencies now have “innovative” climate change adaptation or resilience programs; the latest, announced in Paris and spearheaded by the U.N. secretary-general himself, is the Climate Resilience Initiative: Anticipate, Absorb, Reshape. The massive increase in climate change adaptation and mitigation funding means there is strong demand for effective monitoring, evaluation and learning systems.

Applying MEL systems to climate change

MEL systems, such as those designed by the climate resilience team at Itad, seek to ensure that climate finance is spent effectively, efficiently and sustainably in the face of worsening climate shocks and stresses.

So how can we make sure that all commitments made at COP21 in Paris last December are translated into tangible results?

First, there is an urgent need to develop a deeper understanding of how best to respond to climate change against a backdrop of limited time and resources. Climate change is itself a complex political economy issue, which knows no boundaries and affects actors and systemic processes across multiple scales and contexts. Therefore, climate and development programs need to operate at all levels of social, economic and political systems. This makes results very difficult to monitor.

Second, there is a need to develop public support for investments to address the impacts of climate change. In the context of global economic, health and security crises, there are competing priorities for public finance. Governments need to demonstrate that climate-related official development assistance delivers results and represents good value for money.

Accountability

The efficiency of climate finance expenditure matters to the taxpayer and other financial contributors. Accountability is the traditional purpose of evaluation, and it remains critically important to MEL systems in the climate change context.

For example, the knowledge manager for the U.K. Department for International Development’s BRACED program is helping determine the total number of people whose resilience has been strengthened as a result of the program. The idea is that by reporting robustly on a small set of key performance indicators, it should be possible to identify and aggregate the total number of people a project, a wider program, and ultimately an entire climate fund has reached, providing a clear justification to the taxpayer.

However, KPI results reporting across very different contexts is fraught with methodological shortcomings (including issues of abstraction and aggregation) and offers little to support broader lesson learning.
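
As a purely illustrative sketch, the roll-up arithmetic behind such a KPI is simple; the project names and figures below are invented, not BRACED data. The point is that a naive sum treats every project’s count as distinct people, so any overlap between projects, or differences in how each project defines “resilience strengthened,” can quietly inflate the headline figure.

    # Illustrative sketch only: hypothetical projects and figures, not real program data.
    # Shows how a single KPI ("people whose resilience has been strengthened") might be
    # rolled up from projects to programs to a fund-level headline figure.

    projects = {
        "project_a": {"program": "program_1", "people_reached": 12_000},
        "project_b": {"program": "program_1", "people_reached": 8_500},
        "project_c": {"program": "program_2", "people_reached": 20_000},
    }

    # Roll project results up to program level.
    program_totals = {}
    for data in projects.values():
        program = data["program"]
        program_totals[program] = program_totals.get(program, 0) + data["people_reached"]

    # Roll program results up to a fund-level total.
    fund_total = sum(program_totals.values())

    print(program_totals)  # {'program_1': 20500, 'program_2': 20000}
    print(fund_total)      # 40500 -- valid only if no one is counted by two projects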

An informative, evaluative learning purpose

MEL systems have a role in thoroughly answering questions such as what works, in what contexts, and why. This is essential if past experience is to inform future climate and development policy and programming.

An “evaluative learning” approach prioritizes understanding audiences and their knowledge needs from the outset, asking questions such as: Who are the key audiences for this evaluation? What do we know about how they consume and apply evidence and knowledge? How will they “use” and apply the findings and conclusions of the evaluation?

Itad is not the only organization taking this approach; see, for example, DFID’s work on climate learning and Michael Quinn Patton’s work on utilization-focused evaluation.

Evaluative learning answers questions of “explanation” — meaning how, where, when and why do climate change interventions work; what can be learned; and how can good practice be replicated?

To answer these questions, we are increasingly adopting, adapting and applying “realist evaluation” approaches to our climate change MEL systems.

Realist evaluation assumes that context makes an important difference to outcomes and that no intervention works everywhere or for everyone; it focuses on explaining why interventions may or may not work, in what contexts and circumstances, and for whom. This approach is particularly important for climate change as a complex problem, where climate-related shocks and stresses are unpredictable and continually evolving. The U.K. government’s 5.8 billion pound ($8.33 billion) International Climate Fund has a new MEL system, which applies realist evaluation for this reason.
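
One common way realist evaluation is operationalized is through “context-mechanism-outcome” configurations. The minimal sketch below, with invented example values, shows how such configurations might be recorded so that findings stay tied to where, how and for whom an intervention worked; it illustrates the general approach, not the ICF’s actual MEL system.

    from dataclasses import dataclass

    # Minimal sketch of a context-mechanism-outcome (CMO) record, a common building
    # block in realist evaluation. Field values are invented examples.
    @dataclass
    class CMOConfiguration:
        context: str    # where and for whom the intervention operates
        mechanism: str  # how the intervention is expected to generate change
        outcome: str    # what was actually observed in that context

    example = CMOConfiguration(
        context="Drought-prone smallholder communities with weak market access",
        mechanism="Index-based insurance lowers the cost of a failed harvest",
        outcome="Households retain productive assets through the dry season",
    )

    # Evaluators compare many such configurations to explain why an intervention
    # works in some contexts and not in others, rather than reporting one average effect.
    print(example)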

Can an adaptive and flexible programming approach help?

One potential weakness of evaluative learning approaches applied to conventional climate change programming is that evidence and learning tend to be generated “ex-post,” meaning at the end of relatively long three- to five-year program cycles. This means learning to support change is often sluggish.

A new approach to generating more timely and focused evidence and learning is to adopt the principles of “adaptive and flexible programming” by recognizing two things: First, climate change is a serious problem, socially complex and characterized by unforeseen consequences, myriad potential responses, and a wide set of stakeholders. Second, the relationship between cause and effect is complex and emergent, and therefore appropriate actions to address climate threats need to take the form of “probe-sense-respond” experimental feedback loops.

Adopting these principles and incorporating them into MEL systems requires shorter learning loops — “act-observe-reflect-plan” — than traditional monitoring and evaluation approaches.
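
As a rough sketch of the difference this makes, the loop below cycles through act-observe-reflect-plan several times within a program period instead of waiting for a single ex-post evaluation. The phase functions are placeholders for program-management activities, not anything prescribed by the approaches named above.

    # Illustrative sketch of short "act-observe-reflect-plan" learning loops.
    # Each function stands in for a program-management activity, not real code.

    def act(plan):
        print(f"  act: deliver {plan}")
        return "delivery record"

    def observe(delivery):
        print(f"  observe: collect monitoring data on the {delivery}")
        return "evidence"

    def reflect(evidence):
        print(f"  reflect: interpret the {evidence} with stakeholders")
        return "lessons"

    def replan(lessons, next_cycle):
        print(f"  plan: use the {lessons} to revise the next workplan")
        return f"cycle {next_cycle} workplan"

    plan = "cycle 1 workplan"
    for cycle in range(1, 5):  # e.g. four intra-annual cycles, not one end-of-program review
        print(f"Learning loop {cycle}:")
        lessons = reflect(observe(act(plan)))
        plan = replan(lessons, cycle + 1)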

Communities of practice, such as the Rockefeller Foundation’s Resilience Measurement Community of Practice, can help facilitate progression through these learning loops while filling gaps in knowledge and understanding relevant to a particular climate program.

The challenge in the context of climate change policy and programming is defining an appropriate timescale and frequency for the evidence generation and learning loops. Tailoring this to the context of the program and the needs of the end user is important, with the current emphasis placed on “intra-annual” learning loops, which facilitate within-year reflection, course correction and adaptation.

While best practice in adaptive and flexible programming is still evolving, it is important that:
• Programs have systems for planned short-cycle experimentation, piloting and testing designed in from the outset.
• Adaptive and flexible programming is not the sole “driver” of program implementation, particularly when climate change programs operate in complex and unpredictable contexts, which often slow and constrain implementation.
• Linkages between core program functions, particularly MEL, knowledge management, planning and strategy, are coherent and mutually reinforcing.

Optimizing the value of MEL systems

Given the optimism surrounding the global climate agreement, and a corresponding shift in emphasis from negotiation to implementation, MEL systems are more prominent and important than ever.

Going forward, these systems need to be designed with two key ideas in mind. First, they should serve multiple purposes, from ensuring accountability for how the billions committed through climate finance are spent to supporting a deeper understanding of the results this finance delivers: what works, in what contexts and why? Second, they should ensure that lessons are learned rapidly to support course correction as well as scaling up and scaling out investments in the face of intensifying climate shocks and stresses.

We welcome the experience and insights of others as we strive to develop MEL systems that flexibly and adaptively meet our collective post-Paris needs.


About the authors

Sarah Standley

Sarah Standley joined Itad as a consultant in 2013. She has extensive experience working on the dual relationship between sustainable development and climate change. She has a consultancy background, and now specializes in the use of evidence and learning to inform policy and practice in international development cooperation.


Robbie Gregorowski

Robbie Gregorowski is an associate director at Itad, where he has been leading the climate resilience portfolio since 2009. He is currently leading the M&E for the £150 million DFID Building Resilience and Adaptation to Climate Extremes and Disasters (BRACED) programme. Robbie has considerable expertise in monitoring and evaluation, particularly in the fields of climate change resilience, evidence-informed policy, and organisational assessment.


Dave Wilson

Dave Wilson is a senior consultant at Itad and works in the climate change theme. He has 10 years of experience designing, managing and delivering environmental and climate change projects in the UK and overseas, and has significant experience delivering multi-country research projects including development of robust methodology protocols and monitoring and evaluation frameworks.

