Do your M&E systems stack up to disco-era standards?

By Michael Klein 04 October 2016

Akabondo Mainais (left), a monitoring and evaluation officer for Climate Investment Fund projects in western Zambia, talks to members of the Nalunau family. Photo by: CIF / CC BY-NC-ND

When it comes to monitoring and evaluation, it’s hard to ignore the fact that we’ve made little progress over the past 40 years. Indeed, we’re still trying to catch up with the original pioneers in the field.

Imagine where we could be today if our industry had adopted the following sage advice when it was first issued, almost half a century ago: “Build an information system into [your] project so that the necessary data [can] be collected in the course of regular project operations. Such a system can provide timely, relevant information that can be used by decision-makers throughout the course of the project.”

This quote is pulled from a 1979 report authored for the U.S. Agency for International Development titled “The Logical Framework.” It draws on research — begun in the 1960s by Lawrence Posner and Leon Rosenberg — on the use of logical frameworks for program management, a framework widely employed in development today. However, that’s about the extent to which we’ve adopted the recommendations outlined in the authors’ research, which include taking a holistic approach to collecting and using data.

The 1979 report includes observations and recommendations that we still struggle with today. As an example, below are key takeaways regarding problems with USAID’s project monitoring and evaluation system as of 1969 (taken verbatim from “The Logical Framework”):

1. Planning.

Objectives were multiple and not clearly related to project activities. There was no clear picture of what the project would look like if it were successful. Thus, evaluators could not compare — in an objective manner — what was planned with what actually happened.

2. Management responsibility was unclear.

Project managers were committed to the fact that projects must be justified in terms of their ultimate benefits (“impact”) yet were reluctant to be considered responsible for impact; there were too many important factors outside their control. They found it difficult to articulate what they should be responsible for, and ended up not accepting any responsibility for results.

3. Evaluation was an adversary process. 

In the absence of clear targets and amid frequent disagreements (even among project team members) as to just what the project was about, evaluators ended up using their own judgment as to what they thought were “good things” and “bad things.” The subsequent evaluation results would then frequently become a basis for further argument about what was good or bad, rather than resulting in constructive actions for project improvement.

Does any of this sound eerily familiar?

Sitting at my desk in 2016, I am awestruck by how pressing these problems continue to be — and how relevant a paper published in 1979 still appears. How do we go about fulfilling this vision for a more holistic approach to M&E?

The good news is that we have made progress, albeit in a piecemeal fashion. There is a bevy of tools available to help organizations address the issues highlighted by Posner and Rosenberg — not only tools to logically map a particular intervention but also tools to put in place the type of “information system” that the authors first recommended during the Nixon administration. Better yet, these tools no longer require the use of punch cards for data entry and are increasingly accessible to the non-techies among us.

The bad news is that development organizations still face many of the organizational challenges the authors discussed. So while we’re making great strides toward better M&E and the development of systems to support our work, there’s still much work to be done in integrating those systems and processes into our organizations.

How do we bridge the gap?

The first step is instilling an appreciation for data and results-based management into our organizations. Thankfully, I believe we’re well on our way in this regard (albeit a few decades late), and the movement toward better data and information systems is inevitable. Living in the information age has forced our hand; there’s no turning back.

Next, we need to ensure the data we’re collecting is used and useful. This requires acknowledging that we’re past the point of talking about technology as an optional extra for organizations. This holds true for the full spectrum of development actors — from large U.N. agencies to small NGOs. Again, stealing from Posner and Rosenberg, we need to be investing in information systems “[that] provide timely, relevant information that can be used by decision-makers throughout the course of [our] projects.”

There’s no shortage of resources to assist in fulfilling this long-unachieved vision. Tools include online classes (such as those offered by TechChange), conferences (MERL Tech and ICT4D) and guides and discussion papers on the issue (NetHope’s Back-Office IT Guide, Bond’s Investing in MEL Guide, and Rockefeller’s Monitoring and Evaluation in a Tech-Enabled World).

How you choose to engage is up to you. That said, since they are approaching their ruby anniversary, I think we can all agree it’s time to bring Posner and Rosenberg’s recommendations to life and get serious about past failings and our future success.


About the author

Michael Klein

Michael Klein is a director of International Solutions Group, a company that works with governments, U.N. agencies, international organizations, NGOs and other companies to improve the implementation of humanitarian aid and development programming. Klein is based in Washington, D.C., and is a member of ALNAP, Washington Evaluators and the American Evaluation Association.

