
The advent of new initiatives like the International Aid Transparency Initiative (IATI), which records detailed operational information on aid projects, is good news for those of us who have been asking for years for something like this to happen. The lack of a broad base of experience is one of the main obstacles to learning how to design better aid. As longtime observers of the scene, we expect a big leap in concept development once the improved information flow materializes at scale.
Being able to evaluate and compare not one but a large number of operations used to be a dream. A toolbox that puts the right things in the right places could give a completely new meaning to the term "evaluation": comparisons and conclusions of unprecedented value would become possible. The condition is a format able to absorb the flood of data we can expect and allow us to "evaluate" it. Let us therefore examine what requirements such an evaluation format would need to fulfil.
Universal applicability and comparability
The typical evaluation is an in-depth analysis of a single project or program, followed by conclusions and recommendations. Usually, the focus is defined by the project design and by specific lists of questions set by the project designers. Two things are needed, but not provided, if lessons are to be learnt beyond each individual case: broad, if possible universal, applicability of the questions asked, and comparability of results.
Underlying qualitative criteria
Any intervention is intended to bring some kind of improvement to a given development setting. This setting, and its likely future (or at least well-founded assumptions about it), must be known as the baseline for any properly planned intervention. To be comparable, the system boundaries must be wide enough to accommodate any conceivable setting, operational mode, and outcome. This naturally includes all criteria that describe the human population, and any target groups singled out in view of their economic, social, and cultural situation and prospects. The geographic and environmental setting must also be covered, as these are key features.
The concept would have to characterize all important development issues comprehensively, but, at the same time, be simple enough to be manageable.
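To make this less abstract, here is a minimal sketch, in Python, of how such setting criteria might be grouped. The category names and values are our own illustrative assumptions, not a defined AidRating or IATI schema:

```python
# Hypothetical baseline record: broad top-level categories with
# open-ended content, wide enough for any conceivable setting.
baseline_setting = {
    "population":  {"size": 12_000, "target_group": "smallholder farmers"},
    "economic":    {"main_income": "agriculture", "poverty_rate": 0.62},
    "social":      {"literacy_rate": 0.48},
    "cultural":    {"languages": ["Swahili", "Maa"]},
    "geographic":  {"terrain": "semi-arid plateau"},
    "environment": {"rainfall_mm_per_year": 450},
}
```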
Dimension
A basic element of all comparison is dimension, be it numbers or size. Unless we know how large an intervention is, we cannot assess it. Do we want to improve the lives of 100 families, or of a million? Is the area which should benefit from our project one village, or a whole district, region, or country? Without a measure of size, we cannot tell fleas from elephants.
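As an illustration only (the field names are hypothetical, not prescribed by any standard), dimension could be captured in a small record that makes interventions of very different size directly comparable:

```python
from dataclasses import dataclass

@dataclass
class InterventionScale:
    """Hypothetical record of an intervention's dimension."""
    beneficiary_households: int   # e.g. 100 families vs. 1,000,000
    geographic_level: str         # "village", "district", "region", "country"
    budget_usd: float             # total committed funds

# A flea and an elephant, now distinguishable at a glance:
well_project = InterventionScale(100, "village", 50_000)
national_program = InterventionScale(1_000_000, "country", 250_000_000)
```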
Time vector
The point on a project's timeline at which it is reviewed is essential to appreciating its results. At the beginning, even the best plan is nothing but a plan; it takes time for intended results to become visible. Whatever intervention is examined, the point in time at which results can be expected must be part of the concept.
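A sketch of what this might look like, again with assumed milestone names rather than defined fields: a review is only meaningful once it is positioned against the project's own timeline.

```python
from datetime import date

# Hypothetical project timeline. A review in 2012 would find mostly
# plans; a review in 2015 should find measurable results.
timeline = {
    "implementation_start":     date(2012, 1, 15),
    "handover":                 date(2014, 12, 31),
    "results_expected_visible": date(2015, 6, 30),
}

def review_stage(review_date: date) -> str:
    """Classify where a review falls on the project's time vector."""
    if review_date < timeline["implementation_start"]:
        return "plan only - no results can be expected yet"
    if review_date < timeline["results_expected_visible"]:
        return "implementation - partial results at best"
    return "mature - intended results should be visible"
```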
The role of “management”
Form follows function: management should follow the needs of the given objectives in the given environment. "Management" in this sense covers everything from the first fact-finding mission to the final handover of the project. It must ensure that objectives are met effectively and efficiently, that resources are well used, deadlines are met, results last, and negative side effects are avoided. The shape this takes depends on the given setting: management performance should express optimum adaptation to setting and objectives, not dominate them.
Data quality and traceability
Understanding a large and complex project in a difficult environment often suffers from a lack of information on the many questions that need answers. Sooner or later, there is a practical limit to the amount of data available for an informed judgement. The format must therefore use data where they exist, but also admit assumptions where data are in short supply. A tagging and tracing system for data origin and quality should be in place, so that entries can be verified and improved where possible.
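One way such a tagging system could look, as a sketch under our own assumptions (the quality grades and field names are illustrative, not a defined vocabulary): every value carries its origin and a quality tag, so an assumption can stand in where hard data are missing and be replaced once a verifiable source appears.

```python
from dataclasses import dataclass

@dataclass
class TaggedValue:
    """A data point that carries its own provenance and quality tag."""
    value: float
    source: str   # e.g. "household survey 2011", "evaluator assumption"
    quality: str  # e.g. "measured", "reported", "estimated", "assumed"

# An assumption keeps the evaluation moving when data are scarce...
crop_yield = TaggedValue(1.8, "evaluator assumption", "assumed")

# ...and is visibly an assumption until it can be upgraded.
crop_yield = TaggedValue(2.1, "district agricultural survey", "measured")
```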
Continuous improvement
A "universal evaluation standard" should have the built-in capability to adapt to future improvements and developments without older experiences having to be greatly modified, or dropped for lack of compatibility. This is best achieved if the criteria are broad enough to accommodate new findings within them, rather than requiring new criteria to be added. The criteria list, once again, is therefore essential and will necessarily have to be wide.
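A brief sketch of the difference this makes, using hypothetical names: new findings are absorbed inside an existing broad criterion, so older records remain comparable rather than incompatible.

```python
# Hypothetical criteria store: broad top-level categories, open-ended content.
criteria = {
    "environment": {"rainfall_mm_per_year": 450},
}

# A new finding fits inside an existing broad category...
criteria["environment"]["groundwater_depth_m"] = 35

# ...so older records stay comparable: they simply lack the new key,
# rather than conflicting with a changed criteria list.
```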
It seems a daunting task to find and define a standard that satisfies all these requirements. We at AidRating think it is worth the effort to see whether this is possible, and how such requirements could be met in practice.
Because of this, we have attempted to design an evaluation framework capable of fulfilling all these requirements. We have found one and would like to present it for discussion and possible further development. We accept that there may be other good ideas and invite experts to join a broad discussion on application and possible improvements. A coordination effort with the data definitions of the most advanced reporting, namely IATI's, might also be useful.
This is an abridged version of a longer article. It can be found under "Universal Evaluation Standard." An overview and practical example of use can be downloaded from the AidRating homepage.
Information drawn from the database has a strong field and results focus, which should make it particularly interesting for civil society organizations in partner countries. The information needed according to the AidRating concept to understand a given project can be downloaded here.
For comments and questions, please contact Jan Stiefel, Project Head AidRating, at jan.stiefel@ideas-expert.ch. He will attend the Busan Civil Society Forum workshops Nov. 26-27, 2011, and main events at the Fourth High Level Forum on Aid Effectiveness in November-December 2011.