Let's join forces on a bottom-up approach to aid transparency: Lessons from AidRating's experience in Switzerland

Beneficiaries of the Swiss Foundation for Technical Cooperation's vocational training program in Dhaka, Bangladesh. Swisscontact is one of the Swiss agencies covered by AidRating's evaluations. Photo by: Swisscontact

At AidRating, we have been interested in aid transparency for a number of years. Our starting point was the reasoning I outlined in my recent Full Disclosure blog contribution, “What Full Aid Transparency Could Deliver”: to design a concept that allows us to analyze aspects of aid interventions in relation to their setting, to estimate their quantitative significance through stratified sampling, and, to make the concept complete, to allow full-fledged comparisons by applying a universally applicable evaluation template.

When we completed and trial-ran the evaluation concept, we found that in Switzerland, the publicly available project information was far from allowing even a modest guess as to what was actually being done. This experience, in 2007, motivated our team to design an advocacy-driven approach: motivate aid agencies to improve their reporting by applying competitive pressure.

We did this as follows: First, we identified the 10 largest private aid agencies. For each of these, we then established the total number of projects it ran and the number of projects on which we could find information. This provided the first key indicator: a percentage showing the coverage of project activities on which there was public reporting. (We could not apply the same concept on the basis of budget percentage, as most Swiss agencies do not disclose their project budgets.)
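To make the indicator concrete, here is a minimal sketch of the calculation; the function name and figures are ours for illustration, not AidRating's:

```python
def coverage_percent(projects_reported: int, projects_total: int) -> float:
    """Share of an agency's projects on which public information exists."""
    if projects_total == 0:
        raise ValueError("agency must run at least one project")
    return 100.0 * projects_reported / projects_total

# Hypothetical agency: 120 projects in total, 45 publicly documented.
print(coverage_percent(45, 120))  # -> 37.5
```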

Next, we collected all the information on the projects we had identified. Of these, we took a random sample of eight projects. For each, we charted the information provided along 10 “key questions,” namely: project setting; beneficiary groups; project objective; lead agency; main activities; risk elements; start date and planned duration; output or impact indicators; sustainability aspects; and expenditure from the start as well as annual expenditure. Each answer was rated on a five-step scale from 0, or “no information,” to 4, or “full information,” the latter being equivalent to 100 percent. The resulting percent value per project was recorded, along with the average and standard deviation across the eight projects in the sample. This average provided a figure indicating the quality of project information, or “reporting depth.”
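The scoring itself is simple arithmetic. Below is a minimal sketch of how such a reporting-depth figure can be computed; the ratings are invented and the helper names are ours. It assumes only what is described above: ten questions rated 0 to 4 per project, converted to percent, then averaged over the eight sampled projects with the standard deviation recorded:

```python
from statistics import mean, stdev

MAX_RATING = 4  # "full information" on the five-step scale (0-4)

def project_depth_percent(ratings: list[int]) -> float:
    """Convert one project's ten question ratings (0-4) into a percent value."""
    assert len(ratings) == 10, "one rating per key question"
    return 100.0 * sum(ratings) / (len(ratings) * MAX_RATING)

# Hypothetical ratings for a sample of eight projects (ten questions each).
sample = [
    [4, 3, 4, 4, 2, 0, 3, 1, 2, 0],
    [2, 2, 3, 4, 3, 0, 2, 0, 1, 0],
    [4, 4, 4, 4, 4, 2, 4, 3, 3, 2],
    [1, 0, 2, 4, 1, 0, 0, 0, 0, 0],
    [3, 2, 3, 4, 3, 1, 2, 1, 2, 0],
    [2, 1, 2, 4, 2, 0, 1, 0, 1, 0],
    [4, 3, 3, 4, 3, 1, 3, 2, 2, 1],
    [0, 0, 1, 4, 1, 0, 0, 0, 0, 0],
]

depths = [project_depth_percent(r) for r in sample]
print(f"reporting depth: {mean(depths):.1f}% (SD {stdev(depths):.1f})")
```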

The two figures could then be compounded, providing an overall “transparency score.” Each agency was then confronted with these results and some additional questions, with the possibility to clarify or rectify misunderstandings. After incorporating the outcome of this consultation, the results were published in a ranking with comments. (Read an example of our methodology and results.)
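As a minimal sketch, assuming the two percentages are compounded by simple multiplication (other weightings are possible, and the actual rule may differ), the overall score can be computed as follows, with figures carried over from the hypothetical examples above:

```python
def transparency_score(coverage_pct: float, depth_pct: float) -> float:
    """Compound coverage and reporting depth into one overall score.

    Assumed rule: multiply the two percentages, so full reporting depth
    on only 40 percent of projects yields a score of 40 percent.
    """
    return coverage_pct * depth_pct / 100.0

# Hypothetical figures from the sketches above.
print(transparency_score(37.5, 46.25))  # -> ~17.3
```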

The first such ranking was published at the end of 2008. Two further annual rankings were conducted and published in 2009 and 2010.

In the beginning, the response was one of disbelief. Some claimed that aid agencies cannot be compared. Others claimed that the public was not interested in such data, that it was too much work to specify each of their projects, or that our methodology was not serious. However, our data were eagerly downloaded from our site, and our printed report sold out. In the later exercises, some seemed to have taken a slightly friendlier stance. The agencies that had scored poorly, however, later stated that they would not participate because they “had not been involved in the design of the study from the beginning.”

From within the agencies, on the other hand, we got positive feedback. The exercise was considered “useful” and “necessary.” We also saw significant changes in reporting on websites and in annual reports and publications, much in the desired direction. One of the agencies publicized some of the correspondence; another is using the rating for its publicity. Annual rating exercises have now become routine for us. This year, we plan to conduct our rating in the established way, add an example of impact analysis, and hope that we can afford to attend the Busan conference and present our results there.

The latest response to the International Aid Transparency Initiative that we have seen here in Switzerland suggests that although the standard is officially upheld by the authorities, resistance to its application is strong. I have come to think that an approach from both sides will be needed: the top-down approach of standards agreed upon by government entities, for example via IATI, and the bottom-up approach of civil society pressing for better transparency, as we at AidRating are trying to advocate. To broaden our collective reach, we would like to see a network forming.

Would anyone be interested in joining?
