In post-conflict zones like Afghanistan, rapid turnover among development personnel hinders effective learning. KPMG's Trevor Davies explains why the development community needs to share knowledge, admit mistakes and course correct when things go wrong.

I’ve spent more than 20 years in international development, most of that time in fragile or post-conflict states. In that time, I’ve come to see a disconnect between intentions and outcomes.

We all want to ensure the $250 billion committed each year to international development is spent wisely. Yet we do not learn enough, or learn quickly enough, from the work that’s already happening – or happened – on the ground.

This knowledge-management gap means we are constantly missing opportunities to improve the quality of our work and its impact on the world. Several factors contribute to this gap.  

First, the cycle of evaluation is too slow, and its findings are often out of date or not relevant when they do arrive.

Consider the typical project cycle.  Most donors – following the Paris and Busan declarations on aid effectiveness – want to work more closely with the recipient governments and ensure that projects are country-led. At the same time, donors need to ensure these programs meet their own standards and fit within their own agendas.  

As a result, the project design phase is an iterative process, and it takes time – in some cases, one or two years, or even longer.

With the design phase complete, the donor can put in place a baseline against which progress can be measured. In the meantime, they tender for a supplier or implementer, which can take anywhere from three to 12 months, depending on how the contract is structured. Finally, the implementer gets the project underway. Let’s say it continues for five years. When it’s complete, a post-project evaluation is conducted.

In total, that’s roughly an eight-year cycle: up to two years of design, up to a year of tendering, five years of implementation, plus the evaluation itself. At what point are you learning lessons and feeding them back into the project? Begin a project in 2013, and you would not learn its lessons until 2021.

So the results are slow to arrive, and when they do, they are considered out of date. A donor will say: “Well, yes, that project was flawed, but we don’t use that framework anymore.”

And this is the second factor contributing to the knowledge-management gap: our rapid jumping from one development concept to the next.

In the two decades I’ve been involved in development, I’ve seen so many approaches and methodologies. At one time, the new idea was to have stand-alone programs. At another time, it was to have a holistic approach. Then it was spot interventions, then civil society, and so on. Each of these ideas was state-of-the-art thinking at the time; guidance notes went out, and projects were designed around them.

In many ways, this rapid evolution in thinking reflects our growing understanding of what works and what doesn’t. But in another sense, we are simply coming up with new ideas to engage constituencies – whether those are taxpayers or lawmakers or check-writing philanthropists. We are finding new ways to tell the same story.

Whatever the cause of this rapid cycling of ideas, the result is that lessons accumulated over the years through post-project evaluations become much less useful, because they are seen as the products of project designs that are no longer considered valid.

The third factor in our knowledge-management gap is caused by personnel changes at the field level. In many development agencies, people rotate through positions quite rapidly – especially in fragile or post-conflict regions.

Take Afghanistan, where development agencies might post someone for one year. Because that person may take personal leave out of the country for at least some of that year, they might be in country only 40 weeks of it. Multiply that across an office, and it’s rare that you have a full complement of staff at any one time.

And then, because these are risky environments, certain types of people volunteer for these assignments. Older people with more experience, who often have families, prefer to work at headquarters, so you get younger, less experienced staff. Yet these are complex environments, where experience and political nuance are needed. A few people are very committed and simply go from country to country, but many go to the field early in their careers and then return to the head office.

The result is you may have a project conceptualized by one person, designed by a second, commissioned by a third, monitored by a fourth, and evaluated by a fifth. The question is: How can we effectively learn from this project if there is little sense of ownership and continuity?

To bridge this knowledge-management gap, I believe we need to ramp up in-project monitoring and real-time impact evaluation, so that we can learn lessons today and then feed those lessons back into the project.

Of course, everyone will say they are doing this type of monitoring, but often it’s just an exercise in ticking a box. It’s not as exciting or sexy as the new quasi-scientific approaches like randomized controlled trials and long-term evaluation – both of which have a role to play – but it does give you the opportunity to course correct.

It’s a bit like the Titanic: Do you want to course correct and miss the iceberg, or do you want to set up a comprehensive review of why the ship hit the iceberg and what lessons could be learned? Both are valuable approaches, but I know which ship I would rather be on.

And with the economic crisis now putting pressure on aid budgets, we’ve got to make sure that every dollar we spend delivers the biggest possible development impact. If something is not working, we need to be able to identify what is not working, take corrective action while the project is ongoing and get back on track.


Some donors are doing in-project evaluation, and doing it well, including the Millennium Challenge Corporation and the U.K.’s Department for International Development.

There’s always a tendency to try to polish things, to get methodologies to 100 percent. In fact, what is important is to learn from what we’re already doing so we can do it better.

KPMG is walking that walk of knowledge management through our new International Development Center of Excellence. Because KPMG is organized around clients, we have teams working with key stakeholders in the development community around the world, including AusAID, the Asian Development Bank, DfID, SIDA, the United Nations, the World Bank, and many philanthropic organizations.

As you can imagine, this means we may be working on very similar issues, which makes global collaboration and knowledge sharing even more critical as a factor in the success of projects.  Through the Center, we aim to bring those insights together, so we can improve development practices.

It’s a small step, and we have a long way to go. But knowledge management is too important not to get right. Because at the end of the day, development is not about coming up with a smart methodology or writing a clever report. It’s about changing people’s lives. They cannot afford to wait for results, and neither can the development community.


The views in this opinion piece do not necessarily reflect Devex's editorial views.