BRUSSELS — If Dan Honig were put in charge of the United States’ aid agency tomorrow, the assistant professor of international development at Johns Hopkins University knows exactly what he would tell President Donald Trump.
“Part of managing performance is giving people the space to perform or to fail,” Honig would tell the president. “And what we’ve done inside the U.S. Agency for International Development is write so many rules and regulations that people are following the rules rather than getting to the point.”
Honig’s new book, “Navigation by Judgment,” argues that the skills and local knowledge of aid workers in countries are often more important to the true success of a project than the pre-set targets imposed by headquarters.
“Data is an input but it’s not the answer and we need to start treating it appropriately,” Honig continued to an imaginary Trump at a Brussels cafe recently, “as a way we learn about what we’re doing, rather than the answer to whether or not we’ve succeeded.”
Informed by his experiences in countries such as Liberia, Thailand, East Timor, and Israel, Honig started with a simple hypothesis: Reporting requirements too often get in the way, and people on the ground should be given more autonomy to make decisions — and if necessary, mistakes — in order to learn how to do better.
Now, after building a database of 14,000 projects spanning 40 years, 180 countries, and nine agencies in each category of aid recognized by the Organisation for Economic Co-operation and Development, Honig said he has the empirical evidence to prove it.
“The answer isn’t that we should always empower field agents as much as we can,” Honig said during a recent lecture in Brussels. “It’s horses for courses — appropriate management practice for appropriate projects.”
He found that as environments become more unpredictable, agencies that empower local staff generally are able to cope better. More top-down control means more oversight, more standardized behavior, and potentially more extrinsic motivation through things such as performance pay. But he also warned of distortions as people do more of what is measured and ignore what isn’t, even when the latter — such as gauging a country’s political environment — is vital to a project’s success.
“If what we are doing is collecting data that’s driving our projects in the wrong direction, then our sense of accountability based on those numbers was pyrrhic,” he said. “It seemed like we were getting results, but we weren’t.”
For Honig, “navigation by judgment” means more staff initiative, more information being gathered and used that is difficult to quantify, and more flexibility. But it also involves more potential for errors in the form of “wrong judgments or bad actions.” To those worried about foreign aid’s reputation being hurt by stories of fraud or malpractice, Honig says the sector needs to get over its glass jaw and give the public and politicians more credit.
“When we treat the public like they can’t understand what the job is, and all they can do is respond to flat pictures of tragedy and give money, I think we make the bed we then lie in, to some extent,” he said.
“If the central bank changes the interest rate and it turns out that was the wrong idea, we criticize the central bank for being wrong, we don’t say, ‘we shouldn’t have an [independent] central bank anymore.’”
That’s because, just as with surgeons and police, Honig said, “we recognize this is a task that requires judgment.”
Preparing a test
Drawing on one of eight qualitative case studies in the book, Honig contrasted the work done by USAID and the United Kingdom’s development arm, the Department for International Development, in the same part of South Africa at the same time. Both agencies wanted to improve municipal governance — by training locals in better accounting practices, for instance — yet the two took different approaches.
USAID organized workshops run by consultants in communities. DFID put advisers in the field, relocating its people — again mostly consultants — to live for one to two years in the municipality. For USAID, the goal was simple: training people.
Whereas “For DFID, [the goals] were a lot fuzzier,” Honig said. “[Improved] planning and budgeting, unblocking delivery obstacles, and achieving institutional coherence.”
USAID headquarters knew how many trainers went where, how far they drove, how much was spent on food, who attended the trainings, and even the dimensions of the classrooms where the sessions took place. What couldn’t be shown, however, Honig said, was “whether anyone learned anything and whether they did anything with that learning, because those are much harder to verify, much harder to count.”
Meanwhile, DFID had scarcely any record of its presence in South Africa other than the reports its advisers wrote at the beginning and end of their stay.
“I don’t tell my students, ‘why don’t you go and prepare the test you think is appropriate, then take the test, then mark your exam and tell me how you did,’” Honig said. “That’s essentially what DFID did here. DFID said ‘you figure out what the plan is, you execute the plan, then you tell me how you did.’”
The result? The deputy head of the USAID project and those who delivered the trainings admitted ruefully to Honig that they might not have achieved much at all. Without enough emphasis in the reporting requirements on what happened at the training itself, the implementers “just had to show up, take attendance and then move on,” Honig recounted.
For DFID, some municipalities didn’t register much improvement, but others were found to be success stories. Honig said the key was allowing consultants to make judgment calls on how their work should be tweaked, leading to people doing different things in different places at the same time.
For Honig, the USAID project represents a design failure.
“People talked about having indicators chosen because they were easier to count,” he said. “But those numbers don’t tell about impact.”
Honig sat down with Devex to discuss his findings, and what else he would tell the U.S. president.
This conversation has been edited for length and clarity.
Should you become the head of USAID tomorrow, what would you tell Trump about how to fix the problems described in your book?
‘We are counting cents rather than making sure every cent counts, and what I hope to do, Mr. President, is bring in more flexibility, more opportunities for my staff to do what they want to do, and evaluate that.
What I’m suggesting is that you give me license to change the authorizing environment, to change the incentives staff respond to in a few countries for a few years. And let’s see how that works. I agree with you when you say — as you did often on the campaign trail — that a lot of foreign aid is wasted, and I think one way to improve that situation is to change fundamentally how we think about delivering it.
We’re gonna make small bets and see how things work, and we’re going to adjust. We might end up spending $3 million but we’re going to do it by spending $250,000, 12 times — I’ll be honest with you Mr. President, I don’t think we’re wrong, but we could be.
However, if I’m right, that means we’re not wasting $3 million, we’re wasting $300 billion. I think $3 million is a worthy price to pay to find out whether we’re wrong about what we do with hundreds of billions.’
For those small bets, we want projects where we can assess what’s going on quite frequently. Municipal governance isn’t a bad choice because you’ve got lots of municipalities and people work differently in different places — so you can give them different sets of rules. Supporting a variety of local health clinics works the same way. We want to exploit some natural variation — as an academic would say — and do things differently in some places than we do in others.
What’s an example of where important information was not gathered because it wasn’t relevant to the reporting criteria?
There was a project in East Timor to help agricultural extension agents improve the services they were offering to farmers. The project trained the agents, and by doing so, it made them more valuable on the open market, and almost all of them got jobs outside the public sector. The people on the ground knew what was happening — East Timor’s not a big place. They wanted to change the project, but doing so meant going back to headquarters and demonstrating that the current model wasn’t working, which was in no one’s career interest. So the consultants implementing the project stopped paying attention to what was happening after they completed the training, and this, in turn, appeared to influence how much effort they put into the trainings and the project as a whole.
Political economy analysis could also fit here. For example, ‘I think the deputy minister of this project is falling out of favor, the new deputy minister is likely to be x.’ I need to talk to a lot of people to get a sense of that, and it also depends on being able to use that information to change the positioning of the project before anything verifiable has happened. But if the project can’t be changed, then why bother?
When is top-down management to be preferred?
I do have one case where USAID is delivering anti-retroviral drugs through the President’s Emergency Plan for AIDS Relief, or PEPFAR, to pregnant South African women, focusing on prevention of mother-to-child transmission of HIV. They count everything and they incentivize people to put the drugs in the hands of women while they are pregnant and train them to use these drugs. For tasks like that, the measurement regime does much better than a kind of fuzzy, navigation-by-judgment regime. The numbers allow us all to fix our attention on the thing that we need to deliver, and it allows the government, in this case, to be held to account by donors.
Is an overreliance on consultants partly to blame for the problems you describe?
The current organizational structure of consulting is the problem, rather than the consultants. A lot of people who work as consultants — including myself — want to do good things; they want to pay the rent, but they also believe it can be valuable work.
But they then have to manage according to the targets, even when both they and the people they are reporting to think the targets don’t make sense. It’s often a conversation between the consultant and the manager where both know they are playing a kind of kabuki theatre game, with numbers and targets and deliverables, but that’s the way the contract is written.
I think there would be an argument for bringing those people in as staff and extending the bounds of the agency because it would allow us more long-term time horizons, more flexibility, more relational contracting — and less time fixing things.