The dangers of AI in global development
The opportunities for artificial intelligence to enhance development work are plentiful — but should the sector be worried about the pitfalls? We asked the experts.
By Rebecca L. Root // 11 January 2024

From privacy breaches to bias to job replacement to increased cyber threats, the risks of artificial intelligence have been widely touted. These pose threats across any industry, but experts say extra care must be taken in the development sector.

“AI's potential in development is vast, but it's a double-edged sword. Without careful consideration of questions of equity informing how new tools are designed and deployed, AI can amplify existing harms and create new ones,” Vilas Dhar, president of the Patrick J. McGovern Foundation, which explores how advances in information and technology can improve quality of life, told Devex.

Already the foundation, like many aid organizations, is using AI to do disaster damage assessments, predict local weather patterns for agricultural insight, tackle pollution, detect disease patterns, and more.

But Chinasa T. Okolo, a fellow at the Center for Technology Innovation at the Brookings Institution, worries that “some organizations may be pushing AI development without understanding the respective implications, both good and bad.”

Devex asked the experts what the negative implications might be and how development professionals can mitigate them.

The challenge of biased algorithms

One of the biggest risks is that AI systems are built with data that can lack representation, Dhar said in an email. Just like with people, AI can then exhibit bias, affecting already marginalized communities.

“For example, we learned with our partners, Instituto Protea, that AI-enabled breast cancer scans were biased to the global north, rendering the tool less effective when deployed in Brazil,” Dhar said. The original prediction model was trained on global north data from well-resourced medical systems and could fail when applied to populations with lower access to diagnostic equipment, or where there are racial and geo-social differences in how diseases present.
Even if the data were good, it rarely tells the whole story. In Jordan, Human Rights Watch claimed that a World Bank cash transfer program that used AI to estimate the economic vulnerability of potential recipients had inadvertently excluded people who needed support. “The problem is not merely that the algorithm relies on inaccurate and unreliable data about people’s finances,” a report concluded. “Its formula also flattens the economic complexity of people’s lives into a crude ranking that pits one household against another, fueling social tension and perceptions of unfairness.”

“If we maintain an ethical approach, we can continue to advance social change in a responsible, AI-enabled, but human-centric way.”
— Vilas Dhar, president, Patrick J. McGovern Foundation

AI systems lack empathy, context, and history, experts said. The risk in using them is potentially “not seeing the truth and the reality for what it is,” said Steffen Kølbek, global head of IT at the Danish Refugee Council. That can be damaging as organizations assess who needs what support.

As a result of these issues, a USAID spokesperson told Devex that many AI tools just aren't relevant enough yet for the communities in which the agency operates.

At least part of the solution is for development groups, where possible, to curate diverse data sets themselves, build transparent algorithms, and perform impact assessments to protect against unintended biases. Experts also suggested actively engaging intended recipients in the design process.

For example, Hunter Cherwek, vice president of clinical services at Orbis International, said the NGO trained its algorithm for diabetic retinopathy research on people from the demographic it serves in sub-Saharan Africa, the Middle East, and South Asia. “Having that data diversity was a strength and something that we thought about from an ethical standpoint and an equity standpoint before we even started trials,” he said.
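The impact assessments the experts describe can start very simply: disaggregate a model's accuracy by demographic group and look at the gap. The sketch below is a minimal, hypothetical illustration of that idea — the field names (`group`, `prediction`, `actual`) and the regional labels are invented for the example, not drawn from any real deployment.

```python
# Minimal sketch of a per-group bias audit on a labeled evaluation set.
# Each record carries the model's prediction, the true outcome, and a
# demographic attribute. All field names here are illustrative.
from collections import defaultdict

def accuracy_by_group(records):
    """Return the model's accuracy for each demographic group."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        if r["prediction"] == r["actual"]:
            hits[r["group"]] += 1
    return {g: hits[g] / totals[g] for g in totals}

# Hypothetical evaluation records: a model trained mostly on data from
# one region tends to show a large accuracy gap in an audit like this.
records = [
    {"group": "region_a", "prediction": 1, "actual": 1},
    {"group": "region_a", "prediction": 0, "actual": 0},
    {"group": "region_b", "prediction": 1, "actual": 0},
    {"group": "region_b", "prediction": 0, "actual": 0},
]
scores = accuracy_by_group(records)
gap = max(scores.values()) - min(scores.values())
```

A large `gap` does not prove the model is unfair on its own, but it flags exactly the kind of global north/global south performance skew Dhar describes, before a tool is deployed.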
Even with such safeguards, staff members must evaluate what comes out of AI models and examine whether their bias has the potential to create inequities, said Nora Lindstrom, senior director of Catholic Relief Services’ information and communications technology for development, or ICT4D. The focus should be data-informed rather than data-driven decision-making, she said: “We always need to look at it critically and not just take it at face value.”

The security element

Then there are data protection and privacy concerns. “A large focus when we deploy AI solutions is what data travels in and out of our organization,” said Kølbek, because of how data might be used if accessed by others. For example, in a conflict context, information about the location of displaced individuals could put them and the aid they’re receiving at risk.

With the AI regulatory environment still evolving, the Brookings Institution’s Okolo said it is up to each organization to carefully consider deployment. She pointed smaller NGOs toward the U.S. Agency for International Development’s AI Action Plan to see how others in the space are approaching AI. The U.N. Inter-Agency Working Group on AI has also developed recommendations on AI ethical standards.

As a starting point, experts said organizations should ask whether AI is even necessary. “A lot of times people go to a solution like AI because it's shiny,” said Cherwek, rather than consider whether it will solve a specific problem. This is why he believes having an intervention go through ethical review boards and trials is important. “While those things are time-consuming and demanding, it does have outside parties think about how you're going to deploy a technology,” he said.
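One concrete way to control "what data travels in and out," as Kølbek puts it, is to strip sensitive fields from records before they leave the organization — for instance, ahead of a call to any external AI service. The sketch below is an illustrative pattern only; the field names and the notion of a fixed sensitive-field list are assumptions for the example, not a description of DRC's actual systems.

```python
# Minimal sketch of redacting sensitive fields from a record before it
# is sent outside the organization. The field names are hypothetical.
SENSITIVE_FIELDS = {"name", "location", "phone"}

def redact(record, sensitive=frozenset(SENSITIVE_FIELDS)):
    """Return a copy of the record with sensitive fields removed."""
    return {k: v for k, v in record.items() if k not in sensitive}

# Only non-sensitive fields survive redaction.
outgoing = redact({
    "case_id": "A-102",
    "name": "Jane Doe",
    "location": "Camp 3, Sector B",
    "needs_assessment": "food, shelter",
})
```

In a conflict setting, the `location` field in this example is exactly the kind of data that could endanger displaced people if it reached an external system, which is why redaction happens before the data leaves, not after.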
Other risks interviewees listed included the potential to exacerbate the digital divide if AI solutions are rolled out in places where technology or internet access isn’t always available — Okolo recommended organizations consider solutions that don’t require constant connectivity — and the possibility of increased cyberattacks.

Building expertise

The risks can be mitigated, but the sector needs more capacity, Dhar said. “AI development and implementation cannot be left to technology departments. It needs to be a core part of executive strategy and planning and thoughtfully incorporated into the day-to-day operations of staff at all levels,” he said.

Similarly, all staff need to understand that content from an AI solution may contain bias, be oversimplified, and lack certain contexts, Kølbek said. With that in mind, DRC has run a series of webinars on the topic for its staff and developed guidance. Academic institutions and organizations such as the Patrick J. McGovern Foundation also offer programs and research insights to advance nonprofit AI capacity.

If organizations can’t build AI capacity in-house, they should contract it in, experts said — ensuring that the contractor also understands the context of a development project.

“If we maintain an ethical approach, we can continue to advance social change in a responsible, AI-enabled, but human-centric way,” Dhar said.
Rebecca L. Root is a freelance reporter for Devex based in Bangkok. Previously senior associate & reporter, she produced news stories, video, and podcasts as well as partnership content. She has a background in finance, travel, and global development journalism and has written for a variety of publications while living and working in Bangkok, New York, London, and Barcelona.