Can donors support AI for global development that does no harm?

SAN FRANCISCO — Artificial intelligence has significant potential to address a wide range of international development challenges using computer vision, natural language processing, content generation, and robotics.

Despite this promise, emerging AI technologies also pose major challenges, and development practitioners who see the potential of AI in international development are increasingly asking what responsible use looks like and exploring how they can mitigate the risks.

“AI is unfortunately not a standardized ‘plug and play’ tool you can simply apply at different points of the program cycle,” reads a new report on AI for international development, published by Results for Development. “And the same reasons that make it attractive to development actors also render it a potential minefield when considering their commitment to ‘do no harm.’”

Development practitioners need to own and be accountable for AI development and deployment, while also working with partners in low- and middle-income countries to take greater control over their own data and develop solutions locally, the report explains. The report — “Artificial Intelligence in International Development: A Working Paper” — was developed in partnership with the International Development Innovation Alliance, a collaboration platform for donors that support innovation in international development. In January 2019, IDIA launched a new working group on AI to provide a forum for members to collaborate around the responsible deployment of these technologies.

Now, as members of IDIA support a range of projects leveraging AI for the Sustainable Development Goals, they are also starting to take on some of the challenges these technologies present — from bias and discrimination to accountability and transparency.

Forming partnerships across sectors

The U.K. Department for International Development, like many of its peers in IDIA, has supported AI work in sectors including health, education, and agriculture.

DFID is particularly drawn to the possibilities AI offers in global health, including machine-vision driven diagnosis, data-driven triage, and predictive modeling of epidemics, Zack Rubens, policy manager for emerging policy, innovation and capability at DFID, told Devex via email.

But DFID relies on partnerships with groups that have more AI expertise to navigate what responsible use looks like. For example, DFID has partnered with the U.K. Office for National Statistics to use its data science and AI expertise to tackle the SDGs, build partner capacity, and improve international aid, Rubens explained.

Alex Jones, head of emerging futures and technology at DFID, said there is an urgent need for donors to understand not only the benefits but also the risks of AI in international development.

“Burying our heads in the sand isn’t going to make the difficult issues around the use of AI go away, so there is a compelling case to engage,” Jones said. “The challenge for organizations such as DFID is on how to do this in a way which increases inclusion and maximises the benefits for the world’s poorest whilst navigating the risks.”

The new IDIA working group provides a unique forum for DFID, USAID, OECD, and others to consider the ethics of AI for international development, learn what others have done to advance collaboration in this space, and discuss ways to make the move from principles to practice.

Improving the quality and relevance of data

The Results for Development report outlines the challenge of data availability, quality, and accessibility.

“AI is only as good as the data that it is based on — and in the development world, challenges around the accessibility, quality, depth, diversity, and volume of data are particularly common,” the authors explain.

The Organization for Economic Cooperation and Development points to one way donors can improve the relevance of existing development data: using AI to monitor progress in SDG financing across its members.

“Everyone talks about the risk of using big data and artificial intelligence ... but there is an equally big risk of not using big data and AI,” Tariq Khokhar, managing director and senior data scientist at The Rockefeller Foundation, told Devex.

Development organizations interested in leveraging AI need to begin by asking whether it is even the optimal solution to the problems they face, said James Berryhill, innovation specialist at OECD’s Observatory of Public Sector Innovation, which works to help public sector innovators explore AI and other technologies.

“Right now, we’re in a time when many of the leading contributions are at the ‘strategies’ and ‘principles’ level,” he wrote in an email to Devex. “Strategies and principles are fantastic in that they help to align people under common aims. However, they aren’t really sufficient in actually designing and deploying AI on the ground. We are just now seeing how these principles can be fully actualized.”

Building capacity in the sector and globally

The report also highlights how a lack of capacity to engage with or use AI poses a threat to the future of AI for international development: “Globally, there is a shortage of knowledge and skills in the field of AI, as well as a concentration of AI expertise in the hands of a select few (white males based in countries in the Global North) which is limiting its spread across more geographically and ethnically diverse groups,” the report reads.

There is a need to develop artificial intelligence, machine learning, and data science capacity both within the development sector and in LMICs, said Aubra Anthony, strategy and research lead at the USAID Center for Digital Development.

"If we care about how these tools affect people and their environment, we need to engage people in the creation of the tools themselves,” she said. "Those who are invested in solving a particular problem who want to use AI or ML [machine learning] to solve that problem really need to feel empowered to engage not just with the problem but also with the solution that someone else might be developing.”

The agency has shared some of its own thinking around the peril and promise of artificial intelligence through reports such as “Reflecting the Past, Shaping the Future: Making AI Work for International Development” and “Considerations for Using Data Responsibly at USAID,” which helped inform the Results for Development report.

It is critical that as donors explore the promise of artificial intelligence, they remain fully aware of the ramifications of its misuse, and stay engaged in shaping the technology in order to avoid those negative outcomes, Anthony said.

“There's a lot of hype around AI in development, and some of it is very merited and absolutely worth exploring. But what is challenging is striking the right balance between enthusiasm and excitement and a healthy dose of skepticism. That’s one of the reasons why I’m excited about how IDIA has approached this,” she said.

Update, August 2, 2019: This story has been updated to clarify the work USAID has done on AI.

About the author

  • Catherine Cheney

    Catherine Cheney is a Senior Reporter for Devex. She covers the West Coast of the U.S., focusing on the role of technology, innovation, and philanthropy in achieving the Sustainable Development Goals. She also frequently represents Devex as a speaker and moderator. Prior to joining Devex, Catherine earned her bachelor’s and master’s degrees from Yale University, worked as a web producer for POLITICO and a reporter for World Politics Review, and helped to launch NationSwell. She has reported domestically and internationally for outlets including The Atlantic and The Washington Post. Catherine also works for the Solutions Journalism Network, a nonprofit that trains and connects reporters to cover responses to problems.