Artificial intelligence has crept into the social sector — from the criminal justice system, to disaster management, to health care. In global development, we have begun using it to predict who will drop out of HIV care, identify poor households in data scarce areas, and determine why women are dying in childbirth.
Advances in areas such as reinforcement learning, deep learning, and causal machine learning are already offering multiple ways to advance social good.
“The AI field is like a hip teenager in a rich household, fueled by unlimited money, as the tech giants race to optimize the next click online.”
— Sema Sgaier, co-founder and executive director, Surgo Foundation

As a scientist who applies AI in my development work, I find the enormous potential exhilarating. Yet there remains a startling lack of collaboration between these two worlds.
Two key elements are missing that we must urgently address.
The right people at the table: At a recent conference on AI, I was shocked to see no government represented, civil society barely present, and not a single large global donor. Above all, the voice of the global south is missing in these discussions, dominated by techies from corporates such as Google, Facebook, and Bloomberg, all unveiling the latest algorithms and their applications to industry. With AI rapidly affecting global development, we need to proactively engage to shape the work.
The right projects: Development experts working in resource-poor settings are poised to embrace AI-driven solutions. But most of the papers on development topics come out of universities working on projects where the data lies, not where the most important questions lie. With AI shaping our lives, we can’t let the technical community alone decide which problems should be solved and how.
From my perspective, the AI field is like a hip teenager in a rich household, fueled by unlimited money, as the tech giants race to optimize the next click online. But this teenager is now being reminded by the more mature scientists in the field that there are norms of doing good science. And just like an adolescent, it is going through a period of crisis and reflection.
The fundamental issue is the question of trustworthiness, which is directly linked to the ability to explain what is happening. Often, data scientists are not able to explain why an algorithm makes a recommendation. The layers of computation in the black box are so deep that we can no longer make sense of it.
Other times, recommendations are warped, as with Amazon’s résumé-screening tool, which was biased against hiring women because it was trained largely on data from men. Similar job-matching technologies are already emerging in Africa, where data carries the same biases and gender-disaggregated data is lacking, putting women and girls at an extreme disadvantage.
These interrelated issues of trustworthiness, explainability, and bias are particularly acute in AI for development, where the most important social issues are a matter of life and death. That is why I believe global development leaders must urgently take these three steps:
1. Educate ourselves
If we want AI applied to the problems that matter to us, and applied appropriately, we need to become much more informed customers of this new technology. Right now, AI is a magic box, and we are all allured by it. We need to understand that it is not a silver bullet, but a tool that can complement the other analytic and technological tools we already have.
We need to be aware of the kind of problems it can solve and its limitations. Just like statistics, AI and machine learning should become part of any curriculum that trains practitioners for global development.
In the meantime, we should all take responsibility for learning more. There are excellent resources online, such as a recent report by the U.S. Agency for International Development on making AI work for international development.
2. Advocate for transparency
Before implementing a recommendation made by an algorithm, we must insist that data scientists can explain the math. Our data on developing countries is messy, riddled with holes, mistakes, and extreme disparities. We need to be careful that these algorithms bridge the divide rather than widen it.
We should interrogate the data sets used and look for possible biases. If we are the generators of the data, the onus on us to communicate its pitfalls and biases is even greater. In resource-poor settings, where trained personnel are often in short supply in sectors such as health care, it’s easy to imagine how we may leapfrog to AI-driven solutions.
But this poses the risk of having no human to sanity-check the decision.
3. Shape the agenda
Most importantly, we need to step up and shape the agenda so that the application of this technology is aligned with our priorities. We need to pay close attention to the developments in this field to ensure our voices are in the room, asking the hard questions and pushing scientists to make the right choices.
We should also organize forums that purposefully bring the development and AI technical community together, and proactively reach out to AI experts to share our data so we can collaborate on problems that are important to us.
AI is a part of global development and will only increase in prominence. It’s time to swiftly get up to speed and play an active role in shaping how this field works for us, before it’s too late.