
The world has less than six years to achieve the United Nations’ Sustainable Development Goals, but only 17% of targets are on track, and over one-third are stalled or regressing. In light of this situation, the need to innovate to accelerate progress has never been more pressing.
Artificial intelligence, or AI, has emerged in the eyes of a growing number of development practitioners as a potential tool for boosting momentum. However, if left unchecked, AI also risks widening inequalities. So how can the right framework be created to leverage the technology for the greater good?
Devex and EY teams surveyed more than 1,000 professionals working in SDG-aligned fields online between February and May 2024 to get their views on AI as a tool for inclusive development, complemented by interviews and roundtable discussions with experts. The findings are outlined in the report, Inclusive innovation: An inside look at AI’s potential to achieve the Sustainable Development Goals.
Encouragingly, two-thirds of respondents said they already use AI tools regularly. Most also agreed that AI can accelerate progress toward the SDGs and aid inclusive development. At the same time, many have yet to be convinced of the technology's merits for these goals, suggesting that much work remains to be done.
Here are five key findings from the report on how the potential of AI can be harnessed for the greater good.
1. Connectivity and accessibility are prerequisites to effective AI
Advanced technologies can provide essential services and opportunities to underserved populations. These benefits are moot, however, for people who cannot access the technologies in the first place.
Some 17% of survey respondents identified limited technology access as a central barrier to using AI tools. This figure rose to 28% in low-income countries. Around a third of the global population also remains unconnected to the internet, creating a sizable hurdle. Yet availability of the technology is not the only barrier to access: more than a third of respondents, for example, also cited a lack of skills for using AI.
Nonetheless, if meaningful access can be achieved, it can lead to a self-reinforcing “virtuous cycle” of AI. In this scenario, improved access leads to the collection of better data, further enhancing performance.
Among survey respondents, the key perceived benefit of using AI for an inclusive approach to the SDGs is improved decision-making through data-driven analysis, followed by improved monitoring, transparency, and accountability. Such technology can also help bridge the digital divide through means such as translation tools and AI-powered smart glasses.
2. Building local ownership of AI solutions is key to truly inclusive development
Local populations must be equipped with AI-related digital skills and knowledge to fully capitalize on the technology’s potential. Knowledge and awareness were identified as the key enablers for AI, followed by access to training. At the same time, fewer than half of respondents in every region worldwide believed they had sufficient skills and knowledge to incorporate AI in their work.
“One of the biggest efforts that many educators have to focus on is ensuring we’re building digital literacy skills from a very young age, including computational thinking and the use of algorithms,” said Carol O’Donnell, director at the Smithsonian Science Education Center.
Participants at the roundtable also emphasized the need for policies that foster workforce development through AI skills building, helping empower populations to harness tools for resolving the most pressing challenges in their communities. Participants believed that initiatives will only be truly inclusive and sustainable if they take a human-centered approach.
To aid innovation in low- and middle-income countries, or LMICs, the Gates Foundation recently awarded a host of grants to local innovators for programs that incorporate AI-enabled large language models to resolve challenges in areas including health, gender equality, and economic development.
3. Strengthening confidence, trust, and transparency within AI and the data it returns helps promote AI that protects rather than exploits vulnerable populations
Data used by AI systems can pose considerable risks when it comes to misinformation, discrimination, and exploitation of personal information. Among survey respondents, the biggest perceived risk in using AI was the potential for misinformation and manipulation of data. However, this concern was more widespread in Europe than in Africa: almost three-quarters of European respondents cited it, compared with 59% in Africa.
Conversely, a higher percentage of respondents in Africa than in Europe were concerned about exploitation of personal data and the risk of job displacement caused by AI. One significant worry is that AI systems are often developed in the global north, meaning they may not be ideal for contexts in the global south and their algorithms can potentially perpetuate biases and discrimination toward marginalized groups.
Respondents identified migrants and refugees as the group they felt was most vulnerable to data biases, with 87% citing this section of society. But high percentages also cited groups including racial and ethnic minorities, people with disabilities, the elderly, and women and girls.
Yet if used effectively, AI can help to uncover hidden patterns of discrimination and mitigate bias, paving the way for more equitable systems in areas such as health care and recruitment. Diverse AI is one example of a volunteer-led organization aiming to grow diversity by supporting a range of communities in AI through collaboration, education, and research. Ultimately, generating inclusive data is key to building trust in AI systems.
4. A strong enabling environment promotes AI-driven inclusive innovation while mitigating potential risks
Just over a quarter of survey respondents identified a lack of policies and regulations as the main barrier to implementing AI-driven initiatives for the SDGs. This was more of a concern in high-income countries, while respondents in LMICs attached equal importance to the lack of access to AI training and skill development.
Meanwhile, respondents called for a global regulatory and strong ethical framework for AI that prioritizes the needs of vulnerable populations, as well as an international watchdog to monitor compliance. In recognition of these needs, major development players are drafting guidelines on the use of AI and technology for the SDGs. The U.N. is, for example, drafting its Global Digital Compact, which outlines shared principles for a secure digital future for all.
Criminals may be more likely to target marginalized communities in countries with less regulation, which means policymaking is needed so that individuals from these communities can safely participate in AI-driven SDG programs. Participants at the roundtable suggested a multicountry, multisector approach to developing a unified ecosystem for AI, seen as key to involving those who most urgently need to benefit.
“If the right policies aren’t put into place, [underserved communities will] be forced into a defensive posture on emerging technology, versus a proactive and positive approach,” said Katherine Boiciuc, EY Oceania chief technology and innovation officer.
5. Creating alliances and opportunities for strategic collaboration can turbocharge the potential of AI for inclusive development
SDG 17 highlights the importance of multistakeholder partnerships for boosting progress toward all the goals. Survey respondents were in strong alignment on this point, with up to three-quarters considering public and private sector collaboration crucial to harnessing the potential of AI for the SDGs. At the same time, nearly a third cited differences in priorities and interests as a top challenge.
Nevertheless, several alliances have already been forged around AI and the SDGs. These include the likes of Artificial Intelligence for Social Innovation, an initiative by the Schwab Foundation for Social Entrepreneurship in partnership with Microsoft. The initiative unites a diverse group of stakeholders with experience in AI and social innovation, including support from the EY organization.
The $2 million Partnership for Good Grant initiative was also launched this year by U.S.-based software company ServiceNow to support nonprofits using AI, backed by private sector collaborators Accenture, the EY organization, KPMG, and NewRocket.
A fully inclusive approach must also involve actors beyond the private and public sectors, such as international forums, civil society, and nonprofit organizations.
To harness the benefits of AI, leaders therefore need to embrace a collective outlook. They also need to prioritize access for disadvantaged groups, foster local ownership of AI, and equip individuals with the knowledge and means to use it responsibly and effectively. Through such means, AI can be leveraged as a force for good and a way to meet the SDGs, leading to an equitable and sustainable future.
To get more insights on unlocking AI's potential to accelerate progress toward the SDGs, read the full report here.
Disclaimer: This publication contains information in summary form and is therefore intended for general guidance only. It is not intended to be a substitute for detailed research or the exercise of professional judgment. Member firms of the global EY organization cannot accept responsibility for loss to any person relying on this article.