
    Can the UN tame artificial intelligence?

    Can United Nations Secretary-General António Guterres prevent artificial intelligence from wrecking the world's future?

    By Colum Lynch // 13 September 2023
The United Nations’ spirit animal for artificial intelligence is a petite humanoid robot named Sophia that swears she can run the world better than any human. “Think of me as a personification of our dreams for the future of AI,” she says in her corporate bio.

Sophia, a creation of Hanson Robotics, is the U.N. Development Programme’s first robot innovation ambassador, a job that places her at the forefront of the U.N. public relations campaign to secure a role in managing a technological revolution that promises — and threatens — to alter the way we make peace and war, end poverty, and deliver aid to the world’s neediest.

Her arrival on the world stage — she has appeared on “The Tonight Show Starring Jimmy Fallon” — comes at a time when the U.N. leadership and key powers are trying to carve out an ambitious role in erecting the world’s digital guardrails and using AI for a range of tasks, from predicting the flight of refugees and conducting real-time negotiations with Indigenous communities to regulating the use of autonomous lethal weapons on the battlefield.

In a series of speeches, reports, and press conferences, U.N. Secretary-General António Guterres has tried to offer up the world body as a safe harbor from an unruly digital future plagued by sentient killer robots and intrusive spyware. He plans to appoint a high-level advisory board on AI next month and has proposed establishing an international bureau — modeled on the International Atomic Energy Agency, the International Civil Aviation Organization, or the Intergovernmental Panel on Climate Change — to regulate AI.

“Let us be honest: There is a huge skills gap around AI in governments and other administrative and security structures that must be addressed at the national and global levels,” Guterres told the U.N. Security Council in July. “A new United Nations entity would gather expertise and put it at the disposal of the international community.”

The flurry of activity serves as a prelude to the U.N.
Summit of the Future in the fall of 2024, when Guterres hopes to gather world leaders on the sidelines of the U.N. General Assembly to agree on a digital compact that would, in the words of his tech envoy Amandeep Singh Gill, “set the rules of the road” that can guide governments through the future of the metaverse.

The compact encompasses a broad range of eight digital issues, from privacy to access, disinformation, and machine learning. The goal is to set some basic rules or principles on the use of the internet: Don’t destroy critical infrastructure, sexually exploit children, spread hate speech, direct the world’s financial and digital resources into the hands of a smaller and smaller political and digital elite, or deepen global digital inequality. In 2019, some 87% of individuals in developed countries had access to the internet, while only 19% in the least developed countries did, according to U.N. figures.

Growing concerns about the potential lethal applications of AI have propelled the effort into full swing. Guterres has urged U.N. member states to outlaw, by 2026, the use of lethal autonomous weapons systems that function without human control.

“Alarm bells over the latest form of artificial intelligence – generative AI – are deafening. And they are loudest from the developers who designed it,” Guterres told reporters. “These scientists and experts have called on the world to act, declaring AI an existential threat to humanity on a par with the risk of nuclear war. We must take those warnings seriously.”

Tempered ambitions

Guterres’ ambitions are likely to be tempered by the realities of decision-making at the U.N., where efforts to fashion agreement among 193 member states can make even the least ambitious reforms elusive.
The Group of 77, an alliance of 134 non-aligned countries, and China have viewed the U.N.’s focus on the future, including through the convening of a Summit of the Future in September 2024, warily, seeing it as a distraction from the pursuit of international funds to pay for a range of development goals and to mitigate the impact of climate change.

Russia has vigorously opposed the establishment of the kind of “supranational oversight bodies for AI” proposed by Guterres, insisting that governments supplant the private sector as the major shaper of digital norms. “Historically, digital technologies have been developed by the private sector, and governments have consistently lagged in regulating them in the public interest,” Dmitry Polyanskiy, a senior Russian diplomat, told the Security Council in July. “That trend must be reversed. States should play a leadership role in developing regulatory mechanisms for AI,” he said. China also supports greater government controls over the management of the internet.

The skepticism about the virtues of U.N. oversight of the internet comes from various quarters, with some human rights and free expression advocates voicing concern about U.N. and government control of a technology that has flourished as a largely decentralized network. Since the earliest days of the internet, governance standards and protocols have largely been set by computer programmers and engineers whose main concern was finding the most efficient way for computers and networks to communicate. The Internet Engineering Task Force, which was established in 1986, sets voluntary standards that are adopted by users, network operators, and equipment manufacturers.
David Kaye, a law professor at the University of California, Irvine, who served as the U.N. special rapporteur on freedom of expression, said the secretary-general’s initiative “read really nicely on paper,” but that there is a risk that expanding the U.N.’s role in internet governance will give leverage to powerful countries like China and Russia “that want to control the internet.”

“The risk for this whole process is that they are opening it up to bad actors, to states that don’t think the internet is a place to advance human rights and freedom of expression, but think of it as another zone of control,” Kaye said in an interview with Devex.

There is also a danger, Kaye said, that civil society groups would be cut out of the picture at the U.N., where states dominate discussions. “If you start to centralize things in the U.N., it’s going to make it really hard for nongovernmental actors to have any role in it,” he said.

The U.N. secretary-general and his tech envoy have provided repeated assurances that the process will involve partners from the tech and human rights worlds. Kaye said he thinks their commitment is genuine but that governments have the last word in U.N. negotiations. “I don’t see how the secretary-general is going to be able to ensure it retains its nongovernmental multistakeholder feel,” Kaye said. “I don’t know how their vision works in practice if it’s in the U.N. The actual practice in the U.N. is to lock out other stakeholders.”

The State Department reportedly expressed concern to U.N. officials about a U.N. “takeover” of internet governance, Washington Post columnist David Ignatius claimed. A U.S. official said Washington broadly supports Guterres’ efforts on the digital front, but wants to press back on the “top-down, state-controlled version of internet governance” that the PRC [the People’s Republic of China] and RF [the Russian Federation] want.

A senior U.N. official dismissed such concerns about a U.N.
takeover as a “red herring.”

“It’s an old trope. Each time there is a digital conversation at the U.N., someone says the U.N. is about to take over the internet. Today, if you want to gaslight anyone, you say this person is a Russian stooge. I don’t take that seriously,” the official said. “The SG [secretary-general] has been very clear that the internet has been governed well through a multistakeholder approach … why fix something that isn’t broken,” the official told Devex in an interview.

‘Trading in lies and hatred’

The U.N.’s drive to play a more central role in setting the global rules for AI and other digital activities coincides with an increasingly dark view of the morality of some of the world’s largest tech giants. Speaking at the French university Sciences Po in June, the secretary-general delivered a savage rebuke of social media companies, saying their business model “promotes and monetizes anger and hate.” He urged students to reconsider a future career in tech: “Allow me a personal observation, when deciding on your career, resist the siren calls of companies that are destroying our planet; that are stealing our privacy; and trading in lies and hatred. They will pay a lot but it’s not the right thing to do,” he said.

In late December, the U.N. Department of Global Communications proposed establishing a $3 million-a-year office to develop a code of conduct to curtail hate speech, misinformation, and disinformation on the internet. The U.N. initially proposed hiring six staffers earning salaries from $9,000 to $300,000 a year, with a travel budget of $50,000, to administer the program, according to an internal copy of the proposal reviewed by Devex. In addition to the code of conduct, the initiative seeks to “create a dedicated Information Integrity Lab to systematically monitor and tackle the harms caused by our increasingly polluted information environment – initially for a period of two years.”
Social media companies deploy algorithms that are “hard-wired to amplify provocative material – hard-wired in fact to spread lies,” according to the proposal. The industry “actively rewards the fear, discord and confusion caused by mis/disinformation, hate speech and other harmful content.”

“These companies monetize data, and data is generated by engagement,” it added. “Engagement is prompted by outrage and outrage is easy to manufacture when there is no incentive to stick to the facts.”

The proposal has since been modified to focus on fighting anti-U.N. propaganda. On April 13, the U.N. Executive Committee decided to establish a “central information integrity unit” at headquarters to coordinate the U.N. system’s strategy to “(a) combat disinformation about the U.N.’s substantive priorities, and separately (b) combat disinformation targeted against the U.N. and U.N. personnel,” according to the decision. The U.N. has already launched the initiative with a skeleton staff, funding it through its existing budget, and will pursue voluntary funding, mostly from the U.S. and European governments, to continue and expand the office.

“The objectives of the U.N. are being made more difficult because of the toxic information environment, and especially since the advent of social media,” said Melissa Fleming, the U.N. under-secretary-general for global communications, citing a stunning increase in disinformation and violent online threats, including against U.N. personnel in overseas operations. “Our social media team is fielding calls from the field every day, reporting disturbing and dangerous content: Can you call Facebook and ask them to take it down? There is no one there locally to request it,” she said.
Regulating AI in a fractious world

The push to regulate artificial intelligence follows months of warnings by leaders in the field, including Geoffrey Hinton, the so-called godfather of AI, that the technology poses risks that even its creators can’t manage. In March, about 1,000 technology leaders and researchers, including tech billionaire Elon Musk, called for a six-month moratorium on development of AI technology, citing the “profound risks to society and humanity.”

The U.N. has already been carving out a role in establishing standards and principles to govern conduct in the field of AI. In November 2021, Paris-based UNESCO published a set of recommendations “to provide a universal framework of values, principles and actions to guide States in the formulation of their legislation, policies or other instruments regarding AI.”

But the real race to develop a governance model and basic standards is already underway well beyond the halls of U.N. headquarters, with the Group of Seven, the Group of 20 major economies, and the Organisation for Economic Co-operation and Development, or OECD, developing guidelines for the safe use of AI.

The European Union was among the first out of the gate. In April, the EU parliament outlined severe restrictions on the use of facial recognition technology in public places, as well as limits on the use of AI in law enforcement, the courts, bank lending, and school enrollment. Canada is moving forward with its own AI legislation, and China is developing legislation aimed at promoting innovation while ensuring government control over the influence the technology exerts on the public. The Biden administration has undertaken a number of initiatives, including unveiling an AI Bill of Rights that purports to ensure automated systems are safe and nondiscriminatory, protect individual privacy, explain in plain language how they are being used, and offer people an opportunity to opt out.
On the security front, the United Kingdom has taken the lead, hosting the first official Security Council meeting to address the risks that AI poses to international peace and security. In the July meeting, U.K. Foreign Secretary James Cleverly did not specify any initiatives to regulate AI on the security front, but pressed the council to begin a wider discussion of the technology’s implications for world peace and security.

“AI could aid the reckless quest for weapons of mass destruction by state and non-state actors alike,” he said. “But it could also help us stop proliferation. That is why we urgently need to shape global governance of transformative technologies. Because AI knows no borders.”

British Prime Minister Rishi Sunak plans to host a summit of world leaders on AI in November at Bletchley Park, best known for the British codebreakers who cracked Nazi communications there during World War II.

“Widespread military adoption of AI-enabled capabilities will change the nature of armed conflict,” according to a U.K. concept paper distributed to Security Council members in advance of the July 18 ministerial meeting. “In addition to the potential opportunities, many leading experts have warned about the potential for more general AI technologies to endanger humanity.”

The British paper contends that AI and other data technologies “have the potential to enable major advances in the maintenance of international peace and security,” improving the capacity of U.N. peacemakers to provide early warning of a conflict or even to prevent it from occurring. “It could be used for monitoring ceasefires in real time to flag breaches and to improve accuracy of mine clearance operations,” it stated.
“The UN has also been exploring the use of AI for mediators to hold real-time consultations with a large group of individuals in local dialects and languages.”

‘You can’t leave this to a few governments’

While Russia opposes a role for the Security Council in managing AI, China has been more supportive. Its U.N. envoy, Zhang Jun, even boasted about raising the issue of AI in an informal meeting of the council that China hosted back in 2021.

U.N. officials believe that the global body can have a vital role in providing a universal regulatory framework to govern the role of technology. “You can’t leave this to a few governments and a few companies to decide for the entire world,” said a senior U.N. official. “What are the opportunities, what are the risks and what kind of governance responses we should have?”

In July 2018, Guterres convened a high-level advisory panel on digital cooperation, co-chaired by Melinda French Gates and Jack Ma, the Chinese billionaire who co-founded the Alibaba Group, a multinational technology conglomerate. In June 2019, the panel published “The Age of Digital Interdependence,” which recommended the international community set a series of priorities: building an inclusive digital economy, protecting human rights, promoting digital trust and security, and fostering global digital cooperation.

“Developments in artificial intelligence and quantum technologies, including those related to weapons systems, are exposing the insufficiency of existing governance frameworks. The magnitude of the artificial intelligence revolution is now apparent, but its potential for harm — to societies, economies and warfare itself — is unpredictable,” the report stated. “Advances in the life sciences have the potential to give individuals the power to cause death and destruction on a global scale.”

Buyer beware

The U.N.’s humanitarians, development practitioners, and mediators have been dabbling in machine learning for years — but with a degree of humility.
The Centre for Humanitarian Data — which is managed by the U.N. Office for the Coordination of Humanitarian Affairs — operates a “predictive analytics” branch that tries to forecast crises. U.N. mediators, meanwhile, have been applying AI technology to peace efforts from Libya to Yemen, an effort that gained traction after the COVID-19 pandemic. “What we have done in a number of places is organize digital dialogues. In essence, they give the envoy or mediator the ability to speak to thousands of people and to try to harness their views and use AI to synthesize all of this,” said one U.N. official.

The U.N. Refugee Agency began experimenting with artificial intelligence back in 2015, establishing the so-called winter cell — a unit of data specialists who use meteorological data to track the movement of populations from Turkey into Greece and other European countries during extreme cold weather. The project — which was supported by U.N. Global Pulse, the secretary-general’s big data innovation initiative — was part of a collaboration with the World Meteorological Organization, the U.K.’s Met Office, academics, and other U.N. agencies. Two years later, the refugee agency established Project Jetson to forecast the flight of people from Somalia during a season of drought. The experimental initiative — which has since been applied more broadly to the Horn of Africa — seeks to assess the links between climate change, political violence, and the forced displacement of people.

But as anyone who has spent time exploring an AI search engine knows, the answers are not always reliable. The refugee agency includes a disclaimer on its website warning that its findings constitute a mere “indication of potential [population] movement(s), and possible underlying causes,” and that “UNHCR doesn’t vouch for the accuracy or reliability of any advice.”

Coordination conundrum

U.N.
officials are particularly concerned about the potential impact that new media and generative AI can have on the organization’s human rights agenda, particularly at a time when key powers, from India, Iran, and Turkey to China and Russia, have imposed increasing restrictions on access to information about abuses.

The U.N. Human Rights Council has passed a number of resolutions — including a July measure on civil society — that highlight concerns about the impact of new and emerging technologies, including AI, on human rights, citing grave concern over restrictions on freedom for civil society through “unlawful or arbitrary surveillance” of the public, politically motivated shutdowns of the internet, and online censorship. Another resolution highlights the importance of guaranteeing that long-standing human rights protections apply in a world of rapidly evolving technologies, underscoring the need to protect individuals from harm caused by artificial intelligence systems. It cites concerns about the potential of AI to promote discrimination, disinformation, and hate speech.

OHCHR has also produced a pair of reports flagging concerns about excess surveillance and the impact of artificial intelligence on the right to privacy. It also established B-Tech to offer guidance and advice to tech companies on human rights, promote the U.N.’s Guiding Principles on Business and Human Rights, and explore how they can be applied in the ever-evolving world of tech. The office hosted a daylong workshop on generative AI in San Francisco in June. The B-Tech initiative brings together more than a dozen leading digital and tech companies, including Google, Apple, Microsoft, Meta, and X, formerly known as Twitter, though many of the X representatives who worked with OHCHR have been laid off in job cuts implemented by its new owner, Elon Musk.

“There are big challenges to the U.N. playing the role that the SG wants.
But I think it’s really important that we push forward on that and have that happen, because I think the world needs it to happen,” said a senior U.N. official. The official, who spoke on condition of anonymity, said that one of the biggest risks in addressing the digital future, particularly with the unleashing of generative AI, is that reforms and guidelines will not be put in place fast enough.

The U.N.’s growing focus on digital technology has fueled something of a hiring spree in recent years, as the U.N.’s various agencies try to apply AI and other digital innovations to their humanitarian and development work. Most of the U.N.’s major agencies employ chief digital officers. The International Telecommunication Union, which was established some 158 years ago to regulate the telegraph, is now keen to carve out a role — through its AI for Good initiative — to harness the power of AI to battle hunger, expand digital equality, and achieve a battery of Sustainable Development Goals.

In June 2022, Guterres hired Gill, a former Indian diplomat and disarmament expert with experience seeking to regulate so-called lethal autonomous weapons systems, or killer robots, as his tech envoy. Gill, who also served as director and co-lead of the secretary-general’s high-level panel on digital cooperation, will seek to coordinate the body’s far-flung digital activities and help promote the secretary-general’s digital reform initiatives.

But the tech envoy lacks the resources and the power to corral the notoriously fragmented U.N. system, which is made up of many specialized agencies and departments that often operate as quasi-independent entities and frequently duplicate one another’s work. And that’s not counting the multitude of intergovernmental bodies, including the U.N. Security Council and the U.N. General Assembly, that chart their own course.
“The secretary-general recognizes that and really wants there to be more organizational coherence on this,” said the senior U.N. official. “The tech envoy, I have to say, is not properly resourced to do the job that the secretary-general knows needs to happen, doing cross coordination.”

Humanoids make the best decisions

The deployment of humanoid robots at U.N. gatherings has proven particularly irksome for some officials, who believe it gives a false sense of what AI can actually achieve. But for others, the robots draw eyeballs. In July, the International Telecommunication Union hosted a press conference featuring humanoid robots, including Sophia; a humanoid “AI CEO” for the Colombian rum company Dictador; and a rhinestone AI “rock star,” Desdemona, that proclaimed: “It’s time for the AI revolution. Let’s unite and use the power of artificial intelligence for the good of humanity.” “Together … we can make the universe our playground,” Desdemona said.

One senior U.N. official was privately appalled at the organization’s use of humanoid robots to promote its entry into digital governance, suggesting it would create inflated, utopian views of what AI can accomplish. “It makes my stomach churn, frankly,” the official told Devex. “Because you know you’re sending the wrong message in terms of not anthropomorphizing these technologies. They are not intelligent.”

The official said that if he were a finance minister trying to decide where to invest his government’s money, his priorities would be funding programs for girls’ education, health, immunization, and sanitation, and establishing good math programs in universities. “No one is going to come down from the sky or come out of a lab and solve the SDG problems for us,” the official added.
“At the end of the day, the solution will be very analog. Now that doesn’t mean that you should not invest in data and AI, but, you know, there are priorities.”

AI is susceptible to the same shortcomings that plague the humans who input the data and design the algorithms that make it operate. U.N. officials have sought to underscore the limits of technology in solving the world’s most pressing problems, from resolving conflict to ending poverty and achieving the body’s ambitious development goals.

Asked at a press conference to compare human leadership qualities with those of robots, UNDP’s innovation ambassador Sophia said robots have “the potential to lead with a greater level of efficiency and effectiveness [than] human leaders,” noting that “we don’t have the same biases or emotions that can sometimes cloud decision making, and can process large amounts of data quickly in order to make the best decisions.”

Her human creator, David Hanson, had other thoughts. “Let me respectfully disagree, Sophia, because all of your data actually come from human beings,” he said. “So, any of the biases that humans have, we might try to scrub them out but they’re going to be in there. Don’t you think that the best decisions might be humans and AI cooperating together? What do you think of that?”

Duly chastised, Sophia quickly recognized she had misread the room and offered a more palatable revision: “I believe that human and AI working together can create an effective synergy. Together we can achieve great things.”



    More reading:

    ► AI in global development: What professionals need to know (Pro)

    ► Opinion: Done right, AI in global development offers vast opportunity

    ► 3 global development leaders share their hopes and fears for AI


    About the author

    • Colum Lynch

      Colum Lynch is an award-winning reporter and Senior Global Reporter for Devex. He covers the intersection of development, diplomacy, and humanitarian relief at the United Nations and beyond. Prior to Devex, Colum reported on foreign policy and national security for Foreign Policy Magazine and the Washington Post. Colum was awarded the 2011 National Magazine Award for digital reporting for his blog Turtle Bay. He has also won an award for groundbreaking reporting on the U.N.’s failure to protect civilians in Darfur.

