The World Economic Forum on Thursday launched the Global AI Action Alliance to maximize the benefits and minimize the risks of artificial intelligence.
The initiative aims to “speed up the adoption of inclusive, trustworthy, and transparent artificial intelligence,” Kay Firth-Butterfield, WEF’s head of AI and machine learning, said during the virtual Davos Agenda this week.
The Global AI Action Alliance is one of a number of efforts focused on ethical AI, given the potential dangers of tools like facial recognition. What makes the new initiative unique is the involvement of stakeholders from across sectors, including groups that have not historically been part of the AI ecosystem, said Vilas Dhar, president of the Patrick J. McGovern Foundation.
Only a small subset of society, including academics, technologists, and policymakers, is involved in the development of AI technologies, Dhar said during Thursday's panel.
“It’s not my view that they’re necessarily in the wrong hands,” he said. “But it’s certainly the case that they are in too few hands to shape our common future.”
Dhar said he sees the Global AI Action Alliance as a forum for a diverse set of actors to come together to create AI in support of the Sustainable Development Goals, and said one of his metrics of success is to see concrete projects developed through this collaborative process in the first year.
“The inhibitor is trust. Can we trust these technologies to truly do what we’d like them to do?”— Arvind Krishna, chairman and CEO, IBM
“The idea is to bring together not just technology creators or policy regulators, but the voice of civil society, to come in and say let’s start first from the problem we want to solve, the opportunity to create a better world, and then build the products we need in order to get to that outcome,” he told Devex.
The alliance plans to host an AI youth council to encourage young people to get involved in the development of ethical AI.
“GAIA will be an important vehicle for us to see this world through children’s eyes,” Henrietta Fore, executive director of UNICEF, said on the panel. “Principles should be taken into practice, and that’s what we and everyone in this session should be thinking about. How do we actually take this into the real world? How do we design children into the use of these systems? What about government regimes? How do we reflect human and child rights and diversity and inclusion?”
She stressed the importance of collaboration between the Global AI Action Alliance and other efforts underway to support AI for good. Dhar mentioned there are plans underway to collaborate with efforts like the Global Partnership on AI.
“But there’s a big inhibitor,” said Arvind Krishna, chairman and CEO of IBM. “The inhibitor is trust. Can we trust these technologies to truly do what we’d like them to do? To do what good policy would dictate they should do?”
Every organization developing or using AI has a responsibility to make it a force for good, Krishna said. Businesses must be accountable for how they develop and use AI, including all the data associated with it, he said. And governments play an important role in regulation, Krishna added.
“You can’t regulate with a blunt hammer because that stops innovation,” he said. “So you should regulate based on the risk. If the risk is going to be small, be light on regulation. If the risk could be heavy, be heavier on regulation.”
Krishna said the Global AI Action Alliance will play an important role in helping businesses and governments navigate these decisions.
The global conversation on AI has focused on building responsible and ethical AI products, Dhar said.
“For me, it’s really important we refocus that conversation to focus on responsible and ethical society, powered by those products,” he told Devex.
The Global AI Action Alliance is one way the foundation seeks to shift that conversation. Dhar said the effort will help inform its grantmaking process and serve as a forum where its grantees can access a community of support.