SAN FRANCISCO — Paula Goldman is wrapping up her second week at Salesforce, where she is leading efforts to evaluate the ethics of how technology is used at one of the fastest-growing enterprise software companies.
Previously, she worked with the philanthropic investment firm Omidyar Network, where she led a new initiative called the Tech and Society Solutions Lab, which works with technologists to build responsibility into their core product and business decisions.
While Goldman has always been enthusiastic about the positive impact that technology can have, she has become increasingly concerned about its potential for harm.
“This is all of our problem, it’s all of our legacy as an industry, and so we all need to own this as ours,” she said in a session last fall at Techonomy that focused on the need for a culture of ethics within tech companies.
Conversations on issues such as privacy, bias, and security are happening in Silicon Valley, spurred in part by last year’s revelations about the unintended consequences of Facebook’s technology. But experts from the World Economic Forum and elsewhere say these discussions need to be more global in nature, extending to the countries that could have the most to gain or lose depending on how these emerging technologies are developed.
A central question at the upcoming World Economic Forum Annual Meeting in Davos, Switzerland, is how to prevent the misuse of technology, and the conference will provide an opportunity for leaders in the global development community to build partnerships in order to embed social responsibility in the development of new technology.
While Salesforce is the first company to hire someone as its chief ethical and humane use officer, other tech companies are making ethics and responsibility more of a focus.
For example, Microsoft CEO Satya Nadella, who is a co-chair of the meetings in Davos, has talked about how artificial intelligence brings great opportunity but also great responsibility, meaning tech companies need to be grounded in ethics with all of their decisions.
And Google recently launched a new program called “AI for Social Good,” along with the Google AI Impact Challenge, which invites nonprofits, academics, and social enterprises from around the world to submit proposals eligible for funding and other support.
“It’s very easy to sit here in San Francisco talking about AI, but let’s make sure that we sit in places around the world and talk about AI as well,” Kay Firth-Butterfield, head of AI and machine learning at WEF, said in a talk on building ethical AI.
For example, WEF has partnered with UNICEF on how to advance the rights of children in the AI age — focusing on questions like the implications of bringing robots into the classroom, how smart toys that talk back might impact creativity for better or for worse, and ways to protect children’s privacy as AI-powered surveillance technologies become more far-reaching.
In an interview with Devex, Firth-Butterfield said that low- and middle-income countries have an opportunity to become leading markets for emerging technologies, as Rwanda has done with drones, due in part to a partnership with WEF.
But she also emphasized the importance of a national AI strategy, explaining that “if you haven’t thought about your strategy around data and AI, you don’t know what you’re saying ‘yes’ to.”
In Davos, government officials will talk about their plans to gain a competitive edge in the deployment of AI technologies. But a key question, posed in a session on the agenda, is: “What shared governance principles will help ensure that AI is deployed safely and ethically?” When conversations on the impact of rapid technological change focus on developed countries, they leave LMICs out, even though those countries might benefit most.
Unfortunately, affected communities are often missing from the conversation on technology, said Terah Lyons, executive director of the Partnership on AI. She spoke at Techonomy alongside Goldman, representing a multistakeholder initiative of more than 80 institutions working on ethical AI development. Lyons said tech companies cannot drive responsible technology outcomes alone, which is part of the reason the United Nations Development Programme joined leading tech companies in the Partnership on AI.
WEF is working with governments around the world on what it describes as agile governance for emerging technologies, and recently launched another center focused on the Fourth Industrial Revolution in India, on the model of its center in San Francisco, with plans to expand in other markets.
Governments and NGOs can be significant enablers of AI for good, said Michael Chui, a partner at the McKinsey Global Institute, who co-authored a recent report on ways AI can be used to tackle social problems.
“One of the bottlenecks that is preventing AI use from scaling up for social impact is around data accessibility: AI needs masses of data to be effective, and governments are some of the biggest generators and collectors of data,” he wrote in an email. “So there is a role as regulator, for sure, including around privacy, but also around ensuring that AI’s potential can be used more fully.”
While a number of NGOs are starting to incorporate more AI into their everyday work, those examples tend to be few and far between, he said, advising NGOs to hire people with expertise to make AI work in the field.
Technology runs the risk of “making the most vulnerable people on the planet even more vulnerable,” said Derek O’Halloran, who leads WEF’s Future of Digital Economy and Society initiative.
“This is not a tech industry problem,” O’Halloran said. “This is everybody’s problem.”
He said he is concerned by how many organizations are taking what he described as an ad hoc rather than a strategic approach to technology, explaining that society is rewriting the social contracts between individuals and organizations, and all sectors must join the conversation.