SAN FRANCISCO — Even with heavy use of pesticides, pest damage can wipe out the crop yields of smallholder farmers.
Wadhwani AI, a nonprofit research institute focused on artificial intelligence for good, is working to use the technology to help smallholder farmers in India with pest control.
Instead of needing to manually identify and count pests in order to track and manage them, farmers will be able to use image classification models on their phones and get advice on what pesticides to spray and when.
Earlier this month, leaders from the 20 organizations awarded grants through the Google AI Impact Challenge presented some of their learnings on AI for social good at the Google AI Impact Challenge Summit. The general consensus was that while AI can help address societal problems, humans need to be in the loop too, so that computers are not making decisions for people.
One issue is that farmers tend to accept whatever products and services are presented to them by organizations trying to improve their productivity and increase their incomes.
“Farmers for a long time have been the solution acceptors and not the solution questioners,” said Dhruvin Vora, product manager at Wadhwani AI.
The onus is on AI practitioners to explain to the people they serve how the technology works, including what its limits are and how it might fail.
In its work on pest control, Wadhwani AI has experts double-check AI recommendations that come with low confidence scores, because if the model is wrong, the cost of inaction in the face of a pest infestation is worse than the cost of spraying pesticides that turn out to be unnecessary.
AI is used in other ways to aid decision-making. Rainforest Connection records sounds from the forest and listens for threats, sending alerts to responders on the ground.
With support from Google, the organization is building an AI-powered platform for bioacoustic monitoring, which could help a range of stakeholders study the sounds of nature, said Topher White, CEO of Rainforest Connection, who spoke about using AI to fight illegal deforestation at the Google event.
Because Rainforest Connection is not using AI to serve vulnerable populations in the same way as some of the other winners of the Google AI Impact Challenge, the technology does not pose the same risks.
“We almost get a pass because of the work we do,” White said when Devex asked for his views on responsible AI, a major theme of the program.
“Where AI becomes useful is when it deals with large amounts of information and gets us closer to better decision-making. Where people get uncomfortable is when they feel like computers are making decisions for them. Ultimately it is about AI facilitating human interactions because humans have better judgment,” he said.
Keeping humans in the loop across all interventions can mitigate the risk of AI failure: people can double-check results, catch bad recommendations, and guard against unintended consequences.
“You don’t want your algorithm to run unattended,” said Clara Nordon, director of La Fondation Médecins Sans Frontières, or the MSF Foundation, which was awarded a $1.3 million grant from Google to develop a smartphone app to help lab technicians diagnose antibiotic resistance in low-resource settings. “Every step of the process on screen is validated by a human being.”
Organizations interested in working on AI for good must start by following the basic ethical guidelines that apply with or without AI, she said.
In the field of medicine, that includes getting patient consent, being transparent about the ways data is collected and managed, and protecting patient privacy, Nordon said.
AI also brings new ethical challenges, such as the potential for bias in a model, which can perpetuate existing social inequities, she added.
The potential of AI for good cannot be realized without building public trust. But even when organizations have the best of intentions, experts said, they should not expect users to blindly trust the solutions presented to them. Rather, they have to earn the trust of the people being offered these solutions.
And the same rule applies whether those users are farmers or government officials.
Another awardee, Nexleaf Analytics, which created data solutions for cookstoves and vaccines, is using machine learning technology to build data models that can help countries estimate the impact of temperature on vaccines.
Countries can draw on that data to quantify the value of vaccines that are at risk, Nithya Ramanathan, CEO and co-founder of Nexleaf Analytics, said at Devex’s Prescription for Progress event in San Francisco.
She said the impact of AI for good projects depends not so much on the technology itself as on the approach an organization takes with the tools it has created, and how it works with partners and communities.
“How do you put those tools in the government’s hands, but also make sure the government has ownership over their data, has very transparent algorithms, so that they’re not just trusting a black box?” she said.