
Generative artificial intelligence, or GenAI, has captured the attention of everyone from business leaders to individuals. A new survey from Accenture found that 97% of executives said GenAI will be transformative for their companies. GenAI tools are now also the subject of dinner table conversations around the globe. Students are using GenAI for support on their school essays, artists are using it to explore new forms of expression across media, and some people are using the technology to produce so-called deepfakes that challenge our perceptions of reality.
This democratization of AI seems to split people roughly evenly between early adopters looking to innovate and incorporate it into everyday life and detractors feeling the effects of digital fatigue. It is clear, however, that no matter where you stand on AI, everyone is now weighing the philosophical and ethical consequences of continued human and technology integration.
The international development community is not immune from this discussion. Those who may have benefited from longer technology adoption curves in the past are now expected to move at the pace of polycrisis while balancing ever-increasing demands for efficient resource allocation, intersectional insights across previously siloed development domains, and the provisioning of data for the public good. As such, it is appropriate for organizations of all shapes, sizes, and missions to define their visions and strategies for data and AI ethics.
What use cases are impacted by advances in AI technology?
Of the many applications of AI that affect the international development sector, let’s explore three that are material to current operations: GenAI, geospatial modeling, and traceability of value chains.
Current practical use cases of GenAI include content development for marketing or donor relationships. These tools can produce summary themes of program outcomes for reporting purposes or provide customized fundraising requests and messaging for specific donors. This simple usage of AI triggers some interesting questions to consider, such as whether AI-generated content should be disclosed to stakeholders and whether automated systems could ultimately damage the reputation of an organization.
AI-powered geospatial modeling techniques have a clear role to play in enabling the monitoring and evaluation of remote programs, making them more viable and cost-effective. However, similar approaches are used more broadly by corporations, governments, and individuals, so new risks emerge around bias in how data is interpreted and explained through AI insights. Openly sharing these insights with many different stakeholders should happen only with thoughtful consideration of the accountability the international development sector has in protecting vulnerable places, people, and communities from unexplainable, adverse impacts.
When considering supply chain traceability use cases, organizations must prioritize and deepen their understanding of their role as data custodians for stakeholders affected by development activities. The international development sector should advocate for rewarding “first-mile” data originators, as the data they contribute is critical to AI algorithms: for example, a community engaging in a carbon crediting project, or a farmer seeking fair wages for export produce. As global commercial and industrial costs begin to be calculated and priced into some economies, as with the European deforestation legislation, there is a great opportunity for data and algorithms to ensure everyone from producer to distributor benefits from the age of sustainability.
While there is no one-size-fits-all solution to AI risks, all organizations can mobilize activities now that put them on a path to defining and implementing their approach to ethical data.
Accenture Development Partnerships sees a need to broadly increase executive sponsorship of data initiatives. This may require new learning agendas, such as NetHope’s Artificial Intelligence Ethics for Nonprofits Toolkit, or additional coaching opportunities across the organization, with the intent of strengthening shared commitment to financial investment in data and AI while ensuring alignment with the organization’s ethical principles. Recent communications from the Rainforest Alliance describe how it is tackling this opportunity, tying it clearly to core business operations with strong endorsement from senior leadership.
As Accenture Development Partnerships engages with more organizations in the sustainable AI space, two areas of work are emerging. The first is data governance, which needs a deep overhaul: introducing role-based access permissions to data and providing clear transparency on where AI is used within the organization. Where applicable, organizations should also define areas that require more robust management techniques or controls when dealing with particularly sensitive data, such as health or refugee demographics. The goal of these new policies is to ensure that the design principle of keeping a human in the loop is applied when shaping and evaluating new data and AI use cases.
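To make this concrete, here is a minimal sketch of what role-based access permissions to datasets could look like in practice. The roles, dataset names, and sensitivity labels are hypothetical placeholders for illustration, not a prescription for any particular organization or tool.

```python
# Hypothetical sketch: role-based access to datasets, keyed on sensitivity labels.
# Roles, datasets, and labels below are illustrative assumptions only.

ROLE_PERMISSIONS = {
    "program_officer": {"public", "internal"},
    "meal_analyst": {"public", "internal", "sensitive"},  # monitoring, evaluation and learning
    "external_partner": {"public"},
}

DATASET_SENSITIVITY = {
    "annual_report_indicators": "public",
    "field_office_budgets": "internal",
    "household_survey_2023": "sensitive",  # e.g., health or refugee demographics
}

def can_access(role: str, dataset: str) -> bool:
    """Return True if the given role may read the given dataset."""
    allowed = ROLE_PERMISSIONS.get(role, set())
    return DATASET_SENSITIVITY.get(dataset) in allowed

# An external partner is denied the sensitive survey; an analyst is allowed.
print(can_access("external_partner", "household_survey_2023"))  # False
print(can_access("meal_analyst", "household_survey_2023"))      # True
```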
Another area of work is to start equipping organizations with technical solutions that cater to data lineage, such as understanding provenance, data processing, and data access for relevant datasets, that can mask personally identifiable information, or PII, and that enable auditing of the algorithms used. While deploying such tools across an organization’s entire data pool may be a big endeavor, starting with a limited set of use cases is the most practical way to learn about the governance and technology needed to scale AI throughout an organization.
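As an illustration of the PII-masking piece, the short sketch below pseudonymizes names and partially masks phone numbers before records are shared with analysts or AI tooling. The field names and salting approach are assumptions for demonstration; a real deployment would rely on a vetted anonymization library and keep an auditable lineage record of each transformation.

```python
import hashlib
import re

# Hypothetical example record fields; not drawn from any real dataset.
SALT = "replace-with-a-secret-salt"

def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a stable, non-reversible token."""
    return hashlib.sha256((SALT + value).encode()).hexdigest()[:12]

def mask_record(record: dict) -> dict:
    """Return a copy of the record with direct identifiers masked."""
    masked = dict(record)
    masked["name"] = pseudonymize(record["name"])
    # Mask every digit except the last two of the phone number.
    masked["phone"] = re.sub(r"\d(?=\d{2})", "*", record["phone"])
    return masked

print(mask_record({"name": "Amina Yusuf", "phone": "+254700123456", "district": "Garissa"}))
```

Starting with one such transformation on a single use case, as suggested above, is usually enough to surface the governance questions (who holds the salt, who can reverse the mapping, where the lineage record lives) that matter at scale.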
How cutting-edge technology helps rethink organizations' fundamentals
The use of AI within NGOs is still an emerging area, especially when looking beyond pilots. While caution should apply before accelerating further deployment, AI brings a new kind of urgency to organizations with respect to their own data and the volume of data being generated across the sector. The immediate action is to treat data as a key asset, not only because of the insights it might yield but also as a resource that demands a certain level of protection. This is currently the most pragmatic step organizations can take to confidently consider the potential use of AI in their operations.
For more on ADP, please go to our website and reach out to us on LinkedIn.