Gates Foundation's global health tech chief on the future of AI use
Devex sits down with Zameer Brey, lead for technology diffusion in global health at the Gates Foundation, for an in-depth discussion about the potential for creating more equitable AI tools and solutions for global development.
By Stephanie Beasley // 09 June 2023

More and more, the philanthropy sector is examining how it can integrate artificial intelligence into its daily operations and grantmaking.

It's probably unsurprising that the Bill & Melinda Gates Foundation is among those taking the lead in this area. Its namesake co-chairs spent decades advancing new technologies at Microsoft, where Bill Gates was co-founder and CEO until 2000 and Melinda French Gates was once a product developer and manager.

"The development of AI is as fundamental as the creation of the microprocessor, the personal computer, the Internet, and the mobile phone," Gates wrote in a blog post earlier this year. "It will change the way people work, learn, travel, get health care, and communicate with each other."

"Entire industries will reorient around it. Businesses will distinguish themselves by how well they use it," he added.

The Gates Foundation is also reorienting its work around new AI tools and solutions, particularly within some of its global causes, such as health care access in low- and middle-income countries and efforts to fight climate change. It has said it wants to help ensure that AI is used equitably and ethically around the world.

As part of this focus on AI, the Gates Foundation has tapped Zameer Brey as the first head of technology diffusion in its global health division. He leads the organization's approach to promoting the equitable and safe use of AI to advance equity as well as global health and development outcomes. He also oversees the foundation's AI-related "Grand Challenges" grant opportunities and the establishment of the external AI advisory board and guiding principles, among other initiatives.

Last month, the Gates Foundation launched a request for proposals under its Grand Challenges initiative to encourage equitable AI use and ensure low- and middle-income countries "are included in the co-creation process" for AI tools and approaches. This includes large language models like ChatGPT, which understand and generate text in a human-like manner. More than 1,300 proposals were submitted from nearly 50 countries before the June 5 deadline, the foundation told Devex. Each winning project will receive a $100,000 grant out of a total $3 million budget.

Internally, the foundation recently established a task force to "drive a responsible and organized approach to exploring the foundation's engagement with AI usage" and is creating an AI ethics and equity advisory panel with external experts. Further, the foundation has drawn up an initial set of guiding principles for its work with AI, including that it regularly conducts privacy and security assessments.

"We recognize the inherent risks involved with AI and the need to address them responsibly," foundation CEO Mark Suzman said in a statement about the AI principles.

Brey, the new technology diffusion lead, was the foundation's tuberculosis program lead in South Africa before switching to his new role just a few weeks ago. He had been so busy that he hadn't had time to update his LinkedIn profile when Devex caught up with him earlier this month.

This transcript has been edited for length and clarity.

First, can you tell me about your new role? What exactly does a tech diffusion lead do?

This role is really trying to put a focus on making sure we understand the challenges and the priorities in underresourced communities, vulnerable communities, and match that with a set of capabilities that technologies offer.
It's really starting by giving voice and agency to where the biggest challenges are and then trying to figure out, in the solution set, in the set of tools that the foundation is investing in and may continue to invest in, how do we see the connection. Put very simply, I just join dots.

I have a clinical background and have worked in clinical situations, but also in public health and development settings, where we see that when technology is not developed in an intentional way and is not context appropriate, it deepens inequity and leaves the poor behind. And even with the best intentions, those technologies just don't benefit the intended recipients.

How would you characterize the foundation's approach to AI use, both internally with your partners and in your grantmaking?

I think we're learning fast, and we're learning forward. The starting premise for the work and a set of leadership-level discussions around what should be a leading edge for the Gates Foundation's engagement in AI … around equity is how do we make sure that this is not another technology that doesn't take into account the needs of the most vulnerable, that denies access to individuals that are economically disadvantaged, have poor access to health care, etc.

A lot of the discussion that we've been having is how do we enable access and really use this as a powerful, as a sharp tool to reduce inequity? I think the other discussion that is lively, and I think is going to be like a golden thread that we weave throughout our work, is how do we do this in a way that's responsible? How do we do this in a way that is safe, that is ethical, that is transparent, that is reliable, that is accurate?

And so as we generate some evidence of where we can use this in an effective way, … [we can] be sure that we're doing it in a way that doesn't perpetuate bias, that ensures the recipients of the tools, of the methods, are really protected. I think there's a new set of challenges, but we're embracing the potential of the tool in a way that's rational and safe.

To that effect, we have set up an internal group that focuses on responsible approaches to artificial intelligence. We're in the process right now of establishing an external advisory board on AI with a set of global experts. I think it's taking us in the right direction.

One quick thing to add is that a lot of energy this year is on leveraging large language models [or LLMs]. But the foundation has had a series of investments in artificial intelligence more broadly: on imaging, on understanding how we could use this for ultrasound for maternal and neonatal child health, and a couple of others. But, for obvious reasons, just given the proliferation of new and very powerful LLMs, this has certainly piqued our interest and that of our partners.

In its recent request for proposals, the foundation repeatedly states that it is seeking projects that would advance "safe approaches" to the use of LLMs. Could you elaborate on what that means?

We made the call quite broad. So, it wasn't focused on a very specific health case or focus, because part of what we wanted to do was ensure that communities, innovators, researchers had agency to select the priorities that they felt were most appropriate in their geography.
But some of the conditions of making the grants are that the innovators, or folks handing in proposals, must have given thought to how safe the approach is, how ethical the approach is, can we ensure that all of the things that should be in place for a proof of concept are in place, that we're not compromising individuals' privacy or doing something that would reinforce a preexisting bias?

The kinds of things we are seeing LLMs being particularly powerful for are the ability to reduce administrative burden, such as individuals using this to generate first-draft documents of concepts, and also identifying in a given administrative process that there is a lot of repetitive work being done that could be supported by an LLM.

I think the other big bucket — and there are four or five big buckets — is decreasing the time from evidence generation to that being transformed into policy, into implementation, and then ultimately having an impact. It's to identify the opportunities in the entire value chain. That means that there's a more direct and faster link between new evidence becoming available and that benefiting the communities [it] was intended for.

The third one is around analytical capability. Often, governments are constrained by the time, effort, and skills required to do analysis of their own budgets and their own strategies. There is now a unique opportunity to leverage LLMs to generate some unique insights. Part of that is also to optimize resource allocation. When a government is making a decision about whether to put 2% of its budget into commodities for tuberculosis, or 2% of that budget toward transport, it can do that in an informed, evidence-based way.

The last bucket is around communications and advocacy within civil society partners, community organizations, and governments to develop sharp communication tools, approaches, etc.

So some examples that we've seen are where you have large amounts of data that can be ingested into an LLM and provide unique outputs and insights. Can a smallholder farmer in a rural part of Rwanda or Kenya take a photo of a crop that's been infested by a parasite or pest, have that translated into text, ingested by the LLM, and analyzed, the pest identified, an action plan developed, guidance given to the smallholder farmer on where to go to get the appropriate resources to counteract the pest, and then a follow-up plan?

Are these things that you are already seeing in lower- and middle-income countries?

I think that what is happening now is that there is some level of experimentation. Innovators are energized. What we intended with this RFP [request for proposals] was to provide funding to generate evidence in a systematic way. What we're trying to do is respond to partners who are saying, "What should we do? What shouldn't we do?" by providing an opportunity to generate evidence.

What's next?

There is a lot of hype in the space. We understand that. But we want to be able to galvanize the positive momentum and lead with evidence. We see this as one step to unlocking the creative potential that partners on the ground are expressing. But it's going to be the first step of many to come.
Stephanie Beasley is a Senior Reporter at Devex, where she covers global philanthropy with a focus on regulations and policy. She is an alumna of the UC Berkeley Graduate School of Journalism and Oberlin College and has a background in Latin American studies. She previously covered transportation security at POLITICO.