An inside look at ICRC's AI exploration
We speak to Red Cross "techplomacy" delegate Philippe Marc Stoll about the AI tools the agency is working on.
By Rebecca L. Root // 01 December 2023

From drones to document digitization, new algorithms to automated monitoring tools, the International Committee of the Red Cross is exploring a multitude of ways in which artificial intelligence might enhance its mission of protecting and assisting those caught in armed conflict.

A year ago, it opened a space in Luxembourg for its cyberspace delegation to develop, test, and explore various forms of digital technology, including AI. Philippe Marc Stoll, an ICRC senior “techplomacy” delegate, described the center as a “sandbox” where “tech geeks” can trial various AI ideas.

“The process is strict because we don’t want … to create new problems or harm for populations if we use it without having tested it in a safe space,” Stoll said.

Rather than rushing to deploy, the aid organization is thoroughly testing interventions and often building its own algorithms rather than trusting those already in the public domain.

“For us, one of the problems we face is the lack of transparency of the different solutions that exist. Companies don’t always want to share them with us. There’s also the risk of being locked into one of their systems” — getting stuck with it because it holds all their data — Stoll said. Instead, ICRC is working in partnership with various research institutes and universities on AI solutions.

Speaking to Devex, Stoll lifted the lid on ICRC’s approach to AI and the innovations it’s exploring. This conversation has been edited for length and clarity.

How is ICRC engaging with AI?

On one side, there’s the question of how we can take advantage of these technologies to solve problems. Here, I'm very specific in trying to be problem-driven rather than solutions-oriented. We look at problems and [ask] if AI can help us or not, rather than having AI solutions [and then trying] to find a problem.

On the other side, there’s internal reflection about what our policy around the use of AI should be. AI can bring many solutions, but what are the downsides of it? What are the limits? What are the ethical principles we should apply?

The last one is the link to how it will impact conflict and the way it will create additional vulnerability [and] suffering on a population. …

For us, there is the overarching element of the fundamental principles of … ICRC, such as humanity, neutrality, impartiality, and how this technology puts pressure on these principles. … [For example], the biased nature of AI that exists in commercial spaces, where products are generally developed in the West with data sets that are not very transparent. … This is something we're a bit worried about. … Maybe we will exacerbate inequity or problems for populations that … are already on the margins. This is a tension with our principle of impartiality.

Then we also see some tension on the question of perceived neutrality. … [For example], if we use certain tools, to what extent [do] we look even more Western than we are?

We try to look at AI with this overarching element [of the ICRC principles], and then the applications, the policies, and the legal dimensions are defined according to that.

What problems is ICRC currently looking to solve with AI?

We’ve noticed some shortcomings in our capacity to process satellite images quickly in order to better map where vulnerable populations are [so that we can] bring in more precise and specific assistance. This is why we have tried to find a way to use algorithms to process large volumes of satellite imagery.

We have also looked at how we can better protect privacy and discovered that we can benefit from privacy-preserving machine learning algorithms. Lastly, in order to improve search and access to our institutional memory, we are looking at developing specialized large language models, or LLMs [AI programs that can recognize and generate text].
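Stoll does not specify which privacy-preserving techniques ICRC is testing. One common building block in this family, though, is differential privacy: adding calibrated noise to a statistic so that aggregates can be shared without exposing any individual record. The sketch below shows the standard Laplace mechanism; the query, the count, and the epsilon budget are hypothetical illustrations, not ICRC's system.

```python
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Return a differentially private estimate of `true_value`.

    Adds Laplace noise with scale sensitivity/epsilon, the standard
    mechanism for epsilon-differential privacy on numeric queries.
    """
    scale = sensitivity / epsilon
    return true_value + np.random.laplace(loc=0.0, scale=scale)

# Hypothetical example: release the count of registered individuals in a
# region without revealing whether any one person is in the database.
# A count changes by at most 1 when one record is added or removed,
# so its sensitivity is 1.
registered = 1_482   # illustrative value, not real data
epsilon = 0.5        # smaller epsilon = stronger privacy, more noise
noisy_count = laplace_mechanism(registered, sensitivity=1.0, epsilon=epsilon)
print(f"Private count: {noisy_count:.0f}")
```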
What AI solutions has ICRC implemented thus far?

We are not yet fully in the total implementation. There are always tests. One of them is drones [that are searching for landmines] with thermal cameras and AI, which is being tested right now. … This is being done with Waseda University in Japan. The drones fly over larger spaces to identify metallic objects, and the camera can tell the difference between a bucket and a landmine. It’s allowing us to look at larger spaces, but also reduce the false positives, which was one of the problems we faced before.

There is another one, which is [using AI to] try to understand patterns of violence [by armed forces and armed groups over time]. … It will help us observe changes in behavior, interrogating the reasons why violence might have surged, dropped, or changed … [and] who or what might have been influential over that. … We had to build our own algorithm, our own database, so that when we have results, they are not so biased, or at least we understand the bias, because we master the whole chain and we understand where the data comes from.

Then we also look at using AI to read satellite images. … It’s an automated monitoring tool that combines deep learning with open-source satellite images so we can monitor areas [being used by conflict parties] in almost real time.

We are also digitizing our archives, and right now we are using handwriting recognition tools to transfer handwritten reports so that they exist in digital form and are easier to work with.

We are testing some tools and we own the whole process, so I see this [as] overall internal knowledge [that] we can use toward working with populations.
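The monitoring tool itself is not public, but the pattern Stoll describes, comparing successive open-source satellite images of the same area to flag changes, can be illustrated with a deliberately simplified sketch. A production system would use a deep learning model for this step; here plain pixel differencing with NumPy stands in for it, and the image shapes and threshold are hypothetical.

```python
import numpy as np

def flag_changed_regions(before: np.ndarray, after: np.ndarray,
                         threshold: float = 0.2) -> np.ndarray:
    """Return a boolean mask of pixels that changed between two passes.

    `before` and `after` are co-registered grayscale scenes scaled to
    [0, 1]. In a real monitoring pipeline, a trained deep learning
    model would replace this simple differencing step.
    """
    if before.shape != after.shape:
        raise ValueError("Images must be co-registered to the same grid")
    return np.abs(after - before) > threshold

# Hypothetical 256x256 scenes from two acquisition dates.
rng = np.random.default_rng(seed=0)
scene_t0 = rng.random((256, 256))
scene_t1 = scene_t0.copy()
scene_t1[100:120, 100:140] += 0.5   # simulate new structures appearing
scene_t1 = scene_t1.clip(0, 1)

mask = flag_changed_regions(scene_t0, scene_t1)
print(f"{mask.sum()} pixels flagged as changed")
```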
From your perspective, what is the most interesting AI intervention that ICRC is exploring?

One of the most interesting projects is called Missing Persons Digital Matching. It facilitates a search of names across various databases, and also across different ways of writing names. With this tool, we have managed to reunite many people.
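ICRC has not published how Missing Persons Digital Matching works, but matching names across databases and across spelling variants is a classic fuzzy string matching problem. Below is a minimal sketch using Python's standard library difflib; the names, the normalization step, and the 0.8 similarity cutoff are all hypothetical illustrations, not the agency's actual system.

```python
from difflib import SequenceMatcher

def normalize(name: str) -> str:
    """Lowercase a name and collapse hyphen/whitespace variants."""
    return " ".join(name.lower().replace("-", " ").split())

def similarity(a: str, b: str) -> float:
    """Ratio in [0, 1] of how closely two normalized names match."""
    return SequenceMatcher(None, normalize(a), normalize(b)).ratio()

def candidate_matches(query: str, records: list[str],
                      cutoff: float = 0.8) -> list[tuple[str, float]]:
    """Return registry entries resembling `query`, best match first."""
    scored = [(rec, similarity(query, rec)) for rec in records]
    return sorted((s for s in scored if s[1] >= cutoff),
                  key=lambda s: s[1], reverse=True)

# Hypothetical registry entries with spelling and transliteration variants.
registry = ["Mohammed Al-Rashid", "Muhamad al Rashid", "Maria Gonzales"]
print(candidate_matches("Mohamad Al Rashid", registry))
```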
What do you see as the future of AI in ICRC?

There is quite a lot [that could happen], but again, there is this debate as to what extent it's going to help to have an internal chatbot, for example, or for documents to exist somewhere [digitally].

What are you most worried about?

What I’m worried about is the growing use of AI in decision-making, weapons, etc., with humans losing control over decisions in a conflict. I can make a small comparison: Autonomous cars are not yet on our roads but, according to what I read, they work. You have tests, people say there are even fewer accidents, but we haven’t solved the legal and ethical dimensions yet. Weapons that are totally autonomous are already in the news [though not in use yet, or only in a very limited way]. It gives you a little bit of the scope of what worries us.

What advice would you have for others in the sector embarking on an AI journey?

Be sure that the populations you are trying to serve won’t be harmed by the technology you deploy. That’s the first principle. It’s looking from the people’s perspective, and not only by just saying “oh, what do they think?” but discussing and going directly to them.
Rebecca L. Root is a freelance reporter for Devex based in Bangkok. Previously senior associate & reporter, she produced news stories, video, and podcasts as well as partnership content. She has a background in finance, travel, and global development journalism and has written for a variety of publications while living and working in Bangkok, New York, London, and Barcelona.