The World Food Programme’s Feb. 5 announcement of a $45 million partnership with the algorithmic intelligence firm Palantir has sparked a firestorm of criticism. The partnership raises serious issues that must be addressed by WFP, which appears to be taking the right steps toward transparency. But this controversy is bigger than a single private sector agreement by a single agency, and it should serve as both a challenge and an opportunity for the entire humanitarian sector.
Humanitarians have become increasingly reliant on digital data, and on third-party partnerships to collect and process it to create operational impact. But the ecosystem for doing this responsibly is missing: the policies, procedures, and capacities that ensure core principles, human rights standards, and data regulations govern these partnerships.
This controversy is an opportunity for collective action: humanitarians can better protect vulnerable people by building the robust partnerships the sector needs to serve them more effectively.
Humanitarians can do three things, in short order, to ensure that third-party data partnerships do not harm the physical safety, human rights, or individual and group privacy of the world’s most vulnerable people. These actions will require more than robust engagement by a diverse set of actors; they will also require essential cultural changes in how humanitarians approach data. Donors and humanitarian leaders must embrace the fact that being fit for purpose in a digital world requires being open to partnerships and rigorously ethical.
First, humanitarians must recognize and mitigate the potential harms they can cause through the use of digital technologies in humanitarian work. To do so, humanitarians must come together to formally acknowledge that the Humanitarian Principles, the Humanitarian Charter, and the Universal Declaration of Human Rights require them to respect the rights of vulnerable people to information, and to agency over their own data.
Additionally, the sector must commission new commentaries on how international humanitarian law applies when armed actors engage in cyber and information warfare, as well as on how civilians and humanitarians must be protected from digital threats.
Second, adapting to a digital world requires collective regulation to enact these principles, coupled with independent review of digital services. Agencies must improve the procedures that govern their digital partnerships. This includes scrutiny of end-user license agreements; may extend to developing status-of-data agreements modeled on the status-of-forces agreements used in peacekeeping; and could extend to monitoring mechanisms that attend to the challenge of digital protection.
Third, mistakes will be made. There will be risks that become harms. A sector-wide, transparent and independent critical data incident management process is urgently needed to manage and mitigate problems, and to help improve digital guidelines as we learn. Doing no harm is not possible unless we know what the harm is and could be. Developing critical incident mechanisms for data will also help develop a common theory of harm.
Humanitarians have taken some first steps to adapt to a digital world. However, the deep need for digital standards, investments in capacity, and powerful accountability mechanisms remains — but it can be addressed in a way that continues to allow new partnerships and protects the people we serve. Investments in digital capacity, competencies, and capabilities must not be seen as merely investing in the humanitarian system. They are necessary steps toward seizing the opportunity of our digital world, and toward meeting our moral obligation to recognize that data is people.