The term “datafication” was coined to describe the consequences of the digital revolution: the growing generation, collection and storage of digital information concerning all aspects of the physical world (including earth activity, weather, climate, and biosphere), human lives and activities (including DNA, vital signs, consumption, and credit scores), and societal patterns (including communications, economic activity, and mobility).
The datafication of the world is fueled by the automatic generation of data through the billions of digital devices that surround us: cellphones and tablets, e-devices, security cameras, credit cards, badges and satellites. However, little of the information generated is actually used to improve people’s lives or to design better public policies.
Data, the commodity of the 21st century
The flow of information resulting from the data deluge is mainly stored within data centers, as a commodity, typically legally owned by the private companies collecting them — telecom operators, social media companies, or banks, among others.
These data are analyzed for internal and commercial purposes — think of how Amazon or Facebook operates, for example — and hold tremendous financial value. Companies whose investments, innovations and systems contribute to generating and storing these data cannot simply surrender them.
But many private companies do not realize the public good value of these data — including how they themselves could benefit from opening up “some” of their data if doing so helps grow economies or prevent epidemics. Even when they do, they face not only commercial but also ethical and legal disincentives to opening their data further. Indeed, not all data should be open. Personal data collected through our use of social networks, mobile phones, sensors and connected devices paint a fairly accurate picture of our way of life: our location, whether real-time or historical; the people we communicate with; the content of our private messages or emails; our heart rate; or even our most intimate feelings. We would not want such information to be publicly available.
Meanwhile, the case for opening and using data has become clearer in recent years. First, the “open data movement” has shown how opening up data can foster public innovation, civic engagement, accountability and transparency. Second, a handful of companies — chief among them telecom operators, including Orange, Telefónica and Telecom Italia — have experimented with “data challenges,” whereby some data are made available to researchers in a tightly controlled manner, an approach that is difficult to scale.
The results of these challenges stirred up growing demand for more “private” data to be made available. Some, such as Kenneth Cukier, the Economist’s senior editor for data and digital, even consider that not using these data is “the moral equivalent of burning books.” But the dilemma remains: between privacy and utility; between commercial, individual and societal considerations.
Which data should be accessed, for what and by whom?
The Open Algorithm project: Developing indicators, capacity and trust
To address the complex challenge of data access, Orange, MIT Media Lab, Data-Pop Alliance, Imperial College London and the World Economic Forum — supported by Agence Française de Développement and the World Bank — are developing a platform to unleash the power of “big data” held by private companies for public good in a privacy-preserving, commercially sensible, stable, scalable and sustainable manner.
In its initial phase of deployment, the Open Algorithm project, or OPAL, will focus on a small set of countries in Latin America, Africa and Asia, with technical support from a wide range of partners including Paris21, Microsoft, and Deloitte Consulting LLP.
OPAL’s core will consist of an open technology platform and open algorithms running directly on the servers of partner companies, behind their firewalls, to extract key development indicators relevant to a wide range of potential users, including national statistical offices, ministries, civil society organizations and media organizations. Examples of potential indicators and maps, produced with greater frequency and geographic granularity than currently available, include poverty, literacy, population density and social cohesion — all areas on which the literature has shown “big data” analysis can shed light.
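The pattern described above can be sketched in a few lines. This is an illustrative simplification, not OPAL’s actual implementation: the function name, record format and suppression threshold are all assumptions. The key idea is that vetted code runs on the data provider’s servers and only aggregate indicators, never raw records, leave the premises.

```python
# Illustrative sketch of the "open algorithm" pattern (hypothetical
# names and threshold; not OPAL's actual code). The algorithm runs
# behind the provider's firewall and returns only safe aggregates.

MIN_GROUP_SIZE = 50  # assumed threshold for suppressing small cells


def population_density_indicator(call_records, min_group=MIN_GROUP_SIZE):
    """Aggregate raw records into a per-district user count,
    suppressing any district with fewer than `min_group` distinct
    users so that small, identifiable groups are never released."""
    users_per_district = {}
    for record in call_records:  # record: dict with 'user_id', 'district'
        users_per_district.setdefault(record["district"], set()).add(
            record["user_id"]
        )
    # Only aggregates above the threshold are released; the raw
    # records themselves never leave the provider's servers.
    return {
        district: len(users)
        for district, users in users_per_district.items()
        if len(users) >= min_group
    }
```

In this sketch, the code travels to the data rather than the reverse, and the suppression threshold stands in for the privacy and commercial safeguards the platform would negotiate with each provider.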
Two recent developments further complicate the debate. One is the finding that “anonymizing” data is much harder than previously thought: the uniqueness of our behaviors and the interconnectedness of the datasets in which we appear make “re-identification” possible. This all but rules out “simply” releasing personal data stripped of personally identifiable information as a long-term solution.
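Why anonymization fails can be made concrete with a small sketch. This is a toy illustration on made-up data, not the code behind the research finding: even with names removed, the combination of a handful of location-and-time points in a trace is often unique to one person, so a pseudonymized trace can be matched back to an individual.

```python
# Toy illustration of re-identifiability (hypothetical data):
# measure what fraction of users have a trace that no one else shares.

def fraction_unique(traces):
    """traces: dict mapping pseudonym -> frozenset of (place, hour)
    points. Returns the fraction of users whose set of points matches
    no other user's, i.e. who are re-identifiable from those points."""
    counts = {}
    for points in traces.values():
        counts[points] = counts.get(points, 0) + 1
    unique = sum(1 for points in traces.values() if counts[points] == 1)
    return unique / len(traces)
```

On real mobility data this fraction tends to be very high even for short traces, which is precisely why removing names is not enough.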
Another development was the “Facebook emotion study,” where the social media giant used data and manipulated the newsfeeds of hundreds of thousands of users as part of an experiment that was perfectly legal but deemed unethical — putting the notion of what “informed consent” meant and entailed back at the forefront of these debates.
The concern that algorithms operate as “black boxes” that could embed and entrench bias and discrimination has also gained ground. And the pressure to use these data to improve people’s lives will most likely keep growing — including in support of the Sustainable Development Goals — alongside people’s demand for greater control over this use, in ways that respect individual and group privacy, commercial interests and prevailing legal standards.
Leverage and strengthen public-private-people partnerships and local ecosystems
As a “platform” to unleash the power of these “big data” held by private companies for public good, the AFD-supported OPAL initiative has three key aims:
1. To engage with data providers, users and analysts at all stages of its development, including during the development of algorithms.
2. To contribute to building local capacities and connections, and to help shape the future technological, political, ethical and legal frameworks that will govern the local collection, control and use of “big data” to foster social progress.
3. To build data literacy among users and partners; not just data literacy defined as “the ability to use data,” but conceptualized in a broader and deeper sense as literacy in the age of data: “the ability to constructively engage in society through and about data.” In the 21st century, being “data literate” in that sense will be as much a fundamental human capability as a useful professional skill set; both an enabler and a marker of human agency.

By “sending the code to the data” rather than the other way around, OPAL seeks to address these challenges, spur dialogue and develop data services on the basis of greater trust between all parties involved — including citizens, official statistical systems and private corporations.
Mass data literacy will be as essential to development and democracy as mass literacy has been during the 20th century. Building this kind of data literacy across institutions and groups will require large-scale sustained initiatives and investments that have not yet materialized.
With potential to change the trajectory of crises, such as famines or the spread of diseases, the innovative use of data will drive a new era for global development. Throughout this monthlong Data Driven discussion, Devex and partners — the Agence Française de Développement, BroadReach, Chemonics and Johnson & Johnson — will explore how the data revolution is changing our approach to achieving development outcomes and reshaping the future of our industry. Help us drive the conversation forward by tagging #DataDriven and @devex.
Thomas Roca is a researcher and statistician at the French Development Agency. He is developing AFD’s research program covering well-being, human development and alternative welfare indicators, including Big Data for Development. His field of work also covers data visualization and programming, and he is developing AFD’s data visualization web portal, the AFD Country Dashboard.
Emmanuel Letouzé is the director and co-founder of Data-Pop Alliance, a global coalition on big data and development created in 2014 by the Harvard Humanitarian Initiative, MIT Media Lab and Overseas Development Institute, joined by the Flowminder Foundation in 2016. He is a visiting scholar at MIT Media Lab and the author of U.N. Global Pulse's White Paper "Big Data for Development" (2012) where he worked as senior development economist. He previously worked as an economist for U.N. Development Program in New York and in Vietnam for the French Ministry of Finance. He holds a B.A. in political science and an M.A. in economic demography from Sciences Po Paris, an M.A. in international affairs from Columbia University, and a Ph.D. in demography from UC Berkeley. He is also a political cartoonist for various media.