Q&A: Biometric technologies in public — a security asset or a threat?

A surveillance camera. Photo by: Erik Mclean / Pexels

“When you combine the will to control people — using public security as a justification — with surveillance technologies, you get a melting pot of racism, penal actions, and incarceration,” said Paulo José Lara, head of digital rights programs in Brazil and South America at ARTICLE 19, an international human rights organization working to protect freedom of expression globally.

Brazil positioned itself as a pioneer in data governance in 2014 with the passage of its Civil Rights Framework for the Internet, often called the world’s first “internet bill of rights” and even a “constitution for the internet,” said to protect free expression, net neutrality, and the rights to privacy and connectivity, according to the World Wide Web Foundation.

But current uses of digital technologies and artificial intelligence have been criticized as being more about control than about ensuring freedoms. For example, a “fake news” bill approved in 2020 seriously undermined internet freedom, criminalizing certain content and obliging private-messaging apps to retain communications records. In recent years, the country has seen a rapid increase in public surveillance, the use of artificial intelligence systems for identification purposes, and disinformation.

While biometric technologies such as face and voice recognition are often seen as key to ensuring security in so-called smart cities, they can also jeopardize data protection, human rights, and freedom of expression, Lara explained.

Devex spoke with him about the human rights and data protection risks that come with deploying biometric technology in cities, as well as about public surveillance and the use of AI for identification purposes.

This conversation has been edited for length and clarity.

Many smart city innovations rely on biometric data. Why was it important to ban the deployment of facial recognition in trains and stations in São Paulo?

The first thing we have to address and understand about biometric technologies, and facial recognition in particular, is the ideology behind the science. Historically, profiling has been a racist and colonial endeavor used to classify people. While some think that classifying populations makes them safer, it poses a danger, however abstract, that needs to be addressed.

The ones who suffer the most from these technologies are populations that don’t know much about ways to protect themselves from violations of privacy and against a state that wants to control the population. It's a problem that hits poor and marginalized populations. It's a social problem and an economic issue.

In Brazil, we’ve seen lots of attempts to deploy not only facial recognition but emotional recognition as well in public transportation sites. Together with other civil society organizations, we carried out a successful action to ban emotional recognition in the São Paulo metro.

The proposal was to install 2,000 cameras across the network of trains and metro stations. At this moment, implementation is suspended, and we hope permanently. It’s not only about security at those sites; it’s about violating the right to protest in civic space. And we have data showing that public transport sites are not even where most crimes happen.

What risks does this type of surveillance technology pose to citizens, in particular vulnerable and marginalized groups?

One problem surveillance technologies pose to human rights and freedom of expression is that they can amplify the so-called chilling effect. When people know they are being monitored, they do not feel comfortable acting naturally, which violates the principle of freedom of expression.

Facial recognition technology also has a high margin of error, and research demonstrates that the errors disproportionately affect women and Black people. If those technologies misidentify someone, we risk putting innocent people in jail.

Another example is the public Wi-Fi connection offered by the government, often used by people without private access to the internet. Using public systems, they are at higher risk of being monitored than people who can afford a private plan.

Watch this conversation with ARTICLE 19’s Paulo José Lara. Via YouTube.

From your work around the use of AI and other smart city technologies in Brazil, what lessons have you learned that you can share with other countries in the region?

In our partnership with [Instituto] Lapin and INTERNETLAB, we created a smart city guide for public and private players around data protection, procurement, and the adoption of potentially privacy-violating technologies. This was a very interesting project that brought together civil society, researchers, and policymakers. We engaged in discussions about the implementation of certain technologies, including how cities should handle personal data and privacy when entering partnerships with private companies. We need to know the safeguards for human rights and freedom of expression in these procurements.

One of the key things when implementing those technologies is to foster the use of open-source technology whenever possible. We need to be aware that foreign and multinational companies are entering low- and middle-income countries for data-gathering purposes. We need to ensure full transparency from contracting through implementation and to know whether the technologies can violate human rights or interfere with freedom of expression.

We also need multistakeholder cooperation to benefit from the use of these technologies. We should take advantage of the many researchers and universities working on the topic, as well as the vibrant civil society engaged in the matter.

How has Jair Bolsonaro’s presidency affected internet freedom and digital rights in Brazil?

Since the early 2000s, Brazil has been a leader in internet and technology policy. We have the civil rights framework, which is a model for the whole world. We also implemented digital culture programs, giving communities tools to work with digital matters. But most of the good programs on internet connectivity, digital culture, internet rights, and so on have been halted by Bolsonaro’s government. It’s a shame: Brazil was a leader, and now we are behind many other countries.

“The ones who suffer the most from these technologies are populations that don’t know much about ways to protect themselves from violations of privacy and against a state that wants to control the population.”

— Paulo José Lara, head of digital rights programs in Brazil and South America at ARTICLE 19

We also see very worrying actions by this government. It’s important to highlight that the federal government has been trying to purchase monitoring systems that can [gain access to] people’s devices. The government is also providing the armed forces and military police with spyware, monitoring, and surveillance technologies. It’s a dangerous threat to civil liberties and to society in general.

Then there were recent attempts to modify the civil rights framework for the internet in Brazil, [with the aim of] enabling disinformation campaigns to be spread during the election year. Bolsonaro clearly has authoritarian inclinations, which also affects digital technology and digital rights.

Do you see a need for global regulation of AI and surveillance technologies, and would it be feasible?

I think some of the regulations and legislation around technological systems can serve as references for different countries to study and implement. What is fundamental and universal, and something every piece of legislation must abide by, is respect for human rights. But there should be some space for each country and each city to modify and adapt the regulations according to their social, economic, and cultural reality. How a community wants to deal with technology should be up to them.

On the internet connectivity issue, for example, multinational corporations have been trying to offer one single solution to provide global internet access, including to poor and remote areas. Do we really have to accept that those multinational corporations are the only ones that have a solution to connectivity? Or can the communities discuss, implement, and even regulate their own technology? I think this is one of the challenges that the governance of the internet has to deal with in the next couple of decades.

Visit the Generation Why series for more coverage on how we can ensure the digital space advances the rights of all young people and leaves no one behind. You can join the conversation using the hashtags #DevexSeries on #DigitalRights.

Read more:

Opinion: The challenges of protecting data and rights in the metaverse

Opinion: Is data becoming the 21st century’s resource curse?

How vulnerable are NGOs to cyberattacks? (Pro)