Asress Adimi Gikay (PhD)
Senior Lecturer in AI, Disruptive Innovation, and Law
Photo credit: Irbsas, via Wikimedia Commons
The Call for a Ban on Real-Time Remote Biometric Identification Systems
It has been around two years since the European Commission introduced its Draft Artificial Intelligence Act ("EU-AIA"), which aims to provide an overarching AI safety regulation in the region. The EU-AIA's risk-based approach has been severely criticised, mainly for failing to take a fundamental rights approach to regulating AI systems. This post focuses on the EU-AIA's position on the use of Real-Time Remote Biometric Identification Systems (RT-RBIS) by law enforcement authorities in public spaces, which continues to cause the most controversy.
The EU-AIA defines RT-RBIS as a "system whereby the capturing of biometric data, the comparison and the identification all occur without a significant delay" [EU-AIA Art. 3(37)]. The regulation covers the real-time processing of a person's biological or physical characteristics, including facial and bodily features, living traits, and physiological and behavioural characteristics, through a digitally connected surveillance device. The most commonly known RT-RBIS is facial recognition technology (FRT), a process by which AI software identifies or recognises a person using their facial image or video. The software compares the individual's digital image captured by a camera to an existing biometric image, estimates the degree of similarity between the two facial templates, and identifies a match. In the case of real-time systems, capturing and comparing images occur almost instantaneously.
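The matching step described above can be sketched as a similarity comparison between two embedding vectors. This is a minimal illustration only: the four-dimensional templates and the 0.8 threshold are illustrative assumptions, not parameters of any real FRT system, which would use much higher-dimensional templates and operationally tuned thresholds.

```python
import math

def cosine_similarity(a, b):
    # Degree of similarity between two facial templates (embedding vectors).
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def is_match(live_template, stored_template, threshold=0.8):
    # A "match" is declared when similarity exceeds a chosen threshold;
    # the threshold trades off false positives against false negatives.
    return cosine_similarity(live_template, stored_template) >= threshold

# Illustrative 4-dimensional templates (real systems use hundreds of dimensions).
captured = [0.9, 0.1, 0.4, 0.2]   # template extracted from the live camera feed
enrolled = [0.85, 0.15, 0.38, 0.22]  # template from an existing biometric database
print(is_match(captured, enrolled))  # prints True: the vectors are very similar
```

The choice of threshold is where the accuracy risks discussed below arise: a lower threshold produces more false matches, a higher one more missed identifications.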
As EU institutions, Member States, and stakeholders continue to discuss the EU-AIA, there is growing dissent against the use of RT-RBIS for law enforcement purposes in publicly accessible spaces. In 2021, the European Parliament invited the Commission to consider a moratorium on the use of this technology by public authorities on premises meant for education and healthcare. In response to the EU Council's latest proposed revision of the EU-AIA, on October 17, 2022, 12 NGOs wrote a letter to the EU Council reiterating the need to prohibit the technology unconditionally.
The Risk Posed by the Technology
RT-RBIS poses multiple risks that might jeopardise individual rights and citizens’ overall welfare.
As the technology is still evolving, there remains the risk of inaccurate analysis and decisions made by the system. In the United States, police have used FRT to apprehend individuals suspected of a crime, and multiple instances of mistaken identification have led to wrongful arrests and pre-trial incarcerations. In one example, a Black American man wrongly identified by a non-real-time FRT as a suspect in shoplifting, resisting arrest, and attempting to hit a police officer with a car spent eleven days in a New Jersey jail. Between January 2019 and April 2021, 228 wrongful arrests were reportedly made based on FRT in the State of New Jersey.
The deployment of RT-RBIS in public spaces could cause greater harm than non-real-time biometric identification systems. These harms include missing flights, false arrests, and prolonged and distressing police interrogations that have adverse socio-economic and psychological effects on law-abiding members of society.
RT-RBIS could also be applied discriminatorily, disproportionately targeting specific groups. In a 2019 study, researchers found that FRT falsely identifies "Black and Asian faces 10 to 100 times more often than white faces." False positives were found to be between "2 and 5 times higher for women than men." Whilst an ethical and inclusive machine learning programme could alleviate this, the potential for discriminatory application of the technology cannot be ignored. In the UK, existing policing practice has been criticised for subjecting ethnic minorities to disproportionate stops and searches. The police should not be allowed to use technology to perpetuate similar stereotypical practices.
Lastly, RT-RBIS could further normalise surveillance culture and expand the infrastructure for it. Public spaces such as airports, train stations, and parking lots could be equipped with cameras that law enforcement authorities could activate for live biometric identification in case of necessity. This could expose the public to the risk of state surveillance. The use of FRT by authoritarian governments to crack down on the exercise of democratic rights is becoming common practice. There is currently an ongoing legal challenge against Russia before the European Court of Human Rights for mass surveillance of protests using FRT.
The risks highlighted above must be addressed seriously and comprehensively. However, is a complete ban on the use of the technology a reasonable solution?
Qualified Prohibition and Fundamental Rights Approach under the EU AI Act
Due to the high risk to fundamental rights posed by some AI systems, scholars have argued that the EU-AIA should regulate these systems through a fundamental rights approach. As fundamental rights are given strong legal protection, any measure that interferes with them should meet three legal requirements:

First, the interference must pursue a narrowly defined, specific, and legitimate aim.

Second, the burden of proving the necessity and proportionality of interfering with fundamental rights lies with the authority seeking to interfere with such rights.

Third, a court or a similar independent body determines whether the authority has met the threshold of its burden of justification.

These requirements involve a careful judicial balancing act. The EU-AIA's qualified prohibition of the use of RT-RBIS effectively adopts the same approach.
First, the EU-AIA permits, by way of exception, the use of the technology for narrowly defined, specific, and legitimate purposes [EU-AIA Art. 5(1)(d)]. These purposes are: (i) the targeted search for specific potential victims of crime, including missing children; (ii) the prevention of a specific, substantial and imminent threat to the life or physical safety of natural persons or of a terrorist attack; and (iii) the detection, localisation, identification or prosecution of a perpetrator or suspect of a crime carrying a maximum sentence of at least three years that would allow for issuing a European Arrest Warrant. These are specific and legitimate purposes for restricting fundamental rights, depending on the context.
Second, the relevant law enforcement authority must demonstrate that the use of the technology is justifiable against: (a) the seriousness, probability and scale of the harm that would be caused in the absence of the use of the technology; (b) the seriousness, probability and scale of the consequences of the use of the technology for the rights and freedoms of all persons concerned; and (c) the compliance of the technology's use with necessary and proportionate safeguards and conditions in relation to temporal, geographic and personal limitations [EU-AIA Art. 5(2)-(3)]. The authority proposing to use the technology bears the burden of justification.
Third, the relevant law enforcement authority must obtain prior express authorisation from a judicial or a recognised independent administrative body of the Member State in which the technology is to be used, issued upon a reasoned request. If duly justified by urgency, the police may apply for authorisation during or after use [EU-AIA Art. 5(3)].
The preceding analysis demonstrates that the EU-AIA does not give the police a blank cheque to conduct spatially, temporally, and contextually unlimited surveillance. Although the EU-AIA does not explicitly employ fundamental rights language in the relevant provision, it entails a balancing act by courts, which must determine whether the use of RT-RBIS is necessary and proportionate to the purpose in question by considering multiple factors, including human rights.
The Call for Categorical Prohibition is Unsound
The fear of increasing surveillance is one of the grounds for the heightened call for the complete prohibition of RT-RBIS. Nevertheless, viewed within the overall context, the envisioned use of the RT-RBIS under the EU-AIA does not significantly change the existing surveillance culture or infrastructure.
Amid Corporate Surveillance Capitalism
Contemporary societies now live under massive corporate surveillance capitalism. Big Tech companies such as Facebook, Google, Twitter, Apple, Instagram, and many other businesses access our personal data effortlessly. They know almost everything about us: our location, addresses, phone numbers, private email conversations and messages, food preferences, financial conditions, and other information we would prefer to keep confidential. Surveillance is the rule rather than the exception, and we have limited tools to protect ourselves from pervasive privacy intrusions.
Whilst surveillance by law enforcement is used, at least in theory, to enhance public welfare, such as prosecuting criminals and delivering justice, Big Tech uses it to target us with advertisements and behavioural analysis. The fear of law enforcement's use of RT-RBIS in limited instances is inconsistent with our tolerance for Big Tech corporate surveillance. This does not mean we must sink further into surveillance culture, but we should not apply inconsistent policies and societal standards that are detrimental to the beneficial use of the technology.
Minimal Change in Surveillance Infrastructure
The deployment of RT-RBIS as envisioned by the EU-AIA is unlikely to change the current surveillance infrastructure significantly, in which Closed-Circuit Television (CCTV) cameras are pervasively present. In 2021, Germany had an estimated 5.2 million CCTV cameras, most facing publicly accessible spaces. The UK has over five million surveillance cameras, more than 691,000 of which are in London. On average, a London resident could be caught on CCTV cameras 300 times daily.
The police can access this footage during criminal investigations, in practice probably without needing a search warrant. It is improbable that private CCTV camera owners would refuse the police access to footage for lack of a search warrant, unless they are involved in the crime or protecting others. At the same time, footage from these cameras plays an instrumental role in solving serious crimes. Overall, the surveillance infrastructure would not significantly change; and if it does, it is for the greater public good.
Ethical Development and Use Guidelines
The potential biases or disproportionate use of the technology against certain groups could be tackled by designing ethical standards for the development, deployment and use of AI systems. Such standards include ensuring that AI systems are bias-free before deployment and requiring law enforcement authorities to adopt clear, transparent and auditable ethical guidelines. The EU-AIA itself contains several provisions to ensure this.
Maintaining the EU-AIA's Provisions on RT-RBIS
The use of RT-RBIS, as envisioned under the EU-AIA, does not fundamentally change the existing surveillance culture and infrastructure. Nor does it unreasonably increase the surveillance power of the state. On the contrary, a categorical ban would impede beneficial limited use. Therefore, the provisions of the EU-AIA governing the limited use of RT-RBIS by law enforcement authorities in publicly accessible spaces must be maintained. Stakeholders should resist the temptation to implement radical solutions that will harm societal interests, and focus instead on developing ethical guidelines for the development, deployment and use of the technology.