Asress Adimi Gikay (PhD)
Senior Lecturer in AI, Disruptive Innovation, and Law at Brunel University London
Photo credit: Dirk Ingo Franke, via Wikimedia Commons
Divergent Approaches to Regulating Live Facial Recognition
In what has been characterised as an 'Orwellian nightmare', the UK's Minister for Crime, Rt Hon Chris Philp MP, recently suggested that police be allowed to search the national passport database using facial recognition technology to tackle shoplifting. The suggestion came as preparations were underway for the UK's AI Safety Summit. There is some irony here for the surveillance-anxious participants who gathered to discuss AI safety: London is among the cities with the largest number of CCTV cameras and, more importantly, one where the police frequently use live facial recognition (LFR) in public spaces.
Since South Wales Police made the first arrest using LFR over six years ago, UK police have used the technology to locate criminal suspects in crowds. In LFR, artificial intelligence (AI) software compares, in real time, biometric facial images captured by a camera with existing facial templates of persons of interest held in a police-created database known as a 'watchlist'. This contrasts with retrospective facial recognition (also known as a 'post' system), where identification takes place in the absence of the person of interest, based on video footage or a still image taken from a source. The UK government has called upon police forces to expand the use of the technology, amidst growing concerns that it could endanger civil liberties.
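Conceptually, the real-time matching step works by reducing each face captured on camera to a numerical 'embedding' and comparing it against the stored templates of watchlist entries, raising an alert only when the similarity exceeds a threshold. The sketch below is a minimal illustration of that idea, not a description of any police system's actual implementation; the threshold value, the use of cosine similarity, and the randomly generated 'embeddings' are all assumptions made for the example.

```python
import numpy as np

SIMILARITY_THRESHOLD = 0.92  # hypothetical; real systems tune this per deployment

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face embeddings."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_against_watchlist(live_embedding, watchlist):
    """Return the best watchlist match above the threshold, or None.

    `watchlist` maps a person-of-interest ID to a stored face embedding
    (the 'facial template' described above).
    """
    best_id, best_score = None, SIMILARITY_THRESHOLD
    for person_id, template in watchlist.items():
        score = cosine_similarity(live_embedding, template)
        if score > best_score:
            best_id, best_score = person_id, score
    return best_id

# Toy data standing in for embeddings produced by a face-recognition model.
rng = np.random.default_rng(0)
watchlist = {"POI-001": rng.normal(size=128), "POI-002": rng.normal(size=128)}
camera_frame_embedding = watchlist["POI-001"] + rng.normal(scale=0.05, size=128)
print(match_against_watchlist(camera_frame_embedding, watchlist))  # -> POI-001
```

In a real deployment the threshold is the critical lever: raising it trades missed matches for fewer false alerts, which is one reason headline accuracy figures depend heavily on how a system is configured.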
By contrast, the European Union's (EU) upcoming AI Act, in the Parliament's Compromise Amendments, categorically bans the use of LFR. However, the 'trialogue negotiations' seem to have led to a compromise under which the use of the technology is permitted for specifically listed crimes punishable by at least five years' imprisonment. The EU's restrictive position seems intended primarily to appease civil society organisations, 12 of which wrote a letter to the EU Council in 2022 reiterating the need to prohibit the technology in categorical terms. Meanwhile, despite a recent call from 61 MPs and 31 civil society organisations demanding the immediate cessation of the use of the technology by UK police and private companies, efforts to stop the technology have remained unsuccessful in the UK.
In this post, I explain why the UK's approach should inform the regulation of LFR in the EU, using evidence from the UK police's use of the technology. I also introduce the theory of incrementalism, a normative framework for regulating novel technologies that pose evolving risks whose magnitude is not yet known, developed in my forthcoming Cambridge Law Journal article, 'Regulating Use by Law Enforcement Authorities of Live Facial Recognition Technology in Public Spaces: An Incremental Approach'.
Why the EU Should Take an Incremental Approach
Incrementalism calls for regulating the police's use of LFR, and by extension similar technologies with novel advantages and risks, through progressive adjustment of the existing legal framework in light of potential risks and evidence of actual harm. This differs from a regulatory framework that responds to the risk of harm assessed in abstract terms, without considering the context in which the technology is actually applied, the existing safeguards, or the technology's overall benefits.
I propose that this theory incorporate four main ingredients: sectoralism; reliance on existing legal frameworks; evidence-based regulation; and flexibility. This post explains only evidence-based regulation, one of the theory's key elements. The UK's prevailing approach to regulating LFR, and AI in general, reflects certain elements of incrementalism. A measured regulation of facial recognition technology in the EU requires adopting this theory in whole or in part.
Evidence-Based Regulation
The EU's position on LFR does not appear to be based on a thorough assessment of the technology's benefits and risks of harm, of public support, or of the ability of law enforcement authorities to use it in a proportionate manner. Yet these are important factors in choosing the appropriate response to a new regulatory phenomenon. The UK's experience provides excellent insight into the issue.
Evidence of Public Support and Benefits of the Technology
In a 2019 UK national survey conducted by the Ada Lovelace Institute, 70% of respondents thought police should be permitted to use facial recognition in criminal investigations, with 71% supporting its use in public spaces if it helps reduce crime. This positive public view aligns with the existing evidence of the technology's effectiveness in tackling crime. In 2020 and 2022, the London Metropolitan Police Service identified nine suspects across eight live facial recognition deployments. Earlier, in 2018, the technology reportedly assisted South Wales Police in making 450 arrests. Several recent deployments in the UK have shown the technology's effectiveness in helping to arrest people suspected of committing violent crimes.
In October 2023, the Metropolitan Police identified 149 suspects of retail crime using retrospective facial recognition, comparing hundreds of CCTV still images of 'prolific retail offenders' provided by retail businesses against custody images. The result is significant, as retail businesses are crucial to the UK economy, providing a job for one in ten Londoners. Additionally, these crimes cause an estimated £1.9 billion in lost revenue and involve rampant abuse of retail workers.
With the technology garnering some public support, campaigners struggle to present persuasive evidence of harm to back their push for a blanket prohibition or suspension in the UK. And as EU member states have not allowed the technology to be used, it is impossible to know whether it actually causes harm there. Advocacy groups generally highlight the inaccuracy of facial recognition systems, especially in identifying women of colour, with Big Brother Watch claiming that the Met's and South Wales Police's facial recognition systems are over 89% inaccurate. However, the National Physical Laboratory independently tested two facial recognition systems used by UK police in 2022. The results showed that the software underperformed the most on Black female faces, but the discrepancy in accuracy rates across demographics was statistically insignificant. Equally importantly, the technology's inaccuracy does not inevitably translate into harm, owing to the existing legal safeguards the UK police adhere to, safeguards that exist or can be implemented in the EU.
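The gap between the '89% inaccurate' claim and the NPL's findings may partly come down to what is being counted. One plausible reading, offered here as an assumption rather than a claim about Big Brother Watch's methodology, is that such figures count false alerts as a share of all alerts. Because LFR scans very many faces against a short watchlist, even a tiny per-face false-match rate can make most alerts false while per-face accuracy remains extremely high, as the toy calculation below (with invented numbers) illustrates.

```python
# Toy numbers, invented purely for illustration of the base-rate effect.
faces_scanned = 100_000      # faces checked during a deployment
watchlist_hits = 10          # genuine persons of interest who walk past
false_match_rate = 0.0008    # per-face false-alert rate (0.08%)
true_match_rate = 0.9        # chance a genuine hit is flagged

false_alerts = (faces_scanned - watchlist_hits) * false_match_rate  # ~80
true_alerts = watchlist_hits * true_match_rate                      # ~9

share_false = false_alerts / (false_alerts + true_alerts)
print(f"{share_false:.0%} of alerts are false")  # ~90%, despite a 0.08% per-face error rate
```

Both descriptions can therefore be true of the same system, which is why headline 'inaccuracy' percentages need careful unpacking.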
Evidence of Safe and Proportionate Use
Even if the technology were as inaccurate as claimed, there is no reported case of serious harm resulting from its use in the UK, because UK police use it safely and proportionately. This can be contrasted with the US, where troubling incidents of wrongful arrest involving facial recognition systems have been documented. For instance, Nijeer Parks, a Black American misidentified by facial recognition, was wrongfully incarcerated for ten days. According to a civil complaint against the Director of the Woodbridge Police Department and others, Nijeer Parks voluntarily visited the police station to clear his name upon learning of an arrest warrant issued for him, in what he believed to be a case of mistaken identity. The police subjected him to coercive interrogation and solitary confinement to secure a confession, whilst ignoring his alibi and the mismatch between the DNA and fingerprints found at the crime scene and his own.
These kinds of incidents seem to reflect police misconduct more than any inherent challenge posed by the technology. Despite the concerns in the US, in 2021 no fewer than seventeen state legislatures rejected bills to ban facial recognition. Some jurisdictions, including New Orleans and Virginia, that had previously banned facial recognition have now reversed course to allow its regulated use. Legislatures seem to want legal frameworks that strike a balance between the technology's benefits and addressing its risks.
Such a legal framework largely exists in the UK and the EU, as the two jurisdictions have similar human rights regimes. UK police are obligated to use facial recognition technology in compliance with the Human Rights Act and the Equality Act, the latter imposing an equality impact assessment obligation. An equality impact assessment requires the police to proactively tackle the technology's potential discriminatory impact on specific groups and to implement risk mitigation measures. These measures are supplemented by privacy law under the European Convention on Human Rights and data protection rules under the Law Enforcement Directive (applicable in both the EU and the UK).
In the UK, the police also adhere to a national document prescribing detailed procedures for deploying live facial recognition technology, known as authorised professional practice. This national code of practice has binding force. As a result of these comprehensive safety frameworks, UK police have used facial recognition technology for seven years without a single instance of wrongful arrest or abuse. As the European Union has advanced legal frameworks on human rights, privacy, data protection, and the rule of law in general, it is inconceivable that the result of using facial recognition technology in the EU would differ from that in the UK.
Advocacy groups also highlight privacy intrusion and the expansion of surveillance as further concerns about the use of LFR in public spaces. Nevertheless, these are addressed in the UK by legally limiting the duration, purpose, and context of the technology's use. The police are required to assess the proportionality of using the technology, especially in places where it could have serious privacy implications, such as hospitals and schools. This is mainly because Article 8 of the European Convention on Human Rights, like its EU counterparts, Articles 7 and 52 of the Charter of Fundamental Rights, allows interference with the right to privacy based on an assessment of proportionality and necessity. The existing legal framework in the UK does not permit surveillance at will. Things are no different in the EU, and in any event, a loophole that creates room for excessive surveillance could be addressed by legislation.
Additionally, the facial recognition software used by the police does not retain individuals' biometric data unless they are positively matched, and personal data generated during its use is automatically deleted within a short time. These are intentionally built-in features of the software, aimed at lessening the technology's privacy and data protection impacts. Finally, LFR is currently used in public spaces, where people are unlikely to engage in private activities that should outweigh the public's interest in tackling violent crime, although, again, specific deployments must observe the requirements of necessity and proportionality.
Academics and advocacy groups often express doubt about the clarity of the legal basis for using facial recognition, or about the existence of mechanisms of redress for harms caused by the use of the technology. But this is not based on sound legal analysis. The current law does allow the police to gather information to fight crime, including by using new technological tools. Furthermore, under civil liability law, the police can be liable to pay compensation if they wrongfully detain or interrogate someone following a misidentification by facial recognition technology. This is not to suggest that the existing law is without loopholes, but addressing any legal gap requires delicate balancing rather than unnecessarily restrictive measures.
The Need to Reverse Course in the EU
The EU Commission's initial draft of the AI Act permitted the limited use of LFR for law enforcement purposes. First, by way of exception, it allowed the use of the technology for narrowly defined, specific, and legitimate purposes [Art. 5(1)(d)]. These purposes are:
(i) the targeted search for specific potential victims of crime, including missing children;

(ii) the prevention of a specific, substantial and imminent threat to the life or physical safety of natural persons, or of a terrorist attack; and

(iii) the detection, localisation, identification or prosecution of a perpetrator or suspect of a crime with a maximum sentence of at least three years that would allow for issuing a European Arrest Warrant.
Second, the relevant law enforcement authority must demonstrate that the use of the technology is justifiable against (a) the seriousness, probability and scale of the harm caused in the absence of the use of the technology; (b) the seriousness, probability and scale of the consequences of the use of the technology for the rights and freedoms of all persons concerned; and (c) the compliance of the technology's use with necessary and proportionate safeguards and conditions in relation to temporal, geographic and personal limitations [Art. 5(2)-(3)]. The authority proposing to use the technology bears the burden of justification.
Third, the relevant law enforcement authority must obtain prior express authorisation from a judicial authority or a recognised independent administrative body of the Member State in which the technology is to be used, issued upon a reasoned request. Where duly justified by urgency, the police may apply for authorisation during or after use [Art. 5(3)].
The Parliament's Compromise Amendments categorically banned LFR, whether used by private companies or by law enforcement authorities. As this post has demonstrated, the evidence from the UK, as well as the recent reversal of bans in the US, clearly indicates that the EU's position is not based on concrete evidence. As mentioned earlier, the 'trialogue negotiations' seem to have led to a compromise under which the use of the technology is permitted for specific crimes punishable by at least five years' imprisonment. This is unnecessarily restrictive, as a host of crimes, including money laundering, financial fraud and other offences, are not envisioned to be among the crimes for which the technology can be used.
A Call for Measured Regulation
The thinking behind the EU's approach seems to be heavily influenced by campaigners who depict the use of the technology as 'Orwellian' to induce public fear, regardless of the context in which it is used. It appears to be a knee-jerk reaction rather than an evidence-based response. The UK's current practice and legal framework certainly have some loopholes to close. For instance, the technology could today be used for any crime, regardless of the seriousness of the crime in question. But addressing these kinds of details does not entail suspending the use of the technology or prohibiting it entirely. Nor does it require unnecessarily restricting it. Any legislative effort that aims to strike a delicate balance between the societal benefits and risks of the technology should take an incremental approach, one that allows a timely response to evolving risks based on actual evidence of harm rather than conjecture. Starting with the strictest regulatory framework, uninformed by evidence, could needlessly deny society the benefits of the technology.