Asress Adimi Gikay (PhD), Senior Lecturer in AI, Disruptive Innovation and Law, Brunel University of London
Photo credit: Abyssus, via Wikimedia Commons
Live facial recognition on the rise
Live facial recognition (LFR) is quickly gaining ground across Europe, with countries like Germany having used it to target serious criminal offences. The technology scans people’s faces in real time and matches them against police watchlists (e.g., people suspected of committing serious crimes). The EU’s Artificial Intelligence (AI) Act allows police in member states to use LFR for serious crimes such as terrorism. However, implementing the AI Act in member states is likely to face challenges, as technical issues such as accuracy, as well as the legal boundaries of the technology’s use, are yet to be adequately tested.
Meanwhile, the UK Metropolitan Police have gained extensive experience in managing the risks posed by the technology, arresting more than 1,000 people between January 2024 and August 2025. In August 2025, despite opposition from 11 civil liberty groups, the Metropolitan Police deployed LFR at Europe’s largest street festival celebrating African-Caribbean culture, the Notting Hill Carnival, making 61 arrests.
The Metropolitan Police have taken the most significant steps to address one of the biggest challenges in the use of the technology, namely ethnic bias. However, controversy remains as to whether ethnic bias has been adequately tackled, with the data interpreted differently to support whichever narrative is being advanced. Misconceptions or misframing of critical notions in the field of surveillance also shape public perception and could inform policy and regulatory choices that are not necessarily evidence-based. I believe the prevailing positions adopted by academics and civil society groups partly reflect this state of affairs: selective use of data, unwarranted anxiety about surveillance and misconceptions around core legal concepts.
The view predominantly advanced today by academics and civil liberty groups is a proposal to ban, or impose a moratorium on, the use of LFR on the grounds that it is inaccurate, ethnically biased, susceptible to racially discriminatory use and enables mass surveillance. Whilst these are valid concerns, the Metropolitan Police’s experience over the past decade, and the debate it has sparked, illustrates that discussion of how to govern the technology often does not fairly weigh human rights and public safety concerns. Drawing on the use of LFR in UK policing, in this post I cover issues that often do not surface in wider public discourse, some of which provide crucial insights into how LFR can be deployed in the EU under the AI Act as well as in other jurisdictions.
From backlash to acceptance
Critics often describe policing facial recognition as an Orwellian surveillance tool. Yet history shows facial recognition is not the first or only technology to raise such fears.
When Transport for London released a poster in 2002 announcing CCTV on buses, the design featured a double-decker bus gliding under a sky of floating eyes. Its slogan read: “Secure Beneath the Watchful Eyes.” Simon Davies, the then head of Privacy International, described it as “acutely disturbing.” Two decades later, CCTV is widely accepted as an essential tool for solving crimes.
Big Brother Watch initially opposed airport facial recognition e-gates, warning that the system creates a massive, privacy-intrusive database of personal information and is prone to error. Today, automated border control in Europe is considered a privilege, allowing faster passport control, available primarily to European passport holders. ‘Other travellers’ undergo more intrusive security checks, including fingerprinting.
New technologies usually cause alarm until their public benefits become clearer and they gain legitimacy. I don’t believe policing facial recognition is any different.
Measuring the impact of ethnic bias is tricky
Concerns about bias in facial recognition stem from early studies of commercial gender-classification algorithms and from the Metropolitan Police’s initial deployments, which showed poorer accuracy, especially for Black women. However, a 2023 audit by the National Physical Laboratory (NPL), commissioned by the Metropolitan Police, found that when the system is optimally configured, it works without significant ethnic disparities.
A crucial factor is the ‘recognition confidence threshold’, or ‘face match threshold’, which determines how strict the software is when matching faces. It ranges between 0 and 1. Higher settings reduce errors but yield fewer face matches, while lower settings give more matches with less accuracy. The Metropolitan Police currently use 0.64, the level recommended by the NPL because it reduces ethnic bias to a point where it can be treated as not concerning (statistically insignificant).
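For readers unfamiliar with how such thresholds operate, the short Python sketch below illustrates the trade-off. The names and similarity scores are invented purely for illustration and do not represent the Metropolitan Police’s or the NPL’s actual systems.

```python
# Illustrative sketch only: how a face match threshold gates alerts.
# The similarity scores below are made up for demonstration.

WATCHLIST_SCORES = {
    "passer-by A": 0.71,  # hypothetical similarity scores (0-1) returned by
    "passer-by B": 0.63,  # a face-matching model against a police watchlist
    "passer-by C": 0.58,
}

def alerts(scores: dict[str, float], threshold: float) -> list[str]:
    """Return the people whose watchlist similarity score meets the threshold."""
    return [person for person, score in scores.items() if score >= threshold]

for threshold in (0.56, 0.60, 0.64):
    print(f"threshold {threshold:.2f}: {alerts(WATCHLIST_SCORES, threshold)}")

# Raising the threshold produces fewer alerts (and so fewer false matches),
# at the cost of missing genuine matches whose scores fall below the cut-off.
```

The policy question is therefore where to place the cut-off so that false matches, and any ethnic disparity among them, are acceptably rare without losing too many genuine matches.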
The NPL’s test involved 400 volunteers embedded in an estimated crowd of 130,000. It showed that at a setting of 0.64 or higher, there was no ethnic disparity in accuracy. At thresholds of 0.62 and 0.60, ethnic bias was statistically insignificant, while at 0.58 and 0.56 the system disproportionately misidentified Black faces.
Pete Fussey, a recognised expert in this field, contends that the sample was too small to support such a conclusion and notes that “false matches were not actually assessed at the settings where ethnic bias was non-existent”. This argument essentially rests on the view that, for a technology that scans millions of faces, testing it on the faces of 400 volunteers is unlikely to generate a sufficient evidence base. In their book, Facial Recognition Surveillance: Policing in the Age of Artificial Intelligence (p. 58), Pete Fussey and Daragh Murray argue:
“Also of note are claims that no demographic bias is discernible above the 0.64 threshold. This is because no false positives occurred at this level. Put another way, no bias was observed because the system was not adequately tested in this range. Notable here is that such arguments rest less on how FRT operates and more on how statistics work. A suitable analogy would be the claim that 90 per cent of car accidents occur within a quarter-mile of home. This is less because such locales are inherently hazardous and more because almost all car journeys happen within a quarter-mile of home. Fewer journeys occur 600 miles away so accidents in that category are rarer.”
However, a counter-argument to the above is that the test in question did show a steady decline in ethnic disparity as the face match threshold rose: false matches for Black versus White volunteers fell from 22 vs. 3 at 0.56, to 11 vs. 0 at 0.58, to 4 vs. 0 at 0.60, and to 0 vs. 0 at 0.64. Despite the small sample, the consistent decline implies that the face match threshold clearly drives accuracy. The insistence on testing the technology until bias is completely eliminated is also unrealistic. So, if no misidentification was recorded at 0.64 and ethnic bias declined steadily up to that point, it is not unreasonable to conclude that the technology works optimally at that setting.
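To make the trend the counter-argument relies on easy to see, the sketch below simply tabulates the false-match counts reported above and the Black-to-White gap at each threshold; it adds no data beyond the figures already cited.

```python
# The NPL false-match counts cited above (Black vs. White volunteers) at each
# face match threshold, tabulated to show how the gap narrows as the threshold rises.

npl_false_matches = {  # threshold: (Black, White) false matches reported in the NPL test
    0.56: (22, 3),
    0.58: (11, 0),
    0.60: (4, 0),
    0.64: (0, 0),
}

for threshold, (black, white) in sorted(npl_false_matches.items()):
    print(f"threshold {threshold:.2f}: Black={black:2d}  White={white:2d}  gap={black - white:2d}")

# The gap shrinks monotonically and reaches zero at the 0.64 operational setting,
# which is the pattern the counter-argument treats as evidence of optimal configuration.
```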
The NPL’s test is consistent with the EU AI Act’s risk management requirements, which set strict standards for high-risk AI systems. In particular, Article 9(3) of the Act requires that:
“The risk management measures referred to in paragraph 2, point (d),
shall be such that the relevant residual risk associated with each hazard, as
well as the overall residual risk of the high-risk AI systems is judged to be
acceptable.”
This means that the expectation for risks, including the risk of ethnic bias, is not complete elimination but mitigation to the point where some acceptable (tolerable) level of risk may remain. By this standard, the NPL’s testing would likely be considered robust, since at 0.64 ethnic bias would reasonably be seen as low enough to be acceptable in view of the technology’s benefits.
Subsequent Metropolitan Police deployment data also point in this direction. Between January and August 2025, the Metropolitan Police misidentified only eight people using LFR, none of which led to an arrest. While the ethnic breakdown of these false matches has not been studied, the small number makes any ethnic disparity likely negligible.
Currently, there is one pending legal action against the Metropolitan Police, brought by Big Brother Watch, concerning prolonged police engagement with a mistakenly identified individual. This was not officially documented as a false arrest, and the official record therefore remains that there has not been a single false arrest in the UK following misidentification by LFR.
The above highlights that statistics alone do not capture the complex ways LFR really affects people. Human oversight, responsible police judgement, and procedural safeguards play a crucial role, and the current debate discounts these components.
Policing by consent isn’t policing by everyone’s consent
A common misconception is that overt (transparent) LFR surveillance undermines policing by consent because people don’t meaningfully consent to being surveilled.
Pete Fussey and Daragh Murray argue, for instance, that the signage placed by the Metropolitan Police at deployment sites to inform the public of LFR operations was insufficient to obtain informed consent, as it contained inadequate information, lacked visibility and offered no opportunity to refuse consent.
Echoing this, the former director of Big Brother Watch, Silkie Carlo, stated in an interview: “there’s no meaningful consent process whatsoever. You certainly can’t withdraw consent.”
I think this view misrepresents both the law and the idea of policing by consent. The relevant UK Surveillance Camera Code of Practice requires overt surveillance to be based on consent, specifically clarifying that consent in this context should be regarded as “analogous to policing by consent”.
Policing by consent traces back to the nine principles of Robert Peel, the UK Home Secretary, set out in the general instructions issued to new officers in 1829. Essentially, it requires broad public consent for the police to serve the community, with the legitimacy of policing power deriving from public support. It does not require individual members of the public to consent to specific policing operations.
Similarly, surveillance by consent requires the community broadly to agree to visible camera systems as a legitimate tool for public safety; it does not require that everyone agrees to the surveillance. Besides facilitating legitimacy, transparent police surveillance ensures that those aggrieved by potentially unlawful surveillance can take legal action. The Surveillance Camera Code of Practice itself, which is the basis for transparency in overt surveillance, confirms this point, not only specifying that consent in this context is equivalent to policing by consent but also indicating why consent is required. Section 3.3.2 states that “Surveillance by consent is dependent upon transparency and accountability on the part of a system operator. The provision of information is the first step in transparency and is also a key mechanism of accountability.” Nowhere in the Code or any other legislation is it stated that surveillance by consent entitles individuals to consent, or withdraw consent, to specific operations on an individual level. Despite quoting the Code, including the relevant reference to policing by consent, in their recent book, Pete Fussey and Daragh Murray do not engage with the notion of policing by consent when discussing consent in the context of overt surveillance, engaging instead with the data protection law notion of consent. If the consent of everyone who could be captured by an LFR camera, or even an ordinary CCTV camera, had to be secured, most public-facing CCTV cameras would have to be removed.
It is therefore legally and conceptually unfounded to claim that overt LFR surveillance requires the consent of everyone who walks past an LFR camera. Nor could this realistically be achieved in practice.
Surveillance harms, but context matters
Opponents often warn that surveillance in public spaces can deter people from speaking freely, attending protests, or joining public events, a phenomenon called the ‘chilling effect’.
In the context of LFR, Daragh
Murray asserted
that it might discourage attendance at the 2025 Notting Hill Carnival, citing
uncertainty about how the technology is used and historical allegations of
institutional racism against the Metropolitan Police.
The 2024 Carnival saw two murders, multiple assaults, and stabbings, and yet an estimated two million people attended this year, undeterred by the potential for violence. Suggesting that surveillance would deter participation in such a cultural event is clearly implausible; at the very least, there is no evidence to back the claim.
The chilling effect of surveillance is a concern in the context of political protests, where authorities may target opposition groups and threaten civil liberties. It can also be argued that excessive policing of minority communities may create a chilling effect to some extent, though this is highly context-dependent. For example, the 2025 Carnival had 7,000 police officers supported by technology, and their presence was requested by the organisers and generally welcomed by the public. To suggest that adding LFR to this setting would have altered the behaviour of potential attendees is hardly credible. The blanket claim that surveillance suppresses civil rights and alters behaviour in all contexts is not supported by evidence.
The bottom line
Facial recognition will inevitably become a routine policing tool. Rather than pushing unrealistic proposals for bans or moratoriums, the regulatory debate should properly weigh the trade-offs between human rights and public safety to ensure the proportionate use of the technology. Questions about when LFR use is proportionate, along with other issues such as oversight, should be debated carefully. However, UK police use of LFR, and the debate surrounding it, highlights that policy and regulatory proposals can rest on shaky interpretations of data and of essential legal concepts.