A Landmark Victory Against Police Use of Facial Recognition

By Megan Goulding, 13th August 2021

The first legal challenge to police deployment of live facial recognition prevails on appeal, though broader consequences for the future of biometric surveillance remain uncertain.
Earlier this year, we explored a range of digital rights topics in video format as part of the Digital Freedom Fund’s Digital Rights Around the World Series. We are now revisiting some of those topics, with updates from discussions held during our 2021 Strategy Meeting and further developments since then.

Today, police forces and private companies across the globe employ facial recognition technology to surveil, target, and exploit one of our most defining features: our faces. These tools extract a biometric template of a person’s face from an image, then search for a “match” by comparing that template against the facial images on a “watchlist.”
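To make that matching step concrete, here is a minimal, purely illustrative sketch of how such a comparison can work. Every name, dimension, and threshold below is a hypothetical assumption for the sake of the example; real systems use proprietary models and pipelines.

```python
# Illustrative sketch of facial recognition matching: a face image is
# reduced to a numeric "template" (embedding), which is then compared
# against the templates of people on a watchlist.
# All names, dimensions, and thresholds are hypothetical.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face templates, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def find_match(template, watchlist, threshold=0.6):
    """Return the watchlist identity most similar to the probe template,
    or None if no similarity exceeds the threshold."""
    best_id, best_score = None, threshold
    for identity, stored in watchlist.items():
        score = cosine_similarity(template, stored)
        if score > best_score:
            best_id, best_score = identity, score
    return best_id

# Hypothetical usage with random 128-dimensional "templates".
rng = np.random.default_rng(0)
watchlist = {"person_a": rng.normal(size=128), "person_b": rng.normal(size=128)}
probe = watchlist["person_a"] + rng.normal(scale=0.1, size=128)  # a noisy re-scan
print(find_match(probe, watchlist))  # expected: "person_a"
```

Note that the threshold is a tunable trade-off: lowering it produces more “matches”, including more false ones, which is central to the concerns discussed below.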

Liberty, a civil liberties and human rights advocacy organisation, has been fighting for a ban on facial recognition technology. Liberty believes this technology is discriminatory, intrusive, and oppressive, repeatedly deployed by police as yet another tool to target already overpoliced and over-surveilled groups, such as communities of colour and low-income communities.

Not only does automated facial recognition (AFR) or live facial recognition (LFR) software collect sensitive biometric data from individuals without their knowledge or consent, but the software also frequently produces false positives due to proven racial disparities in recognition accuracy. Such inaccuracies are troubling in any setting, but they present a truly unacceptable danger in a law enforcement context. Though often held up as free from human bias and error, automated technologies are neither created nor operated in a vacuum. Allowing their continued use will likely entrench and exacerbate existing inequalities in the criminal justice system.

…the software also frequently produces false positives due to proven racial disparities in recognition accuracy
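To see why this matters at scale, consider a purely illustrative calculation. The false-positive rate below is a hypothetical figure chosen for the sake of the example, not a measured property of any deployed system:

```python
# Purely illustrative base-rate arithmetic with hypothetical numbers:
# even a seemingly low false-positive rate produces many wrongful
# "matches" once a system scans hundreds of thousands of faces.
scans = 500_000              # faces scanned, roughly the scale of SWP's trial
false_positive_rate = 0.001  # hypothetical 0.1% rate, for illustration only

expected_false_matches = scans * false_positive_rate
print(f"Expected false matches: {expected_false_matches:.0f}")  # 500
```

And because recognition accuracy is demonstrably worse for some demographic groups, those wrongful matches do not fall evenly across the population.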

In 2018, on behalf of civil rights campaigner Ed Bridges, Liberty filed the first-ever legal challenge to police use of this technology against South Wales Police (SWP). SWP’s software first scanned Bridges on a busy street in December 2017, and again when he attended a protest in March 2018. SWP had been testing AFR at large public events and as part of everyday policing since May 2017, using it on around 70 occasions to obtain sensitive biometric data from over 500,000 people.

Though the High Court initially ruled against the claim, it ultimately prevailed in the Court of Appeal, which made a number of key findings. Importantly, it recognised that the use of AFR technology infringes the right to privacy of all individuals who have their faces scanned. The Court also noted “fundamental deficiencies” in the laws regulating police use of the technology, concluding that the safeguards in place were insufficient to protect people’s rights: in particular, there were too few constraints on where the software could be used and on who could be put on a watchlist. The Court further held that SWP had failed to adequately investigate and prevent racial or gender bias in the software.

…it recognised that the use of AFR technology infringes the right to privacy of all individuals who have their faces scanned

SWP did not challenge the Court’s decision and have since halted their long-running trial of the technology. While this constitutes a major victory, the full extent of its impact beyond SWP remains to be seen. The Court’s ruling makes it possible to challenge future uses of automated facial recognition at the national level, but it leaves open the possibility that UK police forces could continue to deploy AFR by arguing that their safeguards are better defined than SWP’s were. Indeed, the Metropolitan Police say they will continue to use LFR in the wake of the SWP decision (although there have been no known deployments since the ruling), citing their use of the technology to deter and solve “serious crime”, and their efforts to inform the public about its use, as features that differentiate their programme.

In the UK, the Information Commissioner’s Office (ICO) is responsible for upholding information rights in the public interest. In June 2021, the ICO published an opinion setting out the “rules for engagement” on the use of live facial recognition in public spaces. It advises that parties intending to deploy live facial recognition must comply with the Data Protection Act 2018 and the UK General Data Protection Regulation, including the data protection principles set out in Article 5: lawfulness, fairness and transparency; purpose limitation; data minimisation; accuracy; storage limitation; integrity and confidentiality (security); and accountability.

Even so, this opinion does not support a total ban on public biometric surveillance. Like the European Commission in its pending draft of the Artificial Intelligence Act, the ICO still believes emergency situations could justify the immense risks entailed in deploying facial recognition. Meanwhile, the European Data Protection Board (EDPB), which oversees the consistent application of the GDPR throughout the EU, and the European Data Protection Supervisor (EDPS), which supervises and advises on personal data processing by EU institutions and bodies, have released a joint statement calling for a general ban on real-time facial recognition in public spaces.

…increasing use to monitor protests poses troubling implications for freedom of expression and freedom of assembly

Police forces, immigration agencies, public institutions, and private companies alike will no doubt continue trying to use this technology. As discussed during DFF’s annual strategy meeting this year, the COVID crisis has only intensified the expansion of facial recognition technology. For example, its increasing use to monitor protests poses troubling implications for freedom of expression and freedom of assembly. The introduction of facial recognition by gig work platforms like Uber to screen and surveil their workers has further marginalised those working in the gig economy. Ostensibly justified as pandemic-management measures, countless new uses may well continue long after the pandemic has passed.

Though various proposals have been made to regulate facial recognition, attempts to improve or limit the use of such technologies cannot fully curb their discriminatory and invasive impact. Liberty remains committed to achieving a total ban on the use of this authoritarian and discriminatory surveillance tool.

Megan Goulding is a lawyer at Liberty.