Tackling AI in the Time of COVID

By Jonathan McCully, 9th October 2020

How can we protect our digital rights amidst tech-solutionist approaches to combat the COVID-19 pandemic?

Thirty participants in DFF’s “AI in times of COVID” workshop examined this question from different angles over the past three days, joining an online meeting from around the world.

The workshop built on the “Litigating Algorithms in Europe” workshop DFF and the AI Now Institute organised in November 2019. One of the desired follow-ups from that gathering was an international meeting on combatting the impact of AI on human rights through strategic litigation.

As we all know, the world has changed quite a bit since then, altering not only the format of the meeting, but also its scope.

As we wrote earlier this year, the COVID-19 pandemic also created a crisis for digital rights. However, the transborder nature of the pandemic also created an opportunity: if we are seeing similar measures –– COVID apps, health trackers and other tech solutions –– being rolled out around the world, how can we make sure we push for standards that protect our human rights? Can we leverage positive results in one jurisdiction in another? And: what best practices and tactics from the pre-COVID world can we draw upon to successfully fight these battles?

Trends in Tech-solutionism

We started the first day of the workshop by listening and learning from participants across the globe who have been monitoring the “tech-solutionist” measures that have been rolled out during the pandemic both in and outside of Europe. They explained the issues they had been spotting and which litigation actions they were considering taking in response.

Reflecting on these sessions at the outset of day 2, participants noted that “tech inevitability” was a common thread across different jurisdictions, and asked how we could counter the narrative of governments framing technology as the main way to solve the health crisis.

Pushing for Transparency

One of the key issues in challenging automated systems is knowing that they are being used in the first place. There is often a lack of transparency about what type of algorithmic decision-making is being deployed, as well as when and how.

One way to push for greater transparency is by using freedom of information requests. After hearing experiences from different participants in using this tool, we co-created a checklist of what litigators could be asking for.

After that (as well as a joint coffee break to top up the levels of caffeine), participants looked at what could be learned from successful cases in a non-COVID context. Lawyers who had worked on the Dutch SyRI case and the challenge to police use of facial recognition in the UK, as well as lawyers who had taken on a Medicaid algorithm that reduced care for disabled patients and the COMPAS risk assessment system in the US, shared their stories, tactics, and lessons learned.

Best Practice Brainstorming

Day 3 opened with a round of participant-proposed best practice brainstorming sessions, which looked at best practices for communicating the impact of COVID technology to sceptical judges, strategies to safeguard against repurposing COVID tech after the pandemic, ways to address tech inequalities exacerbated by the pandemic, and approaches to tackling machine and human bias in automated systems. We then revisited the potential cases we had started the workshop with on day 1, to see how the strategies and tactics discussed in between could be leveraged.

It is unfortunate that the group was not able to meet in person during these exceptionally challenging times, but it was truly uplifting and inspiring to virtually connect, discuss and brainstorm with an amazing group working hard to safeguard our digital rights during the current health crisis. There was plenty of food for thought, as well as ideas for further collaboration and information sharing, and we look forward to seeing this invaluable work progress.

Graphic Recordings by Blanche Illustrates and TEMJAM

Join Our Virtual Litigation Retreat!

By Jonathan McCully, 24th September 2020

We are very excited to be bringing back DFF’s litigation retreat in an online format between 12 and 18 November 2020 (excluding the weekend), and we welcome applications from litigators across Europe working on digital rights issues to join us.

In 2018, we ran litigation retreats in Montenegro and Belgrade. You can watch a video of our first retreat here. These were great opportunities for digital rights litigators to spend time together, sharpen their litigation skills, and share knowledge, experiences and tactics that others could use in their own litigation.

During these retreats, a variety of cases were workshopped, from challenges to police use of facial recognition technology to litigation against Facebook’s censorship of user content.

At our retreats, we aim to create a co-learning environment where participants can concentrate their time, energy and attention on further developing and working through challenges in relation to an ongoing piece of digital rights litigation.

Through creating this space, and holding focused discussions, we hope that participants can come away from these retreats with an enhanced litigation strategy and plan for their cases. As a participant from a previous retreat said: “it is a unique opportunity to stop and reflect on what can be done better when you’re trying to achieve change using strategic litigation.”

We have remodelled our previous retreats in order to hold this one virtually. It will include strategic litigation training and workshopping components in a collaborative online environment. The four-day programme will comprise sessions on building skills for developing a litigation strategy, case planning, and advocacy, and will include dedicated sessions on thematic areas relevant to litigating digital rights cases in Europe.

Built into this agenda will be dedicated group work, where participants will work on strategic litigation plans for case studies that they have brought to the virtual retreat. We will also have some time to get to know each other with some informal online social activities.

As with past retreats, the aim of the four days is to give participants time to do deep work on their case studies. Even though the online sessions will be short and intensive, we encourage those attending to use the rest of the time across these four days to disconnect from other ongoing projects and think through aspects of their cases in light of conversations they will have with the group. We would therefore recommend that all those who attend treat the four days as a real retreat, away from other work pressures and external realities.

If you are interested in joining us, please get in touch with us and we will send you a short application form to fill out. The deadline for applications is 16 October 2020.

The main thing we ask for is that all participants come to the retreat with a digital rights case study – whether existing or hypothetical – that they would be interested in litigating. Ideally, the case study would fall within DFF’s thematic focus areas, which you can read more about here, but we are open to other strategic case ideas as well. 

We hope you will be able to join us!

Taking Police Tech to Court

By Jonathan McCully, 22nd July 2020

Police officers in riot gear look out across an intersection.

Last month saw some significant milestones in the fight against digital and data-driven tools used in policing.

At the end of June, a bill was introduced in the US Congress calling for a moratorium on facial recognition and biometric technology, while Santa Cruz and Boston joined the growing list of US cities to ban police and public authority use of facial recognition. Santa Cruz went even further, becoming the first US city to ban the use of predictive policing technology outright.

Last month, the Association for Computing Machinery (ACM) also added their voice, calling on lawmakers to suspend the use of facial recognition that can be prejudicial to “human and legal rights.”

In Europe, the European Data Protection Supervisor called for a suspension of automated recognition technologies, including those that measure “gait, fingerprints, DNA, voice, keystrokes and other biometric or behavioral signals.”

In the UK, hearings commenced in Liberty’s appeal in their legal challenge to the use of facial recognition technology by South Wales Police. 

These actions follow the move by IBM, Microsoft, and Amazon to restrict, to varying degrees, their facial recognition businesses. They also come off the back of years of tireless campaigning and research on facial recognition by activists and human rights organisations, including the groundbreaking work of the Algorithmic Justice League’s Joy Buolamwini, Timnit Gebru, and Inioluwa Deborah Raji.

These milestones have come at an incredibly significant time, with police brutality, abuse, and systemic racism dominating public discussion and protests across the globe.

The Black Lives Matter message has been resolute and urgent: all societies need to consider reforming, defunding, dismantling or abolishing their law enforcement institutions in their current form to achieve justice and foster healing. This includes ending the use of technologies that embed, entrench and exacerbate police discrimination and violate the rights of individuals against whom these technologies are targeted and used.

Taking Police Tech to Court

Strategic litigation can be an empowering tool to challenge police measures, policy and law, and it is a crucial mechanism by which law enforcement can be held to account for their actions.

The European Court of Human Rights (ECtHR), for example, has a substantial body of case law dealing with issues such as the failure to investigate racist or discriminatory police conduct, excessive use of force by police, interrogation techniques, and the policing of public assemblies.

More cases are also coming before the courts and regulatory bodies challenging the rise of Big Data policing. Even over the last year and a half, a number of significant decisions have been handed down in Europe that set important precedent around police use of new technologies and data-driven techniques.

There is much work still to be done, particularly in pushing for decisions that take a more intersectional approach to assessing these measures, but these cases offer some hope around the potential for strategic litigation to set necessary limits on police tech.

A group of police officers in riot gear make their way across a street.

Law Enforcement: Not Above the Law

Courts and independent oversight bodies perform an essential role in enforcing existing laws against the police. In some areas, strong regulations that restrict police use of data-driven techniques already exist.

One prominent example is the EU Police Directive (Directive (EU) 2016/680), which establishes data protection principles and standards for authorities responsible for preventing, investigating, detecting and prosecuting crimes. It has already been applied by courts to find certain police data sharing practices unlawful.

Earlier this year, the UK Supreme Court delivered its first judgment applying the national law implementing the Police Directive in a case concerning international law enforcement information sharing.

The case, Elgizouli v. Secretary of State for the Home Department, concerned an individual who had been accused of being a member of a terrorist group in Syria. This group had allegedly been involved in the murder of US and British citizens.

While investigating this group, the US had sought, through a “mutual legal assistance” request, evidence related to the individual that had been previously obtained by British authorities.

Without obtaining assurances from the US that this evidence would not be used in a prosecution that could lead to the imposition of the death penalty, the Home Secretary agreed to provide the information. One of the arguments raised in the case was whether this international transfer of data violated the Data Protection Act as it applied to law enforcement authorities. The UK Supreme Court found it did, reasoning that the Home Secretary had failed to address the specific requirements of the Data Protection Act. Instead, the information was transferred based on “political expediency” rather than consideration of what the legislation permitted or required of law enforcement.

Police services have also been held to account by Data Protection Authorities for their failure to comply with data protection law.

Last year, the UK’s Information Commissioner’s Office (ICO) issued an enforcement notice against the Metropolitan Police Service for their failure to respect the right of individuals to access information held about them by the police force. More specifically, the Metropolitan Police Service had been asked to address the significant backlog they had in subject access requests. In Germany, meanwhile, fines have been imposed against individual police officers for abusing their position by using data from their work to make private contact with individuals.

Procedural Safeguards

Technological developments have diversified the intrusive means by which the police can observe, surveil, and search.

In the US, groundbreaking cases have been brought before the Supreme Court clarifying that the definition of “search” under the two-century-old Fourth Amendment extends to thermal scanning, the tracking of an electronic beeper, and accessing historical cellphone location records.

In doing so, these cases have affirmed that a search warrant is required before these measures can be adopted – a procedural safeguard that can help protect individuals against abusive or arbitrary uses of these technologies.

Similar cases can be seen in European courts. A recent example is a case brought only a few months ago by La Quadrature du Net and the Human Rights League in France. The case concerned the use of drones by the Parisian police to enforce the country’s COVID-19 lockdown measures.

These drones were fitted with zoom-enabled cameras and loudspeakers. According to the police, however, they did not intend to use the zoom function, nor would they use the drones to identify specific individuals. Instead, the cameras would be flown at 80 to 100 metres altitude, use wide-angle lenses, and would not contain memory cards. The purpose of the drones was to identify “public gatherings” by transmitting images in real time to a command centre, from where police officers could be deployed to disperse the gatherings.

The Conseil d’État found that the use of these drones engaged data protection law. The drone operators had the ability to use the optical zoom and could fly the drone lower than the designated height, meaning that the drones were likely to be used to process identifying data of individuals.

The court agreed that the purpose behind the use of the drones, namely the protection of the public, was legitimate. However, under French law, the use of these drones required prior ministerial authorisation issued after a “reasoned and published opinion” of the French data protection authority, the CNIL.

Alternatively, in the absence of such authorisation, the drones would need to be fitted with technical measures that would make it impossible for police to identify individuals when using them. In short, the police could no longer continue to use the drones without first obtaining official authorisation.

A group of police officers in riot gear standing on a street

Databases and Watchlists

Police collection and retention of personal data, including in databases and watchlists, continues to be an issue that is frequently litigated before European courts.

In the last eighteen months, the ECtHR has handed down a number of important judgments that found violations of the right to privacy due to police data retention practices.

In Catt v. the United Kingdom, the ECtHR criticised the UK police’s retention of a peace campaigner’s data in an “extremism database.”

The data included information such as his name, address, date of birth and presence at demonstrations, including political and trade union events that revealed his political opinions.

The ECtHR found the retention unnecessary. More particularly, it found fault with the fact that the data could have been retained indefinitely and the retention lacked adequate safeguards.

Furthermore, the authorities failed to comply with their own definition of “extremism” by retaining the campaigner’s data in the database, and they failed to take into account the heightened protection that should be given to political opinions.

Earlier this year, in Gaughran v. the United Kingdom, the ECtHR found a violation of the right to privacy where, following a conviction for a minor offence, an individual had his DNA profile, fingerprints and custody photograph retained indefinitely in a local database to be used by police. The ECtHR found this data retention regime to be disproportionate in light of, among other things, its indiscriminate nature and the lack of any real possibility for the data retention to be reviewed.

This is a significant judgment from the ECtHR, as it appears to be the first time the court has referred to facial recognition technology in its substantive assessment of a case.

Although such technology was not used in relation to the applicant’s photograph, the ECtHR took into account the fact that the “police may also apply facial recognition and facial mapping techniques to the photograph” in finding an interference. This was a notable change of approach compared to the decisions handed down by the European Commission of Human Rights in the 1970s, 80s and 90s, which reasoned that retention of custody photographs would not amount to an interference with the right to privacy.

Also this year, in Trajkovski and Chipovski v. North Macedonia, the ECtHR condemned police retention of DNA profiles of convicted individuals. Applying its previous case law on police retention of DNA information, the ECtHR found the blanket and indiscriminate nature of the police powers to retain convicted individuals’ DNA profiles, coupled with the lack of sufficient safeguards, amounted to a violation of the right to privacy.

In another case from this year, the ECtHR recognised that police retention of palm prints raises the same human rights considerations as police retention of fingerprints. The retention of data in police databases is an area of ECtHR jurisprudence that is likely to grow over the coming years, with 11 pending cases before the court dealing with the retention of criminal records.

Room for Improvement

These cases are promising indications of the role courts can play in reining in Big Data policing, particularly where it concerns disproportionate intrusions upon the private lives of individuals.

Nonetheless, there have been relatively few cases in Europe that take a truly intersectional approach to challenging police tech.

For instance, the vast majority of court decisions concerning police measures that surveil and intrude upon individual privacy do not properly consider how those measures can often be targeted at, have a disproportionate impact on, or profile certain individuals based on their ethnicity, nationality, religion, language, race or socio-economic status. In 2019, for example, the High Court of England and Wales did not appear to fully and meaningfully engage with the discrimination arguments raised with regard to the South Wales Police’s deployment of facial recognition technology.

There is much more left to be said by the courts in relation to police tech. Now is the time for litigation to be taken and for the courts to properly take account of the entire spectrum of human rights concerns raised by Big Data policing, including those related to the discriminatory impact of new technologies.

Photos by ev on Unsplash