Testing a Framework for Setting GDPR Litigation Priorities

By Jonathan McCully, 18th October 2019

Data protection rights are engaged in nearly every aspect of our lives, from taking public transport to registering to vote, exercising, and shopping. Under the recent General Data Protection Regulation (GDPR), there are significantly more opportunities to take cases that vindicate data protection rights across these different domains.

However, with capacity and resources being limited, it can be difficult to know where to begin when it comes to enforcing this important piece of legislation. When trying to develop a strategy for GDPR litigation, a number of questions might arise: Where are the most pressing areas for GDPR work? What factors should we look at when setting litigation priorities? What metrics can we use, as a field, to pin-point shared litigation goals that we can collectively work towards?

At DFF’s meeting last month on unlocking the GDPR’s strategic litigation opportunities, participants discussed and critiqued a nascent framework that could help in answering some of these questions. The framework emerged from conversations DFF held leading up to the event with litigators seeking to leverage the GDPR in their digital rights litigation. It is intended to act as a tool to help establish shared GDPR litigation priorities.

Using a revised version of Maslow’s Hierarchy of Needs, the framework involves the application of a pyramid of “human needs” to the GDPR context to help establish what the most pressing issues might be when it comes to GDPR litigation. At the bottom of the pyramid are the higher-priority needs of “social connection and social collaboration” (e.g. teamwork, the feeling of belonging, and family and community ties), “physiological needs” (e.g. access to food, water, shelter, and warmth) and “safety needs” (e.g. job security, adequate resources, health, and safe environments).

A Draft Framework for GDPR Litigation Priorities Based on Maslow’s Hierarchy of Needs

What do these needs have to do with the GDPR? Some of these fundamental needs can be served through the protection and promotion of data protection rights alongside other areas of human rights enforcement. For instance, some unlawful data processing practices may be hindering, deterring or even preventing individuals from accessing social security provisions and similar public services that serve a fundamental physiological need. Other unlawful data processing practices may contribute towards workplace surveillance, discrimination, and unfair hiring and dismissal practices that threaten job security, access to resources and the forging of social connections. Visualising where certain GDPR issues fall across this spectrum of human needs can be a useful way of establishing which issues are of highest priority in our broader push to promote and protect the right to data protection.

During the meeting, participants reiterated that it would be useful to have a workable framework that could help establish shared goals and priorities for GDPR litigation. A number of participants observed that the draft framework was a useful way of linking GDPR work more concretely to other civil society and human rights movements. Others noted that it was an intuitive way of identifying the harms or damage that might result from data protection violations and working from there, rather than starting from a specific breach of a GDPR provision. They also noted that it was a useful exercise for taking a step back from specific GDPR non-compliance and thinking about what the real-world impact of our GDPR enforcement actions might be.

A number of constructive observations were also made about the framework and how it might be revised. For instance, using a hierarchy of needs in the human rights context gives rise to a number of concerns. Human rights should be protected equally and should not be viewed as a hierarchy of protections. Perhaps, therefore, the framework should be visualised as a circle of inter-connected human needs rather than a pyramid? The framework also fails to capture the systemic nature of some data protection harms, and how the widespread nature of some violations may render them more pressing and of higher priority. Finally, questions were raised about when and how it might be most useful to use a framework like this. Would it be most useful for mapping out organisational priorities for GDPR work? Or would it be better suited as a tool for identifying cases or communicating the impact of data protection issues to the general public?

At the event, a number of participants expressed an interest in discussing, critiquing and hacking this framework further to see if it can be developed into a practical tool for identifying and articulating strategic litigation goals. These conversations will be held in the coming weeks. We want to hear from others in the field who would like to join us in this exciting conversation, so please do get in touch if you are interested!

Transatlantic call series: machine learning and human rights

By Nani Jansen Reventlow, 15th October 2019

How can we minimise the negative impact of the use of algorithms and machine learning on our human rights? What cases are litigators in the US and Europe working on to take on this important issue?

Today’s fourth instalment in DFF’s transatlantic call series addressed machine learning and human rights. EFF kicked off the conversation by telling participants about their work on the use of machine learning in various aspects of the criminal justice system. This includes the use of algorithms for determining the risk of reoffending as well as determining the guilt of an alleged offender. EFF spoke about an ongoing case (California v. Johnson) in which they filed an amicus brief, arguing that criminal defendants should be able to scrutinise algorithms that have been used by prosecutors to secure a conviction.

A common thread in EFF’s work in this area is the need to ensure that government use of algorithms in decision-making processes is conducted in as fair and transparent a manner as possible. This is similar to the approach taken by PILP, who are challenging the use by government agencies in the Netherlands of “risk profiling” algorithms. These profiles are used to predict the likelihood of individuals committing fraud. The system, called “SyRI”, involves the pooling of citizens’ data collected by the state in a variety of different contexts, after which algorithms calculate whether certain citizens pose a “risk” of committing abuse, non-compliance or even fraud in the context of social security, tax payments, and labour law.

PILP shared how the SyRI system has a disproportionate impact on the poorest parts of the population. The UN Special Rapporteur on extreme poverty and human rights, Philip Alston, has submitted an amicus brief in the case, saying that the SyRI system poses “significant potential threats to human rights, in particular for the poorest in society”.

Following a further exchange on these cases and other work being done in this area, DFF also shared details of its ongoing projects on Artificial Intelligence and human rights. In November, DFF is organising a workshop together with the AI Now Institute to explore litigation opportunities to limit the negative impact on human rights posed by AI. Also, DFF’s Legal Adviser Jonathan McCully is working together with a technologist in the context of a Mozilla Fellowship to create two resources — one for lawyers and another for technologists, data scientists, and digital rights activists — that provide tools for working together effectively when taking cases against human rights violations caused by AI.

Our next transatlantic call will take place on 13 November and will focus on content moderation. It is not too late to join: get in touch to register your attendance!

Transatlantic call series: uniting digital rights litigators to challenge anti-encryption measures

By Jason Williams-Quarry, 11th October 2019

This week, we held our third transatlantic call, hosting a conversation between digital rights litigators in Europe and the US. Following a conversation about potential areas for transatlantic collaboration last week, this week’s call focused on encryption.

The American Civil Liberties Union (ACLU) and Electronic Frontier Foundation (EFF) told participants about their work challenging anti-encryption measures. The two organisations frequently collaborate on such challenges, invoking the Fifth Amendment to protect the rights of individuals and the security of their devices. The Fifth Amendment protects people under investigation from being compelled to unlock their devices for police or courts, as doing so may violate their right against self-incrimination.

EFF also shared experiences from their work on the Apple v. FBI case, which concerned a court order requiring Apple to engineer a “backdoor” into its iPhone operating system. This would have created a security flaw leaving iPhone users vulnerable to hacking. In this case, EFF filed an amicus brief arguing that the government’s demands violated Apple’s First Amendment rights. An interesting discussion point came from reflections that, when it comes to challenging anti-encryption measures, sometimes maintaining the status quo can be considered a win: the longer anti-encryption measures can be held back, the more encryption becomes the “norm”.

We also heard how US and European organisations have collaborated in the past, commonly through interventions by US organisations in European cases. EFF and the ACLU called for more European organisations to reach out about anti-encryption cases, so that cross-Atlantic collaboration can continue to grow. Other topics touched upon during the conversation included attempts by police in the UK to gain access to the phones of sexual abuse victims, and ongoing work in Russia, where the government is trying to force Telegram to provide access to users’ encryption keys.

At the end of the session, participants shared lessons learnt and made plans for follow-up and collaboration. This built further on some of the work done during the preceding transatlantic call, where participants focused on “heat mapping thematic areas for transnational collaboration”. That exercise demonstrated an interest in exploring joint work on, among other topics, Privacy Shield, facial recognition and the datafication of migrants and refugees.

What we continue to appreciate about this transatlantic call series is how the participant-led discussions during each call build the foundation for the next. We look forward to seeing this process continue and are exploring ways to organise an in-person meeting to do more in-depth work on issues of mutual interest.

Our next and penultimate call in the series will take place on Tuesday 15 October, focusing on “machine learning and human rights”. If you have not yet registered to join this call and would like to, please let us know, so we can add you to the list!