The Practical Challenges of Representing Individuals under the GDPR

By Lori Roussey, 2nd December 2019

The General Data Protection Regulation (GDPR) brought unprecedented opportunities for civil society’s strategic litigators. Yet, as we discussed during the DFF workshop on unlocking the litigation opportunities of the GDPR, how to infuse it into our litigation practices remains uncharted territory. This is particularly true when it comes to how we can best represent individuals before the courts.

First and foremost, the law. Article 80 of the GDPR is twofold. Article 80(1) makes it possible for a not-for-profit to be mandated by a data subject to lodge a complaint with a supervisory authority, or to exercise the right to a judicial remedy on their behalf. It may even enable the not-for-profit to seek compensation on behalf of the data subject, depending on Member State law. Article 80(2) leaves a discretionary power to Member States to allow certain organisations to lodge complaints and seek judicial remedies independently of individuals when they observe that those individuals’ rights have been infringed. To avoid confusion with North American class actions: neither paragraph sets up a collective compensation mechanism.

The mandate of Article 80(1) seems to hold the promise of empowering and supporting data subjects to bring more claims through representative organisations. Yet, in practice, building and maintaining a collaboration with an individual over several years requires a wealth of resources that not-for-profits, particularly NGOs, rarely have. More importantly, a mandate to represent vulnerable data subjects (such as children, elderly people, or people with a deteriorating condition) would still require numerous legal safeguards to be valid.

Furthermore, members of the public usually engage when they have an issue with a service, not because of long-term stakes or because they view their own circumstances as a “test case”. Moreover, if an individual’s specific case is resolved, or they become uneasy facing off against a powerful multinational company, they might decide to step away from the case or from the cause altogether. Due to these factors, employees of not-for-profits tend to put themselves forward as data subjects in strategic cases, but this can put a considerable strain on their relationship with their employer.

The ramifications of the hurdles touched upon above may lead one to conclude that Article 80(2) is the best route for strategic litigation. Yet, participants at DFF’s meeting observed that, given its discretionary terms, implementation is fragmented throughout the European Union. This has stripped entire nations of a robust opportunity for securing data subjects’ redress.

Nonetheless, the GDPR does not exist in a vacuum. On 11 April 2018, the European Commission (EC) published its New Deal for Consumers proposal package, comprising a proposal for a directive on representative actions for the protection of the collective interests of consumers. Interestingly, amendments adopted by the Parliament on 26 March 2019 recommend that the Directive should apply when more than two data subjects’ rights are infringed, as long as they may be qualified as consumers. The Council is now analysing the proposal, and the trilogue will take place next year. Civil society will have to make sure the GDPR remains expressly provided for in the text, so that we can ally with consumer organisations or request that independent public bodies bring representative actions to protect digital rights. NGOs may even want to push for a broadening of the criteria for who can bring such claims, as data subjects acting as consumers may be better off if more public interest organisations can bring a claim to defend their rights.

In the meantime, the lesson from GDPR litigation so far is clear: in its formal two-year report of May 2020 on the implementation of the GDPR, the European Commission should stress that the current implementation of Article 80 requires improved harmonisation in order to meaningfully foster and defend data subjects’ rights across jurisdictions.

About the author: Lori Roussey is a lawyer specialising in European data protection law in the context of intelligence and humanitarian data processing activities.

Litigating algorithms: taking the conversation from North America to Europe and beyond

By Nani Jansen Reventlow, 18th November 2019

Who is doing litigation work on the human rights impact of algorithmic decision making? What lessons can be learned from these cases, and are there any best practices we can distil from them? What are the cases we would like to see brought in 2020?

These questions drove the agenda of a two-day meeting with litigators from the US, Europe and Latin America, organised by the Digital Freedom Fund in partnership with the AI Now Institute, and kindly hosted by Mozilla at their Berlin office. The meeting brought together litigators with experience in challenging algorithmic decision making through the courts, litigators with an interest in undertaking this work, as well as participants with a background in policy work in the field of AI.

The agenda was designed by taking AI Now’s 2018 and 2019 “Litigating Algorithms” meetings as a starting point. These meetings facilitated knowledge sharing between litigators, academics and technologists on how to meet the challenges involved in litigating against automated systems that are applied across a variety of different contexts, from health and employment to the criminal justice system. The agenda for the meeting in Berlin combined this approach with the collaborative working methods that are central to all DFF litigation meetings. This approach allowed participants to not only get to know each other’s work, but also zoom in on transferable lessons learned that can help build stronger cases going forward. While those present worked on three different continents, within distinct national legal frameworks, and on very different cases –– ranging from a challenge to a misleading display of search results on Google to defending individuals excluded from home care due to a new algorithmic assessment by Medicaid –– there was a shared sense that there were more similarities than differences between experiences. This applied both when looking at lessons learned and best practices, and when considering the obstacles for bringing further cases.

Dedicated time was spent on critical reflection on ongoing litigation, leading to a number of insights into how those cases could be further strengthened. A “needs-offers” exercise resulted in a rich array of knowledge and information participants were willing to share with others in the room, as well as a frank listing of items with which they could use further support.

As discussed in our recent blog post, the gathering was organised as part of DFF’s efforts to lower the threshold for strategic litigation on AI and human rights, and fits into a broader framework of activities in this area. A joint AI Now – DFF publication, mapping the cases discussed at the workshop, is forthcoming in the Spring. Looking further ahead, DFF is working on organising an international meeting on litigation on AI and human rights to facilitate knowledge sharing and the brainstorming of new opportunities across the globe on this crucial digital rights issue. We will be consulting with our network on how to best shape this event –– if you have any thoughts or suggestions you would like to share with us, please do get in touch!

Lowering the threshold for strategic litigation on AI and human rights

By Nani Jansen Reventlow, 8th November 2019

DFF commenced its operations in 2017 by starting a strategy process. This process, which is ongoing to this day, consisted of a consultation of all key stakeholders working on digital rights in Europe, asking them what their priorities were and how DFF could best support them. Following DFF’s first strategy meeting in February 2018, this process led to the formulation of three thematic focus areas for DFF’s work. First, advancing individuals’ ability to exercise their right to privacy; second, protecting and promoting the free flow of information online; and third, ensuring accountability, transparency and the adherence to human rights standards in the use and design of technology.

The debate about “AI” – also framed as one on machine learning or automated decision making – has become a hot topic in recent years. As foreshadowed by developments in 2018, AI and human rights was a much-debated topic at our 2019 strategy meeting, which brought together 48 organisations from across Europe working on digital rights.

As had been the case during the Virtual Strategy Design Jam we hosted on the use of AI in law enforcement in the runup to the strategy meeting, the topic was actively debated, but we did not see a corresponding uptake of the issue when it came to litigation. In other words: there was a clear sense of urgency to address the potential negative impact the use of AI could have on human rights, and an interest in pursuing litigation to address this, but not many cases were brought. Following the discussions at our strategy meeting closely and listening to other input from the field, it became clear that many litigators had difficulty identifying the issues on which to litigate and suitable entry points to do so.

Meanwhile, the development and use of technology in all aspects of our lives is increasing, making the need to confront and challenge any negative human rights impacts that result from it ever more urgent. Strategic litigation can be an important instrument in this fight. In light of this, DFF is seeking to lower the threshold for litigators to step into this space and help safeguard our human rights when AI is at play.

This November, DFF is hosting a litigators’ meeting together with the AI Now Institute, building on their “Litigating Algorithms” series (see here and here for the meeting reports) which brought together litigators, academics and technologists to share experiences in litigation on the impact of the use of AI across a variety of different contexts. The meeting, which will be hosted at Mozilla’s Berlin office, will bring together US and European litigators with experience in challenging algorithmic decision making through the courts as well as those with an interest in doing so. Besides sharing best practices, participants will brainstorm new case ideas and identify concrete plans for next steps.

In October, DFF’s Legal Adviser joined forces with technologist Aurum Linh, a Mozilla Fellow, to work on a set of guides to help build stronger litigation on AI and human rights that can help set precedents that ensure greater transparency, accountability and adherence to human rights standards in the design and use of AI. The first guide will be aimed at demystifying litigation for digital rights activists, technologists and data scientists, who will often be at the forefront of identifying the situations and fact patterns that are ripe for AI-related human rights challenges through litigation. The second guide will be aimed at lawyers working across different practice areas – such as criminal, employment, immigration or election law – who could have clients whose rights have been violated by the development and use of AI. The guide will provide these legal practitioners with the minimum viable information necessary to effectively identify and pursue legal claims challenging human rights violations caused by AI. The guides will be developed through regular consultation with the intended audiences and organisations already looking at litigating on AI to ensure the resources meet their needs. Watch this space for updates and learn how you can join the conversation.

Both strands of activities will build on each other and weave into DFF’s ongoing dialogue with the field. Following the November “European litigating algorithms” meeting, a report will be published in early Spring to share lessons learned with the field. In February 2020, a dedicated consultation will be held to test the concepts of the litigation guides. This all will feed into the publication of the guides in the second half of 2020 and an international meeting for litigators to share experiences on litigating on this topic across different regions.

… and, we hope, many exciting cases! We look forward to supporting some of the exciting work that will be developed over the coming months and are always happy to hear from you and discuss your ideas.