A project to demystify litigation and artificial intelligence

By Jonathan McCully, 6th December 2019

We are living in an age of artificial intelligence. We may not see anthropomorphic robots roaming our streets, but smart machines are increasingly making choices that can have a significant impact on our lives and our rights. Autonomous systems have been built to decide whether we should be hired for a job, whether we are entitled to social welfare benefits, whether our online speech should be censored, and whether we should be subject to police intervention. These systems are becoming more ubiquitous, touching upon many aspects of society.

As is the case with the introduction of any new technology, the law is trying to catch up with these emerging developments in machine capability. Since the law is an indispensable tool for ensuring that our rights are protected and vindicated, it is vitally important that it is not left behind – unenforced and ineffective. It will be before the courts that old and new laws will be applied, disputed and litigated for the purpose of safeguarding and guaranteeing our rights in the age of artificial intelligence.

I am working with Aurum Linh, a technologist and product developer, on an exciting new project that seeks to break down knowledge barriers between litigators and technologists so they can work more effectively together on impactful AI-related litigation. We would love for you to get involved too!

What is the project?

Aurum and I are part of an inspiring cohort of technologists, activists, lawyers, and scientists who are working on projects to promote trustworthy AI as part of a Mozilla Fellowship. Our particular project is aimed at producing a set of guides that can help build stronger litigation on AI and human rights.

The first guide will be aimed at individuals who have a technology background, such as technologists, engineers, developers, and computer scientists, and will seek to demystify litigation and how it can be used to protect our rights against harmful AI systems. It will also explain the important role that they, and their expertise, can play in strengthening litigation efforts.

The second guide will be aimed at lawyers and will seek to demystify the technology that may crop up in their cases. We hope that this guide will assist lawyers in effectively identifying and pursuing legal claims challenging human rights violations caused by AI. The guide will also provide further insights on how they can collaborate with technologists in their litigation.

Both guides will be developed through regular consultation with the intended audiences to ensure the resources meet their needs. So, please do read on to find out how you can get involved.

Why do I think the project is important?

Human rights cases will increasingly have an AI element to them, and this project seeks to provide information and guidance to lawyers and technologists so that they can learn more about each other’s disciplines and expertise. We hope that this will strengthen AI-related litigation efforts by fostering greater collaboration and knowledge-sharing between these stakeholders. By bringing stronger cases, they can then help set precedents that ensure greater transparency, accountability and adherence to human rights standards in the design and use of AI.

I am approaching this project as someone with a legal/litigation background. Aurum, who is approaching our project from a technologist’s perspective, has written a fantastic blog on why they believe these guides are important.

I am passionate about using the courts and the law as a mechanism to improve the world in which we live. For centuries, litigation has been a valuable tool for securing changes in law, practice and public awareness on a variety of issues. There are many examples of ground-breaking court decisions in areas ranging from climate change, arbitrary detention, and the death row phenomenon, to gay marriage, abortion, and the right to food. With AI becoming ever more pervasive in our lives, I believe we will increasingly see AI-related rights issues being brought before our courts.

In fact, we can already see such cases before our courts. Last month, for instance, a court in Amsterdam overturned a disproportionate debt claim taken against an individual for €0.05. The Dutch court determined that the claim had been processed by an automated system, and it warned the company responsible that it should set up its system in such a way that some human control takes place before a debt summons is issued. Other “Robodebt” systems are currently being challenged before the courts in other jurisdictions as well. In the UK last month, permission to appeal was also granted in a judicial challenge to the use of facial recognition technology by a police force in Wales and, in the US, a number of recent cases challenging the use of automated systems by public bodies can be found in AI Now’s reports on “Litigating Algorithms” from 2018 and 2019.

Even cases that, on their face, do not strictly deal with a technological issue will need to be litigated and argued within the digital reality in which we live. The deployment of new technologies can mean that harmful societal issues are replicated, embedded or even exacerbated, and the arguments we make before the courts need to be informed by these very real threats. To use the recent example of a case before the US Supreme Court on the justiciability of partisan gerrymandering, Justice Kagan, in her dissenting opinion, warned about the risks to democracy posed by AI-driven gerrymandering. She noted that “big data and modern technology… make today’s gerrymandering altogether different from the crude line drawing of the past.”

How can you get involved?

We want to make sure the guides are as useful and beneficial as possible for the communities that they seek to serve. This is where you come in. We want to hear from lawyers, technologists, software engineers, data scientists, computer scientists and digital rights activists about what they would like to see included in these guides. We would also be delighted to hear from individuals who have experience working on AI-related litigation, and who have lessons or ideas to share with us.

You can get involved by completing a short survey or, if you prefer, by reaching out to me directly by email to have a chat about the project. We look forward to hearing from you!

A case for knowledge-sharing between technologists and digital rights litigators

By Aurum Linh, 6th December 2019

“Almost no technology has gone so entirely unregulated, for so long, as digital technology.”

Microsoft President, Brad Smith

Big technology companies have amassed power of historic proportions. They are in an unprecedented position: able to surveil, prioritize, and interfere with the transmission of information to over two billion users across multiple nations. This architecture of surveillance has no basis for comparison in human history.

Domestic regulation has struggled to keep pace with the unprecedented, rapid growth of digital platforms, which operate across borders and have established power on a global scale. Regulatory efforts by data protection, competition and tax authorities worldwide have largely failed to curb the underlying drivers of the surveillance-based business model.

It is, therefore, vital that litigators and technologists work together to strategize on how the law can be most effectively harnessed to dismantle these drivers and hold those applying harmful tech to account. As a Mozilla Fellow embedded with the Digital Freedom Fund, I am working on a project that I hope can help break down knowledge barriers between litigators and technologists. Read on to find out how you can get involved too.

Regulating Big Tech

The surveillance-based business models of digital platforms have embedded knowledge asymmetries into the structure of how their products operate. These gaps exist between technology companies and their users, as well as the governments that are supposed to be regulating them. Shoshana Zuboff illustrates this in The Age of Surveillance Capitalism, where she observes that “private surveillance capital has institutionalized asymmetries of knowledge unlike anything ever seen in human history. They know everything about us; we know almost nothing about them.”

Zuboff makes the case that the combination of state surveillance and its capitalist counterpart means that digital technology is separating citizens in all societies into two groups: the watchers (invisible, unknown and unaccountable) and the watched. The watchers’ technologies are opaque by design and foster user ignorance. This has debilitating consequences for democracy, as asymmetries of knowledge translate into asymmetries of power. Whereas most democratic societies have at least some degree of oversight of state surveillance, we currently have almost no regulatory oversight of its privatised counterpart.

Google developed its surveillance-based, rights-violating business model in essentially law-free territory. It digitised and stored every book ever printed, regardless of copyright issues, and it photographed every street and house on the planet without asking anyone’s permission. Amnesty International’s recent report, Surveillance Giants, highlights Google and Facebook’s track record of misleading consumers about their privacy, data collection, and advertising targeting practices. During the development of Google Street View in 2010, for example, Google’s photography cars secretly captured private email messages and passwords from unsecured wireless networks. Facebook has acknowledged performing behavioural experiments on groups of people, lifting (or depressing) users’ moods by showing them different posts in their feeds. Facebook has also acknowledged that it knew about the data abuses of the political micro-targeting firm Cambridge Analytica months before the scandal broke. More recently, in early 2019, journalists discovered that Google’s Nest ‘smart home’ devices contained a microphone that the company had failed to disclose to the public.

We can see these asymmetries mirrored in the public sector too. ProPublica’s examination of the COMPAS algorithm is a clear example of biased algorithms being used by the state to make life-changing decisions about people. The algorithm is increasingly used nationwide in pre-trial and sentencing decisions, the so-called “front-end” of the criminal justice system, and has been found to be significantly biased against Black people.

A high-profile example is the case of Eric Loomis, whom a Wisconsin court sentenced to the maximum penalty on two counts after reviewing predictions derived from the COMPAS risk-assessment algorithm, despite his claim that the use of a proprietary predictive risk assessment in sentencing violated his due process rights. The Wisconsin Supreme Court dismissed the due process claims, effectively affirming the use of predictive assessments in sentencing decisions. Justice Shirley S. Abrahamson noted, “this court’s lack of understanding of COMPAS was a significant problem in the instant case. At oral argument, the court repeatedly questioned both the State’s and the defendant’s counsel about how COMPAS works. Few answers were available.”

In How to Argue with an Algorithm: Lessons from the COMPAS ProPublica Debate, Anne L. Washington notes, “[b]y ignoring the computational procedures that processed the input data, the court dismissed an essential aspect of how algorithms function and overlooked the possibility that accurate data could produce an inaccurate prediction. While concerns about data quality are necessary, they are not sufficient to challenge, defend, nor improve the results of predictive algorithms. How algorithms calculate data is equally worthy of scrutiny as the quality of the data themselves. The arguments in Loomis revealed a need for the legal scholars to be better connected to the cutting-edge reasoning used by data science practitioners.”
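
To make this point concrete, here is a minimal sketch in Python using entirely invented numbers (not the real COMPAS data or methodology): two groups can be scored with the same overall accuracy while the model’s mistakes fall very unevenly between them, which is the kind of disparity at the heart of the ProPublica analysis.

```python
# A minimal, hypothetical illustration (invented numbers, not the real COMPAS data):
# 1 = predicted "high risk" / did reoffend, 0 = predicted "low risk" / did not.

def false_positive_rate(predictions, outcomes):
    """Share of people who did NOT reoffend but were still labelled high risk."""
    preds_for_non_reoffenders = [p for p, o in zip(predictions, outcomes) if o == 0]
    return sum(preds_for_non_reoffenders) / len(preds_for_non_reoffenders)

def accuracy(predictions, outcomes):
    """Share of predictions that matched the actual outcome."""
    return sum(p == o for p, o in zip(predictions, outcomes)) / len(predictions)

# Two groups scored with identical overall accuracy...
group_a_pred, group_a_out = [1, 1, 0, 1, 0, 1, 0, 0], [1, 0, 0, 1, 0, 0, 0, 1]
group_b_pred, group_b_out = [1, 0, 0, 0, 1, 0, 0, 0], [1, 0, 0, 1, 0, 0, 0, 1]

print(accuracy(group_a_pred, group_a_out),            # 0.625
      accuracy(group_b_pred, group_b_out))            # 0.625

# ...but very different false positive rates: the mistakes fall twice as
# often on group A's non-reoffenders as on group B's.
print(false_positive_rate(group_a_pred, group_a_out),  # 0.4
      false_positive_rate(group_b_pred, group_b_out))  # 0.2
```

Nothing in this toy example proves bias in any real system, but it shows why “the input data were accurate” and “the model is accurate overall” are not, on their own, answers to the question the Loomis court could not get answered: how the algorithm turns data into predictions, and for whom it errs.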

To meaningfully change the context that allows surveillance-based, human rights violating business models to thrive in the tech sector, lawmakers need to understand in depth which legal requirements would change its fundamental structure for the better. This is rendered near impossible because the tech ecosystem is designed with multiple layers of opacity. One key asymmetry lies between lawmakers and litigators on the one hand, and the people building the technologies they are attempting to regulate on the other. The technology industry has become so specialized in its practice, yet so broad in its application, that these knowledge gaps result in regulation that is surface-level and ineffective when measured by its impact on the underlying system that allowed the violations in the first place. When governments, courts, or regulators do discipline these companies, the consequences do not actually hurt them: they do not affect the circumstances that caused the violation, nor do they fundamentally change the companies’ structure of operations or influence. An example of this can be found in Amnesty’s recent report:

“In June 2019, the US Federal Trade Commission (FTC) levied a record $5bn penalty against Facebook and imposed a range of new privacy requirements on the company, following an investigation in the wake of the Cambridge Analytica scandal. Although the fine is the largest recorded privacy enforcement action in history, it is still relatively insignificant in comparison to the company’s annual turnover and profits – illustrated by the fact that after the fine was announced, Facebook’s share price went up. More importantly, the settlement did not challenge the underlying model of ubiquitous surveillance and behavioural profiling and targeting. As FTC Commissioner Rohit Chopra stated in a dissenting opinion ‘The settlement imposes no meaningful changes to the company’s structure or financial incentives, which led to these violations. Nor does it include any restrictions on the company’s mass surveillance or advertising tactics.’”

This architecture of surveillance spans entire continents and touches at least a third of the world’s population, yet it has gone relatively unregulated for 20 years. Lawmakers struggle to grasp how its technology works and which problems need to be addressed. That inaction reflects not a lack of will so much as a lack of knowledge-sharing between bodies of expertise. There is an urgent need for technologists who can break down the knowledge barriers that keep meaningful legal action from being taken.

A Project to Facilitate Knowledge-Sharing & Knowledge-Building

In partnership with Mozilla and the Digital Freedom Fund, we are building a network of technologists, data scientists, lawyers, litigators, and digital rights activists. Jonathan is a lawyer based in London who is collaborating with me on this project; you can read his perspective here. With the help of this network, we would like to create two guides that build the knowledge and expertise of litigators and technologists in each other’s disciplines, so they can collaborate and coordinate effectively on cases that seek to protect and promote our human rights while holding the “watchers” to account.

If you are a digital rights activist, technologist, or lawyer, you can contribute to this project by taking this survey and getting in touch with us. Otherwise, you can help by sharing these blog posts with your networks. We look forward to hearing from you!

The Practical Challenges of Representing Individuals under the GDPR

By Lori Roussey, 2nd December 2019

The General Data Protection Regulation (GDPR) brought unprecedented opportunities for civil society’s strategic litigators. Yet, as we discussed during the DFF workshop on unlocking the litigation opportunities of the GDPR, how to infuse it in our litigation practices remains uncharted territory. This is particularly true when it comes to how we can best represent individuals before the courts.

First and foremost, the law. Article 80 of the GDPR is twofold. Article 80(1) makes it possible for a not-for-profit to be mandated by a data subject to lodge a complaint with a supervisory authority, or to exercise the right to a judicial remedy on their behalf. It may even enable the not-for-profit to seek compensation on behalf of the data subject, depending on Member State law. Article 80(2) leaves a discretionary power to Member States to give certain organisations the mandate to lodge complaints and seek judicial remedies independently of individuals when they observe that those individuals’ rights have been infringed. To avoid confusion with North American law: neither of these paragraphs sets up a collective compensation mechanism.

The mandate under Article 80(1) seems to hold the promise of empowering and supporting data subjects to bring more claims through representative organisations. Yet, in practice, building and maintaining a collaboration with an individual over several years requires a wealth of resources that not-for-profits, particularly NGOs, rarely have. More importantly, a mandate to represent vulnerable data subjects (such as children, elderly people, or people with diminished capacity) would still require numerous legal safeguards to be valid.

Furthermore, members of the public usually engage when they have an issue with a service, not because of long-term stakes or because they view their own circumstances as a “test case”. If an individual’s specific case is resolved, or they become uneasy facing off against a powerful multinational company, they might decide to step away from the case or from the cause altogether. Because of these factors, employees of not-for-profits tend to put themselves forward as data subjects in strategic cases, but this can put a considerable strain on their relationship with their employer.

The ramifications of the hurdles touched upon above may lead one to conclude that Article 80(2) is the best route for strategic litigation. Yet participants at DFF’s meeting observed that, given its discretionary terms, implementation is fragmented throughout the European Union. This has deprived entire nations of a robust avenue for securing redress for data subjects.

Nonetheless, the GDPR does not exist in a vacuum. On 11 April 2018, the European Commission (EC) published its New Deal for Consumers proposal package, comprising a proposal for a directive on representative actions for the protection of the collective interests of consumers. Interestingly, amendments by the Parliament on 26 March 2019 recommend that the Directive should apply when more than two data subjects’ rights are infringed, as long as they qualify as consumers. The Council is now analysing the proposal, and the trilogue will take place next year. Civil society will have to make sure the GDPR remains expressly provided for in the text, so that we can ally with consumer organisations or request that independent public bodies bring representative actions to protect digital rights. NGOs may even want to push for a broadening of the criteria for who can bring such a claim, as data subjects acting as consumers may be better off if more public interest organisations can bring claims to defend their rights.

In the meantime, the lesson from GDPR litigation so far is that the European Commission, in its formal two-year report on the implementation of the GDPR due in May 2020, should stress that the current implementation of Article 80 requires improved harmonisation if it is to meaningfully foster and defend data subjects’ rights across jurisdictions.

About the author: Lori Roussey is a lawyer specialised in European data protection law in the context of intelligence and humanitarian data processing activities.