Making Accountability Real: Strategic Litigation
On 30 January 2020, DFF Director Nani Jansen Reventlow delivered the lecture “Making Accountability Real: Strategic Litigation” at the ACM FAT* Conference, a computer science conference with a cross-disciplinary focus on fairness, accountability, and transparency in socio-technical systems. This is a transcript of that talk.
I. Creating the society we want (by setting norms and making them a reality)
How do we create the society that we want to live in?
A central feature of any society is how its members engage with each other. And as anyone who has found themselves in a group of people in a public space knows, human interactions don’t always go entirely smoothly all by themselves. So: we need to set parameters. One of the key foundations for regulating the way we interact with each other is setting norms. Norms can take many shapes and forms, but in most societies, this will include adopting legislation.
But this is not enough. Creating norms or adopting laws is not sufficient: you can have the most beautiful and well-articulated set of norms, the most well-written laws, but if they are not implemented and enforced, their practical effect may be minimal. Quite literally, a paper tiger.
Of course, there are many more complications: while in a democracy we elect those who govern us, not all legislation will always reflect what we want our society to look like. And even if it does reflect the way we want the world to look, normative debates and legislation cannot always keep up with societal (and technological!) change. In other words: our set of norms and our laws can be out of sync with reality in many ways.
How norms are lived (or applied) in practice, therefore, is crucial.
How do norms link to practical reality
There are different ways in which we can ensure that legislation and other norms become a practical reality. And sometimes, we can even use those methods to ensure that some norms (or rules) we don’t agree with don’t become reality.
Civil society plays a crucial role in this process: through advocacy, campaigning and lobbying, it can push for the implementation of existing norms we want to see lived in practice, or push for change of those we don’t want.
One other avenue is using the courts as a guarantor of norms or even a tool for change, which is the focus of my talk today.
II. The courts as guarantors of our rights and freedoms and the role of strategic litigation
The courts as guarantors of our rights and freedoms
The courts can be important guarantors of our rights and freedoms. There are numerous examples of situations in which they provided the type of protection the legislature was unable or unwilling to provide. I will highlight two here.
First, an example from the US. Same-sex marriage had already been recognised under the law in 11 European countries when this still wasn’t the case in the United States. I am proud to say that the country of my primary nationality, the Netherlands, was the first to legislate for marriage equality in 2001. Many other countries followed suit, but the United States did not. Until, in June 2015, the US Supreme Court decided in the case of Obergefell v. Hodges that: “No union is more profound than marriage, for it embodies the highest ideals of love, fidelity, devotion, sacrifice, and family. In forming a marital union, two people become something greater than once they were… [The petitioners’] hope is not to be condemned to live in loneliness, excluded from one of civilization’s oldest institutions. They ask for equal dignity in the eyes of the law. The Constitution grants them that right.” Where the legislature refused to advance equal marriage rights for all, in line with shifting opinions in society on this point, the courts stepped in –– enabled by a long and strategic litigation, advocacy and diplomacy campaign –– to make this possible.
The second, perhaps less well known, example is Malone v. United Kingdom, where the European Court of Human Rights ruled that phone tapping without an express basis in a law that indicated “with reasonable clarity the scope and manner of exercise of the relevant discretion conferred on the public authorities” was a violation of the right to privacy. This resulted in the UK putting in place a law that required a warrant before communications could be intercepted. This judgment is central to past and present litigation against state mass surveillance regimes, including European Court cases against surveillance regimes in Russia, Hungary, Sweden and the United Kingdom.
These two examples illustrate how the courts can help us guarantee our rights and freedoms where the legislature lets us down. This includes bringing our normative framework in line with changes in society, and protecting us from violations of our rights due to a failing legal framework or sometimes even the total absence of legislation.
For courts to exercise this role, they need to be given the opportunity: they have to have cases brought before them. Cases can end up before the courts by chance, or with a specific purpose in mind. We will now take a closer look at the situations in which cases can help bring about bigger social change: strategic litigation.
Strategic litigation as a tool for social change
What is referred to as “strategic litigation” comes with a variety of labels: impact litigation, tactical litigation, test case litigation, public interest litigation and even “radical lawyering”. There are also many definitions describing what this kind of litigation means, each with a different focus on specific aspects of the litigation work being done.
Labels can of course matter and for those interested in the finer details of this, I am happy to chat further after this talk. But for now, I would like to focus on the three main characteristics that are common to using litigation as an instrument for change and that make a court case strategic.
First: the case is aimed at bringing about change. This can take many different forms: a change in the law, a change in the application and implementation of the law, or a change in the wider policies around a specific issue.
Second: the impact of the case is intended to go beyond the individual or group acting as claimants, i.e. those bringing the case. The case is litigated not just to get results for those who brought it, but for a broader group.
Third: the case is part of a wider strategy or movement. This last element is crucial: litigation that is strategic is more than a court case alone –– it is one of the many tools in the toolbox being used for creating the society we want and employed in tandem with other efforts such as advocacy, lobbying and campaigning.
Strategic litigation can be a helpful tool for working towards a variety of goals, including:
- Changing law or policy, as we have seen in the Digital Rights Ireland case won before the Court of Justice of the European Union, which held that the Data Retention Directive –– which forced internet and telecom providers in Europe to collect a variety of personal data from users and retain those data for six months to two years –– was invalid under EU law.
- Changing practice, as we saw in a US case brought by a number of teachers from Houston against the use of a statistical model, the Education Value-Added Assessment System (EVAAS), that was used to assess teachers’ performance. This model was used as the basis for 221 teacher dismissals in one year, despite it being opaque, flawed and fragile. The case was settled before it could go to full trial, but as part of that settlement the Houston Independent School District reportedly agreed to cease using the model to make personnel decisions.
- Truth telling and transparency: here I can point to some exciting litigation that has sought to get access to information on algorithms used by government institutions, including algorithms used to assign judges to cases and to conduct welfare needs assessments.
- Setting safeguards for our rights: worth mentioning here is Max Schrems’ work challenging Facebook’s transfer of his data (and that of other EU citizens) to the US, which resulted in the 2015 invalidation of the Safe Harbor arrangement, which at the time governed data transfers between the EU and the US. The battle is ongoing, with his new challenge to the EU-US Privacy Shield.
- Raising public awareness about problematic issues within our society, as the challenge brought by the NGO Liberty in the United Kingdom to the police use of facial recognition has done, a case that is currently on appeal.
All of these objectives are applicable in our current context, where automated decision-making is just one of the many new frontiers that we and our legislators need to keep up with. The courts need to keep up with these developments as well and, as the guarantors of our rights, they need to be taken along in this changing landscape and given the opportunity to engage with these issues.
In doing so, we do need to make sure that we give the courts an opportunity to engage with the changing landscape in a meaningful way. One way to increase the chances of that happening is by carefully choosing our objectives for litigation.
III. What can we litigate on? Red lines and safeguards
So: what can we litigate on? And what should we litigate on? The choice of battle can define whether you can win or not, and whether you can win in the right way. Defining the right scope for a strategic case is crucial.
Let me illustrate this with two examples: one of litigation overreach and one in which incremental change was achieved through many small steps.
We looked at the US Supreme Court decision in Obergefell earlier as an example of a court bringing the legal framework in line with changed norms in society. That decision was the result of a carefully crafted strategy that unfolded over a long period of time, after a previous direct bid to get gay marriage recognised as a constitutional right in 1971 (Baker v. Nelson) had been unsuccessful. I do not know whether litigators in Europe failed to learn from that 1971 effort or thought the landscape here would be fundamentally different from that in the US, but the first case on same-sex marriage to reach the European Court of Human Rights, Schalk and Kopf v. Austria in 2010, was unsuccessful, with the Court finding that “the Convention does not impose an obligation on the respondent Government to grant a same-sex couple such as the applicants access to marriage.” Campaigners and lobbyists definitely learned from it, and most legislative progress on same-sex marriage in Europe since has been the result of careful campaigning and lobbying work. But litigators also paid attention and, in the cases that were subsequently brought, the Court has slowly delivered stronger and stronger decisions on recognising civil unions for same-sex couples. Hopefully, someday, it will also recognise same-sex marriage under the right to marry, as the US Supreme Court has done.
A great example of a carefully crafted strategy for incremental change that I would like to highlight is also from the United States, namely the death penalty litigation work that continues to this very day. A number of organisations work on this issue, but they carefully coordinate their strategies with each other to work towards a joint goal. Starting with court challenges to the execution of juveniles and of those with disabilities, the litigation then focused on the mandatory death penalty, the death row phenomenon, methods of execution, and racial discrimination in death penalty sentencing, before challenging the death penalty itself head on. That challenge was successful, by the way, but was subsequently reversed by legislation –– unfortunately not an uncommon phenomenon –– which is why this important work continues. By slowly chipping away at the block, starting with the battles that had a good chance of being won before moving on to the next, these organisations achieved a result that would almost certainly have been impossible to achieve in one giant leap.
What these offline examples teach us is that, while time can be of the essence, especially when we are dealing with rapid technological and social change, taking a moment to thoroughly consider what we have a good chance of successfully taking on can make the difference between litigation that helps advance our rights and litigation that sets us back.
When looking at possible litigation objectives for algorithmic decision making, we can make a distinction between two main categories: regulation (or: how do we make the use of algorithmic decision-making fair, accountable and transparent) and drawing so-called “red lines” (or: should we be using AI at all).
Red lines do not appear to be the focus of debate for litigators at the moment, and I wonder why there is not more discussion –– or even agreement –– on this issue. One of the few proposals I’ve come across comes from Germany’s Data Ethics Commission, which has suggested a complete or partial ban on AI applications with “an untenable potential for harm”. This sets out broad parameters for where we should draw the line, but I have not yet come across a concise mapping of the various considerations that should come into play. This is a shame, as this is very much where the foundational question of what the society we want to live in should look like takes centre stage, and it is a debate that ethicists should very much be a part of. It is a fundamental question about how we want to shape certain processes in our society; once we have a clear vision of where the boundaries lie, we can litigate to ensure those boundaries are also adhered to.
Litigation that looks at regulation will help define what fairness, accountability and transparency look like in practice, by helping to set safeguards for the use of algorithmic solutions.
Questions we could get clarified through litigation include: what does effective oversight look like? What should human rights impact assessments contain in practice? In short: how do we make sure the safeguards that are written into legislation are meaningful, effective and protect our rights in practice?
At this point in time, there are not yet good case examples to draw upon. But an analogy can serve as an example of what might happen when existing standards of fairness, transparency and accountability are applied to a new context.
Many of you will be familiar with the Carpenter case, in which the Electronic Frontier Foundation intervened with an amicus brief. In this case, the US Supreme Court held that, if the police wanted to access cell site location information from a cell phone company –– the detailed geolocation information generated by a cellphone’s communication with cell towers –– they needed a warrant: prior authorisation from a judge. This was a modern-day application of the US Constitution’s Fourth Amendment, which protects citizens from unreasonable searches and seizures.
The Carpenter case illustrates not only how we can use litigation to get clarity on parameters for regulation, but also how the courts can play an important role in applying existing frameworks in a new context.
IV. What frameworks can we use for litigation now?
An often-heard response to new technological developments is to adopt new legislation, but we do not always need that to make sure our society remains a just one. We first need to properly investigate what frameworks and tools we already have in place and consider how we can make optimal use of them.
As mentioned before, the courts can play a crucial role in applying existing frameworks or legislation –– the legislature might struggle to keep up with developments, but –– as things stand –– laws are often drafted in a tech neutral way, giving the courts leeway to make these situation-specific adaptations themselves. In order to do so, we do need to give the courts a chance. Let’s have a look at the frameworks we already have that can be used to litigate issues of fairness, accountability and transparency.
A recent, but still fairly underutilised framework in litigation is of course the GDPR or General Data Protection Regulation. It offers a number of entry points for litigating for not only regulation, but also on the issue of red lines:
- Article 22, which provides that, unless they give explicit consent or there is a contractual or legal basis, data subjects have the right “not to be subject to a decision based solely on automated processing, including profiling” if it produces legal effects concerning them or similarly significantly affects them. This could, for example, be a basis to challenge automated systems that autonomously make certain determinations against us: systems that automatically dismiss our application in a hiring process, change our credit score, or censor our online content.
- Article 5(1)(c), the data minimisation principle, limits the scope of the data that AI systems are allowed to process: personal data must be adequate, relevant and limited to what is necessary for the purposes for which they are processed. This could assist in litigating against, for example, systems that feed off a disproportionate number of data points to achieve their objectives.
- Article 5(1)(d), which requires that personal data be accurate and, where necessary, kept up to date, could be used to challenge systems that make inaccurate inferences about identifiable individuals from seemingly unrelated data.
Then, there are numerous frameworks that can be applicable, depending on the context in which we’re operating:
- Industry- or sector-specific laws: some of these might be limited to a specific type of technology, such as aviation law or self-driving car legislation that could be helpful frameworks in imposing liability and securing accountability for harm caused by systems used in these specific contexts. However, other tech-neutral laws, such as those around due process, public procurement, and product liability could also play a role in remedying rights violations caused by the use of AI.
- Anti-discrimination legislation, such as the UK Equality Act, could be an entry point when for example challenging systems used by employers or public bodies that treat people differently or disadvantage them because of their age, disability, gender, ethnicity, nationality, religion or belief, sex, sexual orientation, marital status or maternity status.
- Administrative law can also provide opportunities to ensure that AI-driven decisions made in the public sector are fair, transparent, relevant and understandable. Here, we can think of cases applying the duty to give reasons in the context of algorithm-assisted decision making by public bodies.
V. Building the society we want (and using the law to make that happen)
These are exciting times. The landscape continues to evolve at a dizzying pace and even tech skeptics will sometimes have to marvel at what has become possible or appears to be within reach.
The possibilities seem endless, but: we have to ensure that we safeguard our human rights in the process. The courts can be an important guarantor for this, if we give them the opportunity to play this role. In doing so, we need to be as entrepreneurial as we are in pursuing further technological change. This means being creative in using the frameworks that we have, and being strategic in what we pursue in the courts and how.
That way, together, we can truly build the society we want.