Accessing Justice in the Age of AI

By Alexander Ottosson, 9th April 2020

Facade of a court building

There is no doubt that the continuous development of artificial intelligence (AI) has a lot of upsides. It has the potential to revolutionise our industries, streamline our healthcare systems, mitigate climate change, reduce the cost of public services, and much more.

But its development and use simultaneously involve a great deal of risk – including the risk of unlawful discrimination, invasion of privacy, interference with freedom of expression, and failure to uphold due process and the right to a fair trial.

I recently had the opportunity to discuss several of these AI-related challenges at Digital Freedom Fund’s annual strategy meeting in Berlin. One of the sessions, which I had the privilege to facilitate, concerned the relationship between the use of AI and access to justice. The aim of the session was to ascertain whether, and to what extent, best practices developed in the context of cases concerning secret surveillance could be transposed to cases concerning government use of AI.

In cases concerning both secret surveillance and AI, legal challenges are not only complex on account of substantive law, but are also rife with technical and evidentiary difficulties. The transfer of rights from paper to practice is thus costly and, in many cases, unrealistic for most people. As the Irish judge Sir James Mathew quite aptly put it back in the Victorian era: “[j]ustice is open to all, like the Ritz hotel.”

Like the Ritz – or the Radisson or Best Western or indeed any hotel – justice is open to all merely in theory. In reality, access to justice is often limited by, among other things, structural injustices, inadequate financial resources, lack of knowledge, time constraints and ineffective remedies.

What we set out to explore during our session was how access to justice in the context of AI could be improved, so as to move us closer to the ideal that justice should be open to all not only in theory, but also in practice. To that end, the following lessons from the secret surveillance context are of particular note.

Transparency to Tackle Abuse

In cases concerning secret surveillance, the European Court of Human Rights (ECtHR) has held that “notification of surveillance measures is inextricably linked to the effectiveness of remedies before the courts and hence to the existence of effective safeguards against [abuse]” (see Roman Zakharov v. Russia [GC], § 234). This certainly applies in the AI context as well.

It is essential for access to justice that individuals can easily obtain information on when and how their government uses AI. When a decision in an individual case is taken by an AI system without human intervention or by a human relying on output data from such a system, the individual should be notified of this in the decision.

Notification gives the individual a valuable opportunity to seek further information on the AI being used and to obtain appropriate legal and technical advice on appealing the decision and challenging errors, flaws or biases.

In the case of automated decisions, notification is also a prerequisite for the individual’s ability to exercise the right to obtain human intervention.

As for the type of information needed, technical details about the AI system are not always required. Sometimes, however, details of the algorithms and data structures may be essential to making one’s case – information that is not always easy to come by.

A common objection is that the software is protected as a proprietary trade secret and thus shielded from review. To counter this objection, one might draw on the US experience, where procedural due process arguments have proven effective at dismantling such claims.

Another obstacle to acquiring more detailed information arises when the AI system in question operates as a so-called “black box”. Such systems are designed in a way that makes it impossible to discern how inputs become outputs. That is, of course, highly problematic, since it limits the possibility of verifying, among other things, the legality and proportionality of a particular interference caused by such a system.

To circumvent the issue, the information deficit should be viewed not as an obstacle to challenging the fundamental rights implications of a “black box” system, but rather as the basis on which to challenge it.

An Ombudsperson for AI?

Another approach to increasing access to justice is to put in place alternatives to expensive and complex court proceedings. To that end, states should consider establishing a national ombudsperson for AI with which individuals could lodge complaints without cost – or, in the alternative, embedding such a specialised function within an existing body.

In either case, such a remedy would serve as an accessible alternative to court proceedings, and the specialisation would help ensure that complaints are reviewed by someone with the necessary technical expertise.

As for the powers that should be afforded to the ombudsperson, guidance can again be found in cases on secret surveillance. In this context, the ECtHR has laid down a requirement that the body in charge of overseeing state surveillance activities should have the power to “stop or remedy the detected breaches of law and to bring those responsible to account.” The same should apply to the ombudsperson for AI.

Moreover, if the ombudsperson finds that an AI system has breached an individual’s fundamental rights, it should ideally be able to order the suspension of further use of the impugned system until appropriate safeguards against further breaches have been put in place.

Further Action: Snowballing Efforts

The takeaways from the session outlined above can and should be further explored, developed and complemented.

In the coming years, additional legal and ethical lines will be drawn with respect to AI; policies will be adopted and legislation will be passed. It is essential that measures to increase access to justice are an integral part of these efforts.

In the end, our session concluded with a sense that while a lot can be done to increase access to justice in the context of AI, more focus on the issue is required. It is time to start snowballing our efforts.

For my part, I certainly hope to see and take part in more discussions, research and other initiatives focusing on this issue in the future. Only through careful consideration and decisive action can we make sure that justice eventually becomes open to all.

Alexander Ottosson is an associate at the Stockholm-based public interest law firm Centrum för rättvisa (Centre for Justice).

Image by Brett Sayles on Pexels