Atlas Lab – Breaking the Black Box of Law & Tech

By Jonathan McCully, 18th January 2021

Just over a year ago, Aurum Linh and I spoke about our plans to develop a tool to demystify two processes that can seem daunting, even elusive: machine learning and human rights litigation.

We believe that understanding these processes a bit better can unlock opportunities for collaborative action in challenging the human rights harms that are perpetrated or contributed to by technologies that are often referred to as “artificial intelligence” or “AI.”

To help break open these processes, we wanted to build a tool that can deconstruct what it means to build a machine learning system and take a human rights case to court.

Today, we are launching Atlas Lab — a project that seeks to build a knowledge base that lawyers, activists and human rights defenders can use when working on the front lines of AI and human rights litigation.

As automated decision making becomes a routine part of our everyday lives, it will also play a role in critical litigation around privacy, labor, due process, and other human rights issues. We need strong precedents to ensure our rights are protected and promoted.

There are, however, limited resources to bridge the gap between these disciplines, and we hope our site can go some way towards doing that. The site consists of a library of explainer articles on machine learning and litigation, and a small database of summaries of court decisions that sit at the intersection of these two worlds.

The process of building Atlas Lab has been a long but fulfilling one, and we have benefitted hugely from those already taking action to protect our rights against wayward machines.

Aurum and I worked on this project through a Mozilla Fellowship, with Aurum being hosted within the Digital Freedom Fund. Over the course of the Fellowship, we had the opportunity to present a prototype of the tool to an experienced group of lawyers, technologists and activists already exploring the legal implications of machine learning and similar tools at DFF’s annual strategy meeting in 2020. We were also privileged to listen to and learn from computer scientists, litigators and individuals who have been affected by these technologies across several events, including our dedicated RightsCon session and DFF’s meeting on COVID-19 and AI. These conversations have greatly enriched the site, but any errors or omissions remain our own.

In the coming months, independently of DFF’s activities, Aurum will be organizing an event series focused on the voices of those whose lives have been affected by algorithmic decision making. The series will revolve around the criminal justice system, immigration agencies, and child welfare services.

We hope that this resource can help lawyers, public interest technologists and activists in developing their work at the intersection of emerging technologies and human rights litigation. We know that it is in no way complete, so if you notice anything that is incorrect or missing from our explanations, please let us know! We welcome any input that can help make the content of the site as useful and informative as possible. To stay updated, reach out to us on our website and follow along on Twitter and LinkedIn!

Artwork by Cynthia Alonso and Justina Leston