Who is doing litigation work on the human rights impact of algorithmic decision making? What lessons can be learned from these cases and are there any best practices we can distill from them? What are the cases we would like to see brought in 2020?
These questions drove the agenda of a two-day meeting with litigators from the US, Europe and Latin America, organised by the Digital Freedom Fund in partnership with the AI Now Institute, and kindly hosted by Mozilla at their Berlin office. The meeting brought together litigators with experience in challenging algorithmic decision making through the courts, litigators with an interest in undertaking this work, as well as participants with a background in policy work in the field of AI.
The agenda was designed by taking AI Now’s 2018 and 2019 “Litigating Algorithms” meetings as a starting point. These meetings facilitated knowledge sharing between litigators, academics and technologists on how to meet the challenges involved in litigating against automated systems applied across a variety of contexts, from health and employment to the criminal justice system. The agenda for the meeting in Berlin combined this approach with the collaborative working methods that are central to all DFF litigation meetings. This allowed participants not only to get to know each other’s work, but also to zoom in on transferable lessons that can help build stronger cases going forward. While those present worked on three different continents, within distinct national legal frameworks, and on very different cases (ranging from a challenge to a misleading display of search results on Google to defending individuals excluded from home care due to a new algorithmic assessment under Medicaid), there was a shared sense that the experiences had more similarities than differences. This applied both when looking at lessons learned and best practices, and when considering the obstacles to bringing further cases.
Dedicated time was spent on critical reflection on ongoing litigation, leading to a number of insights into how those cases could be further strengthened. A “needs and offers” exercise resulted in a rich array of knowledge and information that participants were willing to share with others in the room, as well as a frank listing of items on which they could use further support.
As discussed in our recent blog post, the gathering was organised as part of DFF’s efforts to lower the threshold for strategic litigation on AI and human rights, and fits into a broader framework of activities in this area. A joint AI Now – DFF publication mapping the cases discussed at the workshop is forthcoming in the spring. Looking further ahead, DFF is working on organising an international meeting on AI and human rights litigation to facilitate knowledge sharing and the brainstorming of new opportunities across the globe on this crucial digital rights issue. We will be consulting with our network on how best to shape this event. If you have any thoughts or suggestions you would like to share with us, please do get in touch!