Litigating algorithms: taking the conversation from North America to Europe and beyond

By Nani Jansen Reventlow, 18th November 2019

Who is doing litigation work on the human rights impact of algorithmic decision making? What lessons can be learned from these cases and are there any best practices we can distill from them? What are the cases we would like to see brought in 2020?

These questions drove the agenda of a two-day meeting with litigators from the US, Europe and Latin America, organised by the Digital Freedom Fund in partnership with the AI Now Institute, and kindly hosted by Mozilla at their Berlin office. The meeting brought together litigators with experience in challenging algorithmic decision making through the courts, litigators with an interest in undertaking this work, as well as participants with a background in policy work in the field of AI.

The agenda was designed by taking AI Now’s 2018 and 2019 “Litigating Algorithms” meetings as a starting point. These meetings facilitated knowledge sharing between litigators, academics and technologists on how to meet the challenges involved in litigating against automated systems applied across a variety of contexts, from health and employment to the criminal justice system. The agenda for the meeting in Berlin combined this approach with the collaborative working methods that are central to all DFF litigation meetings, allowing participants not only to get to know each other’s work, but also to zoom in on transferable lessons that can help build stronger cases going forward. While those present worked on three different continents, within distinct national legal frameworks, and on very different cases –– ranging from a challenge to a misleading display of search results on Google to defending individuals excluded from home care due to a new algorithmic assessment by Medicaid –– there was a shared sense that there were more similarities than differences between their experiences. This applied both when looking at lessons learned and best practices, and when considering the obstacles to bringing further cases.

Dedicated time was spent on critical reflection on ongoing litigation, leading to a number of insights into how those cases could be further strengthened. A “needs-offers” exercise resulted in a rich array of knowledge and information participants were willing to share with others in the room, as well as a frank listing of items with which they could use further support.

As discussed in our recent blog post, the gathering was organised as part of DFF’s efforts to lower the threshold for strategic litigation on AI and human rights, and fits into a broader framework of activities in this area. A joint AI Now – DFF publication, mapping the cases discussed at the workshop, is forthcoming in the Spring. Looking further ahead, DFF is working on organising an international meeting on litigation on AI and human rights to facilitate knowledge sharing and the brainstorming of new opportunities across the globe on this crucial digital rights issue. We will be consulting with our network on how to best shape this event –– if you have any thoughts or suggestions you would like to share with us, please do get in touch!

Lowering the threshold for strategic litigation on AI and human rights

By Nani Jansen Reventlow, 8th November 2019

DFF commenced its operations in 2017 by starting a strategy process. This process, which continues to this day, involved consulting all key stakeholders working on digital rights in Europe and asking them what their priorities were and how DFF could best support them. Following DFF’s first strategy meeting in February 2018, this process led to the formulation of three thematic focus areas for DFF’s work: first, advancing individuals’ ability to exercise their right to privacy; second, protecting and promoting the free flow of information online; and third, ensuring accountability, transparency and adherence to human rights standards in the use and design of technology.

The debate about “AI” – also framed as one on machine learning or automated decision making – has become a hot topic in recent years. As foreshadowed by developments in 2018, AI and human rights was a much-debated topic at our 2019 strategy meeting, which brought together 48 organisations from across Europe working on digital rights.

As had been the case during the Virtual Strategy Design Jam we hosted on the use of AI in law enforcement in the run-up to the strategy meeting, the topic was actively debated, but we did not see a corresponding uptake of the issue when it came to litigation. In other words: there was a clear sense of urgency to address the potential negative impact the use of AI could have on human rights, and an interest in pursuing litigation to address this, but few cases were actually brought. Following the discussions at our strategy meeting closely and listening to other input from the field, it became clear that many litigators had difficulty identifying the issues on which to litigate and suitable entry points for doing so.

Meanwhile, the development and use of technology in all aspects of our lives continues to increase, making the need to confront and challenge any resulting negative human rights impacts ever more urgent. Strategic litigation can be an important instrument in this fight. In light of this, DFF is seeking to lower the threshold for litigators to step into this space and help safeguard our human rights when AI is at play.

This November, DFF is hosting a litigators’ meeting together with the AI Now Institute, building on their “Litigating Algorithms” series (see here and here for the meeting reports) which brought together litigators, academics and technologists to share experiences in litigation on the impact of the use of AI across a variety of different contexts. The meeting, which will be hosted at Mozilla’s Berlin office, will bring together US and European litigators with experience in challenging algorithmic decision making through the courts as well as those with an interest in doing so. Besides sharing best practices, participants will brainstorm new case ideas and identify concrete plans for next steps.

In October, DFF’s Legal Adviser joined forces with technologist Aurum Linh, a Mozilla Fellow, to work on a set of guides for building stronger litigation on AI and human rights: litigation that can set precedents ensuring greater transparency, accountability and adherence to human rights standards in the design and use of AI. The first guide will be aimed at demystifying litigation for digital rights activists, technologists and data scientists, who will often be at the forefront of identifying the situations and fact patterns that are ripe for AI-related human rights challenges through litigation. The second guide will be aimed at lawyers working across different practice areas – such as criminal, employment, immigration or election law – who could have clients whose rights have been violated by the development and use of AI. This guide will provide legal practitioners with the minimum viable information necessary to effectively identify and pursue legal claims challenging human rights violations caused by AI. The guides will be developed through regular consultation with the intended audiences and with organisations already looking at litigating on AI, to ensure the resources meet their needs. Watch this space for updates and learn how you can join the conversation.

Both strands of activity will build on each other and weave into DFF’s ongoing dialogue with the field. Following the November “European litigating algorithms” meeting, a report will be published in early Spring to share lessons learned with the field. In February 2020, a dedicated consultation will be held to test the concepts of the litigation guides. All of this will feed into the publication of the guides in the second half of 2020, and into an international meeting for litigators to share experiences of litigating on this topic across different regions.

… and, we hope, many exciting cases! We look forward to supporting some of the exciting work that will be developed over the coming months and are always happy to hear from you and discuss your ideas.

Future-proofing our digital rights at MozFest

By Jonathan McCully, 29th October 2019

Over the weekend, the Mozilla Foundation held its tenth annual MozFest. The festival brought together educators, activists, technologists, researchers, artists, and young people to explore, discuss and debate how we can help secure “healthier Artificial Intelligence.”

On Sunday, the Digital Freedom Fund co-facilitated an interactive workshop with the Oxford Information Labs on “Future-Proofing our Digital Rights.” This hour-long workshop brought together lawyers, academics, technologists and digital rights activists to consider the digital rights battles that may lie ahead of us and what steps we can take in the short and medium term to help steer us towards a future we would like to see.

The session centred around three questions: what digital rights threats or challenges might we see in five years’ time? In our ideal future, what do we want to see in five years’ time? And what actions should we take now to help mitigate the threats and challenges, and steer us towards our ideal future?

Participants identified a range of threats and challenges on the horizon, from increased uptake of Artificial Intelligence and greater deployment of biometric detection systems, to a growing digital divide and political participation being conditioned on the giving away of personal data. A number of groups converged on the threat of tech companies becoming the lawmakers, increasingly writing the rules by which they themselves are to be regulated. Moving digital rights issues out of public law and into the private law context in this way would disempower users and make it harder to obtain redress for digital rights violations.

When talking about ideal futures, participants identified a number of scenarios, including the banning of facial recognition technologies, having a diversity of online platforms with no dominant players, securing proactive and well-resourced regulators that are not captured by industry, and promoting “white boxes” instead of “black boxes” –– systems designed with accountability, transparency and contestability in mind.

So, what can we do now to help us prepare for these future threats and put us on track towards a future that better protects our digital rights? One group discussed the role that the education sector could play in improving AI literacy and awareness. Another group talked about how we could push for law reform ensuring that publicly funded technology cannot be privatised and benefit from trade secret protections. Participants also noted that law moves more slowly than technological development, and that we should push for more forward-looking laws that safeguard against new technologies being applied in legal or regulatory gaps.

These conversations build upon the work we did last year on future-proofing our digital rights, which included an essay series that you can read here and a workshop we held in Berlin last September. It was exciting to bring this topic to a new audience, and it brought a number of interesting perspectives to the fore. If you would like to join in the conversation or write a guest blog about how we can future-proof our digital rights, get in touch! We would love to hear from you.