Virtual Strategy Design Jam: brainstorming litigation to challenge artificial intelligence in law enforcement

By Jonathan McCully, 7th February 2019

How can we use litigation to push back against the use of artificial intelligence in predictive policing? How can we harness the courts to keep reliance on algorithms in check in the context of judicial decision-making? What legal strategies can we adopt to help secure greater transparency and accountability around the use of algorithms in law enforcement? These questions were considered during DFF’s first “Virtual Strategy Design Jam” last week, an online session that brought together fourteen digital rights litigators from across Europe to workshop potential litigation strategies on the issue of AI and law enforcement.

Recent events have highlighted the human rights concerns around the use of algorithms in the criminal justice system. This week, one of the NGOs in our network, Liberty, published a detailed report on fourteen police forces in the UK that have used or are planning to use non-transparent, and likely biased, algorithmic tools to predict who will commit a crime or become a victim of one. Amazon has come under fire in recent months for marketing and selling its facial recognition software, Rekognition, to police departments and federal agencies despite studies showing gender and racial bias in the underlying technology.

This comes over two years after the high-profile ProPublica study that exposed potential bias in the COMPAS recidivism tool, a proprietary algorithm developed by Northpointe that was being used by judges and by probation and parole officers in the United States to assess a criminal defendant’s likelihood of reoffending. The COMPAS tool was later subject to litigation before the Wisconsin Supreme Court, in a case involving an individual whose plea deal was rejected by a judge and replaced with a higher sentence partly due to his COMPAS risk score. The Wisconsin Supreme Court upheld this sentence and declined to find a violation of the defendant’s due process rights, but it did call for algorithmic risk assessments to be accompanied by a warning that judges should be sceptical about their value.

There is great potential in strategic litigation for challenging the creeping use of artificial intelligence in law enforcement. This is one of the reasons why DFF has encouraged grant applications for cases that can help “ensure accountability, transparency and the adherence to human rights standards in the use and design of technology.” It is also one of the reasons we decided to host a Virtual Strategy Design Jam.

This jam was a pilot for DFF, in which we sought to explore the potential of using online communication and e-conferencing tools to bring litigators from across Europe together to co-work on litigation strategies. The jam was designed and delivered with the help of Allen Gunn from Aspiration, and our break-out rooms were expertly facilitated by Fieke Jansen and Fanny Hidvegi. It proved a useful way of workshopping litigation strategies, with one participant describing it as a “very productive way of working internationally.”

During the jam, participants developed six blueprints for potential strategic cases. These blueprints explore a range of claimants, legal avenues, evidence-gathering efforts and remedies that could help increase the chances of a legal victory limiting the use of AI in predictive policing and judicial decision-making. A number of these strategies also seek to secure greater transparency and accountability in this increasingly inscrutable area of criminal justice through freedom of information litigation.

At the end of the session, a number of participants expressed an interest in taking this work forward with a view to turning some of these strategies into real-life cases. We look forward to seeing how this develops and supporting their efforts.

Let us know if you think another area of digital rights could benefit from this kind of online strategising session. We would love to hear from you!