How can we minimise the negative impact of the use of algorithms and machine learning on our human rights? What cases are litigators in the US and Europe working on to take on this important issue?
Today’s fourth instalment in DFF’s transatlantic call series addressed machine learning and human rights. EFF kicked off the conversation by telling participants about their work on the use of machine learning in various aspects of the criminal justice system, including the use of algorithms to determine both the risk of reoffending and the guilt of an alleged offender. EFF spoke about an ongoing case (California v. Johnson) in which they filed an amicus brief, arguing that criminal defendants should be able to scrutinise algorithms that have been used by prosecutors to secure a conviction.
A common thread in EFF’s work in this area is the need to ensure that government use of algorithms in decision-making processes is conducted in as fair and transparent a manner as possible. This is similar to the approach taken by PILP, who are challenging the use by government agencies in the Netherlands of “risk profiling” algorithms. These profiles are used to estimate the likelihood of individuals committing fraud. The system, called “SyRI”, involves the pooling of citizens’ data that has been collected by the state in a variety of different contexts, after which algorithms calculate whether certain citizens pose a “risk” of committing abuse, non-compliance or even fraud in the context of social security, tax payments, and labour law.
PILP shared how the SyRI system has a disproportionate impact on the poorest parts of the population. The UN Special Rapporteur on extreme poverty and human rights, Philip Alston, has submitted an amicus brief in the case, saying that the SyRI system poses “significant potential threats to human rights, in particular for the poorest in society”.
Following a further exchange on these cases and other work being done in this area, DFF also shared details of its ongoing projects on artificial intelligence and human rights. In November, DFF is organising a workshop together with the AI Now Institute to explore litigation opportunities to limit the negative impact of AI on human rights. In addition, DFF’s Legal Adviser Jonathan McCully is working with a technologist, in the context of a Mozilla Fellowship, to create two resources — one for lawyers and another for technologists, data scientists, and digital rights activists — that provide tools for working together effectively when taking cases against human rights violations caused by AI.
Our next transatlantic call will take place on 13 November and will focus on content moderation. It is not too late to join: get in touch to register your attendance!