Virtual Strategy Design Jam: brainstorming litigation to challenge artificial intelligence in law enforcement

By Jonathan McCully, 7th February 2019

How can we use litigation to push back against the use of artificial intelligence in predictive policing? How can we harness the courts to keep reliance on algorithms in check in the context of judicial decision-making? What legal strategies can we adopt to help secure greater transparency and accountability around the use of algorithms in law enforcement? These questions were considered during the DFF’s first “Virtual Strategy Design Jam” last week, an online session that brought together fourteen digital rights litigators from across Europe to workshop potential litigation strategies on the issue of AI and law enforcement.

Recent events have highlighted the human rights concerns around the use of algorithms in the criminal justice system. This week, one of the NGOs in our network, Liberty, published a detailed report on fourteen police forces in the UK that have used or are planning to use non-transparent, and likely biased, algorithmic tools to predict who will commit or will be a victim of crime. Amazon has come under fire in recent months for marketing and selling its facial recognition software, Rekognition, to police departments and federal agencies despite studies showing gender and racial bias in the underlying technology.

This comes over two years after the high-profile ProPublica study that exposed potential bias in COMPAS, a proprietary recidivism tool designed by Northpointe that was being used by judges, probation and parole officers in the United States to assess a criminal defendant’s likelihood of reoffending. The COMPAS tool was later subject to litigation before the Wisconsin Supreme Court, in a case involving an individual whose plea deal was rejected by a judge and replaced with a higher sentence partly due to his COMPAS risk score. The Wisconsin Supreme Court upheld the sentence and refused to find a violation of the defendant’s due process rights, but it did call for algorithmic risk assessments to be accompanied by a warning to judges that they should be sceptical about their value.

There is great potential in strategic litigation for challenging the creeping use of artificial intelligence in law enforcement. This is one of the reasons why DFF has encouraged grant applications for cases that can help “ensure accountability, transparency and the adherence to human rights standards in the use and design of technology.” It is also one of the reasons we decided to host a Virtual Strategy Design Jam.

This jam was a pilot for DFF, in which we sought to explore the potential of using online communication and e-conferencing tools to bring litigators from across Europe together to co-work on litigation strategies. The jam was designed and delivered with the help of Allen Gunn from Aspiration, and our break-out rooms were expertly facilitated by Fieke Jansen and Fanny Hidvegi. It proved to be a useful format for workshopping, and one participant observed that it was a “very productive way of working internationally.”

During the jam, participants developed six blueprints for potential strategic cases. These blueprints explore a range of claimants, legal avenues, evidence gathering efforts and remedies that can help increase the chances of a legal victory that would limit the use of AI in predictive policing and judicial decision-making. A number of these strategies also seek to secure greater transparency and accountability in this increasingly inscrutable area of criminal justice by engaging with freedom of information litigation.

At the end of the session, a number of participants expressed an interest in taking some of this work forward with a view to seeing some of these strategies become real-life cases. We look forward to seeing how this develops and supporting their efforts.

Let us know if you think another area of digital rights could benefit from this kind of online strategising session. We would love to hear from you!

Future scenarios: visions about digital rights beyond the here and now

By Julia Kloiber, 26th November 2018

The right to get information about your own data, the right to delete the digital self, the right to participate in digital expression.

These are some of the rights that digital rights campaigners and experts brainstormed at the “Future-Proofing our Digital Rights” workshop. Future-oriented sessions like these are important because they help experts think beyond their everyday work and come up with visions for a world they would like to see. In times of oppressive technology, apathetic politics and dystopian science fiction, this can be a challenge.

Campaigning and advocacy work is often rather reactive. Whenever corporations or governments crack down on rights like free speech, privacy or participation, digital rights organisations step in to defend them. Many of them do this work on shoestring budgets and with a lot of volunteer labour. Scarce resources make it tricky to think beyond the here and now and to spend time and energy speculating about future events. Yet to be prepared for what is yet to come, and to shape the agenda when it arrives, it is important to develop visions.

At the “Future-proofing our digital rights” workshop, campaigners, lawyers, and activists from across Europe got the chance to leave the everyday aside for a moment and focus on hopes, fears and strategies for the future. I was part of a group that set out to develop future scenarios based on a brainstorm of future digital rights. Our goal was to make these abstract rights more tangible. We focused on the right to access social network infrastructure and the right to disconnect. In order to come up with narratives, we started out by defining target audiences for the individual rights. Our roughly defined groups included teenagers, people in urban areas, and marginalised communities.

Here are two examples of our work:

The right to disconnect and people in urban areas

We started out with the assumption that in the future, even more than today, we will be connected no matter where we are. Devices in and on our bodies will track us, our homes will be full of smart devices; in short: we will be online 24/7. No corner of the world will be disconnected – quite the opposite: it will be hard to go offline.

After framing the scenario, we thought about the needs that people might have and about how we could support them. “Silence is a brain juice fertiliser” was one of the ideas that came up: silence as something that enhances your abilities, something that is crucial in order to be creative and healthy. We discussed how it might become a privilege or a luxury to disconnect, and how we might have to come up with a concept similar to holiday leave from work – a right that allows individuals to disconnect for a certain amount of time, to be offline, untrackable and unreachable to others.

Another aspect of the discussion was the right to obfuscate locations in order to protect privacy. In our example, you don’t want insurance companies to know that you are seeing a doctor – so you obfuscate your location on the devices that track you.

The third idea centred on the home and the question of how we can disconnect when smart devices like our toothbrushes, our door locks and many others are constantly tracking us. Our approach was a rather simple one: we thought about a “kill switch” for homes that lets you shut down all connections to the internet at once.

Communities and the right to access social infrastructure

Our scenario was as follows: social media infrastructure is centralised and run by big corporations, and their terms of use put marginalised groups under threat of being blocked online – of having all their stories and content deleted from platforms and archives. This happens not because of illegal activity, but because terms of use do not take local customs and cultural differences into account.

One approach our group came up with was to say: “we are the social network – we have the right to organise as long as what we do is legal. Private power should not impact this.” Another approach would be hyper-local instances of social networks, where information stays within a cultural context. A campaign idea that we developed was to tell the stories of prominent human rights advocates like the Suffragettes or Nelson Mandela, whose campaigns would probably have been blocked by social networks had they lived in other times. Had that happened, we might have missed out on the rights and freedoms these individuals won for us.

Backcasting

A next step for the scenario exercise would be to use a methodology called backcasting. In backcasting, participants work backward from a future scenario to construct a plausible causal chain leading from here to there. This helps to come up with concrete steps for tackling an issue and to map stakeholders. For the “right to disconnect” this could look as follows:

2026 – The German government introduces a right to disconnect. Each individual has the right to be offline for at least 20 days per year.

2024 – Labour unions join the campaigns and start to lobby for the right to disconnect.

2022 – Campaigns promote being disconnected as the ultimate luxury of the 21st century. Being offline becomes a lifestyle choice.

2019 – Studies show that being disconnected from devices every once in a while increases the productivity and health of people.

It is important to reflect on the concrete steps that have to be taken to pave the way for a preferred future scenario. This helps to identify the stakeholders and measures required along the way. What is crucial to remember is that there is no single solution or path to success: many different approaches co-exist.

There is also not one single future: there are many possible futures. Not all of them are preferable, which is why we have to start to speculate and develop our own visions of the future in order to counter mainstream dystopian narratives. We need more of these exercises and workshops, because I believe that by exploring positive scenarios, we can increase the probability of more desirable futures happening.

“Futures Cone” (Voros, 2003)

About the author: Julia Kloiber develops strategies and concepts to innovate programs for the digital world. She is the Founder of Code for Germany, the Co-Founder of the Prototype Fund, and has been a Fellow at the Mozilla Foundation since April 2018.

Aiming for the stars: litigating for a tech-positive future

By Nani Jansen Reventlow, 21st November 2018

Sometimes, it is difficult to keep a focus on the positive when working in human rights. We are so engaged in fighting negative practices, legislation, and policies that it is easy for us to forget about positive developments that lie on the other end of the spectrum.

In the digital rights sphere, much of our energy and attention goes to battling mass surveillance, criticising discrimination in algorithmic decision-making, pushing back against online content regulation, and so on. The list is long and seems to keep growing. Much of the strategic litigation work on digital rights runs along the same lines: challenging restrictive laws and seeking redress for human rights violations in the digital sphere – work in the courts is often focused on obtaining judgments that change or redress negative situations.

What if we took a step back and considered the positives we could aim for? What if we did some blue-sky thinking and set our goals based on what it is we want to see, framed as positive goals to pursue? During our recent “Future-proofing our digital rights” workshop, we had a number of conversations that did just that. We brainstormed our future rights, crafted a “Universal Declaration of Digital Rights” on post-it notes, and asked what cases we could win in the short term for a better digital rights future.

These conversations brought into focus some of the positive ways our lives could be impacted by technology in the future. A greener world, for example, with fewer traffic deaths, as self-driving electric cars would spare the environment. Or a world in which developments in tech have left us needing to do less work, meaning we would have more time for other things, such as creating art and spending time with loved ones. Or a world where knowledge and learning are accessible to anyone, anywhere. In human rights work too, technology does not always have to be something we are fighting against. It may be that we will call upon international, regional and domestic authorities to utilise technologies for a fairer or more just world.

What would our litigation strategies look like if we aimed for such objectives? What kind of cases could we bring – now and in the future – if we aimed for the stars?

These are compelling questions, and ones we have only just begun to discuss during our workshop. We would love to hear from you: if you could pursue any positive scenario you wanted, what would it be? And how would you do it? Get in touch to let us know!