Tackling AI in the Time of COVID

By Jonathan McCully, 9th October 2020

How can we protect our digital rights amidst tech-solutionist approaches to combat the COVID-19 pandemic?

Over the past three days, 30 participants from around the world joined DFF’s online “AI in times of COVID” workshop to examine this question from different angles.

The workshop built on the “Litigating Algorithms in Europe” workshop DFF and the AI Now Institute organised in November 2019. One of the desired follow-ups from that gathering was an international meeting on combatting the impact of AI on human rights through strategic litigation.

As we all know, the world has changed quite a bit since then, and this changed not only the format of the meeting but also its scope.

As we wrote earlier this year, the COVID-19 pandemic also created a crisis for digital rights. However, the transborder nature of the pandemic also created an opportunity: if we are seeing similar measures – COVID apps, health trackers and other tech solutions – being rolled out around the world, how can we make sure we push for standards that protect our human rights? Can we leverage positive results in one jurisdiction in another? And what best practices and tactics from the pre-COVID world can we draw upon to successfully fight these battles?

Trends in Tech-solutionism

We started the first day of the workshop by listening to and learning from participants across the globe who have been monitoring the “tech-solutionist” measures rolled out during the pandemic, both in and outside of Europe. They explained the issues they had been spotting and the litigation actions they were considering taking in response.

Reflecting on these sessions at the outset of day 2, participants noted that “tech inevitability” was a common thread across different jurisdictions, and asked how we could counter the narrative of governments framing technology as the main way to solve the health crisis.

Pushing for Transparency

One of the key issues in challenging automated systems is knowing that they are being used in the first place. There is often a lack of transparency about what type of algorithmic decision-making is being deployed, as well as when and how.

One way to push for greater transparency is by using freedom of information requests. After hearing from participants about their experiences of using this tool, we co-created a checklist of what litigators could be asking for.

After that (as well as a joint coffee break to top up caffeine levels), participants looked at what could be learned from successful cases in a non-COVID context. Lawyers who had worked on the Dutch SyRI case and the challenge to police use of facial recognition in the UK, as well as lawyers who had taken on a Medicaid algorithm that reduced care for disabled patients and the COMPAS risk assessment system in the US, shared their stories, tactics, and lessons learned.

Best Practice Brainstorming

Day 3 opened with a round of participant-proposed best practice brainstorming sessions. These looked at best practices for communicating the impact of COVID technology to sceptical judges, strategies to safeguard against the repurposing of COVID tech after the pandemic, ways to address the tech inequalities exacerbated by the pandemic, and approaches to tackling machine and human bias in automated systems. We then revisited the potential cases we started the workshop with on day 1, to see how the strategies and tactics discussed in the sessions in between could be leveraged.

It is unfortunate that the group were not able to meet in person during these exceptionally challenging times, but it was truly uplifting and inspiring to virtually connect, discuss and brainstorm with an amazing group working hard to safeguard our digital rights during the current health crisis. There was plenty of food for thought, many ideas were shared for further collaboration and information sharing, and we look forward to seeing this invaluable work progress.

Graphic Recordings by Blanche Illustrates and TEMJAM

Join Our Virtual Litigation Retreat!

By Jonathan McCully, 24th September 2020

We are very excited to be bringing back DFF’s litigation retreat in an online format between 12 and 18 November 2020 (excluding the weekend), and we welcome applications from litigators across Europe working on digital rights issues to join us.

In 2018, we ran litigation retreats in Montenegro and Belgrade. You can watch a video of our first retreat here. These were great opportunities for digital rights litigators to spend time together, sharpen their litigation skills, and share knowledge, experiences and tactics that others could use in their own litigation.

During these retreats, a variety of cases were workshopped, from challenges to police use of facial recognition technology to litigation against Facebook’s censorship of user content.

At our retreats, we aim to create a co-learning environment where participants can focus their time, energy and attention on further developing an ongoing piece of digital rights litigation and working through its challenges.

Through creating this space, and holding focused discussions, we hope that participants can come away from these retreats with an enhanced litigation strategy and plan for their cases. As a participant from a previous retreat said: “it is a unique opportunity to stop and reflect on what can be done better when you’re trying to achieve change using strategic litigation.”

We have remodelled our previous retreats in order to hold this one virtually. It will include strategic litigation training and workshopping components in a collaborative online environment. The four-day programme will comprise sessions on building skills for developing a litigation strategy, case planning, and advocacy, and will include dedicated sessions on thematic areas relevant to litigating digital rights cases in Europe.

Built into this agenda will be dedicated group work, where participants will work on strategic litigation plans for case studies that they have brought to the virtual retreat. We will also have some time to get to know each other with some informal online social activities.

As with past retreats, the aim of the four days is to give participants time to do deep work on their case studies. Even though the online sessions across the four days will be short and intensive, we encourage those attending to use the rest of the time to disconnect from other ongoing projects and think through aspects of their cases in light of conversations they will have with the group. We would therefore recommend that all those who attend treat the four days as a real retreat, away from other work pressures and external realities.

If you are interested in joining us, please get in touch with us and we will send you a short application form to fill out. The deadline for applications is 16 October 2020.

The main thing we ask for is that all participants come to the retreat with a digital rights case study – whether existing or hypothetical – that they would be interested in litigating. Ideally, the case study would fall within DFF’s thematic focus areas, which you can read more about here, but we are open to other strategic case ideas as well. 

We hope you will be able to join us!

The First Steps to Decolonise Digital Rights

By Sarah Chander, 24th September 2020

In early 2020, DFF and its project partner EDRi started their joint work of initiating a decolonising process for the digital rights field in Europe. How does this fit into the current landscape of digital rights and recent developments in the movement for racial and social justice? And what have we been up to these past months? 

The Social Dilemma and other pop-culture portrayals have brought the tech industry into even sharper focus. One major concern is that this industry is a tiny, unrepresentative, yet powerful minority that holds the power to impact the everyday experiences of almost the entire world. We should ask ourselves whether the same is true of many of the communities working to contest Big Tech’s dominance.

We are in a moment of global shifts. Whilst racial oppression and global inequality are by no means novel phenomena, the wave of global #BlackLivesMatter protests this summer has many of us reflecting on social and racial justice with respect to our organisations, our work, and even ourselves.

Zooming in on the digital rights field, a reckoning with the need for change has also been bubbling for a long time. Following some initial discussions, DFF and European Digital Rights (EDRi) teamed up to explore how we might go about developing a process for decolonising the digital rights field. This blog is an update on our work at these early stages.

Why “decolonise” the field?

This process was conceived through various observations of the digital rights field in Europe. Over the years, we have seen a lack of representation of all the people we seek to protect from harm. This undoubtedly impacts our work – specifically, the extent to which the field is equipped to meaningfully defend the rights of all.

This overwhelming lack of representation in our movement matters. The (near) absence of those who are racialised, gendered, queer and trans, working class, differently-abled, and hailing from the global south affects our work and our perspectives more than we know. It is a flaw, a weak spot. It compromises our ability to make good on a commitment to uphold the digital rights of all in society. This absence shows up in a number of ways.

One way this unfolds is in an assumption of universality with respect to the holder of digital rights – the ‘user’ of technology. Who do we envisage when we discuss defending the rights of the ‘user’? Generally, we don’t envisage anyone in particular – someone neutral as to life circumstances and personal characteristics.

However, often when we assume universality, objectivity, and neutrality, what we do is embed dominance; we focus on the experiences that are most represented and similar to what we know. We centre our work on the assumption – in the words of Dr Ruha Benjamin – that there is a “universal user who is not racialized, is not marked by gender, sex, or class.”

This universality is a myth. Instead, taking a variety of specific lenses will illuminate how technology can in effect deepen and exploit a range of social, economic, and political vulnerabilities.

Missed opportunities

The issue of representation undoubtedly limits our perspectives and our potential, too. In particular, the European digital rights field’s limited engagement with counterparts in the global south means we miss out on the learning we need to understand the vast array of ongoing digital harms, their global impact and context, and their place in our collective histories.

As we contest the extractive nature of surveillance capitalism, we would gain much from placing our fight in a much longer trajectory of colonialism, with data relations as a new commodity of our time.

More practically, this problem shows itself in the fruits of our labour. Even our most pivotal tools do not have answers to the most serious issues facing marginalised communities.

So far, data protection and privacy rights, including the GDPR, have been of limited use in protecting against group-based threats and the potential for discriminatory algorithmic profiling, for example against those who may be overpoliced as a result of predictive policing tools.

So the mechanism works for the individual who is informed and in a position to make their individual rights actionable, but less so for others, for whom ‘data protection’ was not modelled. Just as we speak about harmful technologies as the result of skewed design, this argument applies to our legal tools too.

These examples show us that the need for change goes beyond the need to simply adjust the composition of people in the room. It’s about ensuring that all of our stories, realities and the ‘multiple kinds of expertise’ we bring are reflected and valued in the field – its tools, its work, its approach. There is a growing, intuitive knowledge that a change is overdue for the digital rights field.

First steps

How do we go about something that sounds so huge? So far, we have approached questions around a decolonising process for the digital rights field cautiously. What does it mean? How may we achieve it? Who else do we need to be talking to?

Taking baby steps, we started by speaking to organisations, collectives, activists, and others currently outside the digital rights field to understand how they engage with digital rights issues. How do organisations working on workers’ rights, LGBT rights, anti-racism, or disability rights see digital rights and the field itself? Do they see the links with their work? How do they understand the concept of decolonisation? What processes of change have they seen work?

So far, the project has been met with encouraging levels of enthusiasm. Over 30 individuals and organisations have taken the time to discuss these questions with us. What we already see is that there is huge appetite and potential from activists and civil society working outside of the digital rights field to engage, to learn more, and to establish connections with their work and “digital rights”.

We’ve discussed and questioned many things – from the colonialism of the human rights framework and its relation to justice, to the limits of the “diversity and inclusion” approach. This thinking will feed into our further work.

Now we are starting conversations with the digital rights field. We hope to get a picture of the different visions, sensitivities, and understandings within the field. What is the impact of representation on our work? How may we address this? At what stages of the process are different actors in the field?

The next step is to bring interested stakeholders together in an (online) gathering to get insight into how we may go about designing a decolonising process for the digital rights field. We are excited – this will be the first time those who have interacted with the project will come together, and we hope to benefit from the range of different perspectives, build on what we have already learned and develop concrete next steps for the design process.

We know that this project itself cannot dismantle the world’s uneven power relations – but we hope to do what we can from our corner.

Sarah Chander is Senior Policy Advisor at European Digital Rights (EDRi). She leads EDRi’s policy work on artificial intelligence and connecting digital rights with wider movements for equality and justice.