Transatlantic call series: machine learning and human rights

By Nani Jansen Reventlow, 15th October 2019

How can we minimise the negative impact of algorithms and machine learning on our human rights? What cases are litigators in the US and Europe bringing to take on this important issue?

Today’s fourth instalment in DFF’s transatlantic call series addressed machine learning and human rights. EFF kicked off the conversation by telling participants about their work on the use of machine learning in various aspects of the criminal justice system, including algorithms used to assess the risk of reoffending and to determine the guilt of an alleged offender. EFF spoke about an ongoing case (California v. Johnson) in which they filed an amicus brief, arguing that criminal defendants should be able to scrutinise algorithms that prosecutors have used to secure a conviction.

A common thread in EFF’s work in this area is the need to ensure that government use of algorithms in decision-making processes is as fair and transparent as possible. This is similar to the approach taken by PILP, who are challenging the use of “risk profiling” algorithms by government agencies in the Netherlands. These profiles are used to assess the likelihood of individuals committing fraud. The system, called “SyRI”, pools citizens’ data that the state has collected in a variety of different contexts, after which algorithms calculate whether certain citizens pose a “risk” of committing abuse, non-compliance or even fraud in the context of social security, tax payments, and labour law.

PILP shared how the SyRI system has a disproportionate impact on the poorest parts of the population. The UN Special Rapporteur on extreme poverty and human rights, Philip Alston, has submitted an amicus brief in the case, saying that the SyRI system poses “significant potential threats to human rights, in particular for the poorest in society”.

Following a further exchange on these cases and other work being done in this area, DFF also shared details of its ongoing projects on artificial intelligence and human rights. In November, DFF is organising a workshop together with the AI Now Institute to explore litigation opportunities to limit the negative impact of AI on human rights. DFF’s Legal Adviser Jonathan McCully is also working with a technologist, as part of a Mozilla Fellowship, to create two resources — one for lawyers and another for technologists, data scientists, and digital rights activists — that provide tools for working together effectively when taking cases against human rights violations caused by AI.

Our next transatlantic call will take place on 13 November and will focus on content moderation. It is not too late to join: get in touch to register your attendance!

Unlocking the strategic litigation opportunities of the GDPR

By Nani Jansen Reventlow, 27th September 2019

In May 2019, Access Now and noyb hosted a three-day meeting in Vienna to share knowledge and experiences on GDPR enforcement. The event covered several practical elements to take into account when litigating under the GDPR and considered the different avenues for bringing GDPR complaints.

This week, DFF hosted a follow-up meeting in Berlin, which offered an opportunity both to look at GDPR litigation at field level and to identify concrete options for GDPR enforcement. Twenty-one participants from across Europe, including litigators, academics and activists, discussed issues ranging from how to address the functioning of data protection authorities (DPAs) to using the GDPR to challenge the use of biometric data by private companies.

Following a mapping of current GDPR enforcement efforts, the meeting took a critical look at a nascent framework prepared by DFF for prioritising litigation goals under the GDPR. Participants then mapped priority litigation goals across a range of issues, such as the misuse of data by political parties, unlawful data processing by public services, and challenging “big tech” business models.

On the second day, participants formulated concrete case ideas distilled from the priority litigation goals and shared experiences of lessons learned in their GDPR work so far. A number of potential areas for follow-up were identified, which DFF is looking forward to supporting.

The meeting was energetically facilitated by Aspiration and hosted by the fantastic team at WE’RE ALL IN. Over the coming days, we will be publishing a number of guest posts to further highlight some of the conversations, so stay tuned for further updates.

If you are working on GDPR-related issues, have ideas for next steps or generally want to get involved, please get in touch!

Rebuilding the master’s house instead of repairing the cracks: why “diversity and inclusion” in the digital rights field is not enough

By Nani Jansen Reventlow, 2nd September 2019

Image: Paul Sableman, CC BY 2.0

Silicon Valley is not the only sector with a “white guy” problem: civil society struggles with this as well. Oddly, it wasn’t until I looked at the group photo taken at the Digital Freedom Fund’s first strategy meeting that I noticed it: everyone in the photo except for me was white. I had just founded a new organisation supporting strategic litigation on digital rights in Europe, and this had been our first field-wide strategy meeting, bringing together 32 key organisations working on this issue in the region. This was in 2018. In 2019, the number of participants had increased to 48, but the picture in the group photo was still pretty pale, with my organisation’s team accounting for two of the four exceptions to that colour palette. And while gender representation overall seemed fairly balanced, and there was a diverse range of nationalities present, some voices were noticeably absent from the room. For example, the overall impression among participants was that no one with a physical disability was attending.* It was clear: something needed to change.

In all fairness, the participants themselves had clocked this as well –– the issue of decolonising the digital rights field gained significant traction in the conversations over the course of those two days in February. I have been trying to find good statistics on what is popularly referred to as “diversity and inclusion” (and sometimes as “diversity, equity and inclusion”; I have fallen into that trap myself in the past when speaking about technology’s ability to amplify society’s power structures), both in the human rights field more widely and in the digital rights field specifically, but have failed. Perhaps I was not looking in the right places; if so, please point me in the right direction. The situation is such, however, that one hardly needs statistics to conclude that something is seriously amiss in digital rights land. A look around just about any digital rights meeting in Europe will clearly demonstrate the dominance of white privilege, as does a scroll through the staff sections of digital rights organisations’ webpages. Admittedly, this is hardly a scientific method, but sometimes we need to call it as we see it.

This is an image many of us are used to, and have internalised to such an extent that I, too, as a person who does not fit that picture, took some time to wake up to it. But it clearly does not reflect the composition of our societies. What this leaves us with is a watchdog that will inevitably have too many blind spots to properly serve its function for all the communities it is supposed to look out for. To change that, focusing on “diversity and inclusion” is not enough. Rather than working on (token) representation, we need an intersectional approach that is ready to meet the challenges and threats to human rights in an increasingly digitising society: challenges and threats that often disproportionately affect marginalised groups. Marginalisation is not a state of being; it is something done to others by those in power. Therefore, we need to change the field, its systems and its power structures. In other words: we need a decolonising process for the field and its power structures, rather than a solution focused on “including” people with disabilities, people from minority or indigenous groups, and members of the LGBTQI+ community in the existing ecosystem.

How do we do this? I don’t know, and I probably never will have a definitive answer to that question. What I do know is that the solution is unlikely to come from the digital rights field alone. It is perhaps trite to refer to Audre Lorde’s statement that “the master’s tools will never dismantle the master’s house” in this context, but if the current field had the answers and the willingness to deploy them, the field would look very different. Lorde’s words also have a lot to offer as a perspective on what we might gain from a decolonising process as opposed to “diversity and inclusion”. While the following quote focuses on the shortcomings of white feminism, it is a useful aid in helping us imagine what strengths a decolonised digital rights field might represent:

“Advocating the mere tolerance of difference between women is the grossest reformism. It is a total denial of the creative function of difference in our lives. Difference must be not merely tolerated, but seen as a fund of necessary polarities between which our creativity can spark like a dialectic. … Only within that interdependency of different strengths, acknowledged and equal, can the power to seek new ways of being in the world generate, as well as the courage and sustenance to act where there are no charters.”

The task of re-imagining and then building a new house for the digital rights field is clearly enormous. As digital rights are human rights and permeate all aspects of society, the field does not exist in isolation. Its issues therefore cannot be solved in isolation either –– there are many moving parts, and many of them will be beyond our reach as an organisation to tackle alone (and not just because DFF’s current geographical remit is Europe). But we need to start somewhere, and we need to get the process started with urgency. If we begin working within our sphere of influence and encourage others to do the same in other spaces, to join or to complement efforts, together we might just get very far.

My hope is that, in this process, we can learn from and build on the knowledge of those who have gone before us. Calls to decolonise the academic curriculum in the United Kingdom are growing louder, but are being met with resistance. Are there examples of settings in which a decolonising process has been successfully completed? In South Africa, the need to move away from the “able-bodied, hetero-normative, white” standard in the public interest legal services sector is referred to as “transformation”. And Whose Knowledge’s efforts to “radically re-imagine and re-design the internet” centre the knowledge of marginalised communities on the internet, looking not only at online resources such as Wikipedia, but also at digital infrastructure, privacy, surveillance and security. What lessons can we learn from those efforts and processes?

This is an open invitation to join us on this journey. Be our critical friend: share your views, critiques and ideas with us. What are successful examples of decolonising processes in other fields that the digital rights field could draw on? What does a decolonised digital rights field look like, and what can it achieve? Who will be crucial allies in making this succeed? How can we ensure that those currently being marginalised lead this effort? Share your views and help us think about this better, so we might start working on a solution that can catalyse structural change.

* As observation was the method used for this determination, it is difficult to comment on forms of representation that are less visible, such as religion, socioeconomic background, sexual orientation, etc.