The Gender Divide in Digital Rights

By Nani Jansen Reventlow, 3rd March 2020

women protesters holding handmade signs, among them, "We Will Not Be Silenced"

March 8th marks International Women’s Day, a time when countries around the world seize the opportunity to celebrate womanhood, and to confront the ongoing struggles facing women globally.

It’s an opportune moment to celebrate the great achievements made by women in the digital rights field. We are fortunate as a field to have a relatively good gender balance, as well as many formidable women leaders. International Women’s Day (but actually, any day) is an occasion to mark and acknowledge their fantastic work, as well as the impressive and inspiring work of women academics, social scientists, technologists, lawyers and activists who support and build on the work of the field.

Over the last few decades, societies across the globe have made great strides towards gender equality, and this has coincided with a huge leap in technological advancement. However, we would be wrong to assume that the digital age has brought an end to bias and discrimination against women.

A quick survey of the power structures at play across key digital rights issues – from the gender-biased nature of certain technologies, to the deepening “digital divide” – throws the relevance of women’s struggle for gender equality into sharp relief. 

Is Technology Sexist? 

The fact that evolving technologies, such as AI and other automated systems, are sexist has almost become a truism in the digital rights field. Technology inevitably reflects the cultural biases of its creators and the societal context in which it is built, meaning that gender discrimination, conscious or otherwise, is routinely transposed to these digital systems. Sexism is subsequently reproduced, embedded, and even exacerbated by new technologies.

Real-life examples of this abound, from Amazon being forced to scrap an AI recruiting tool that favoured men, to an Apple credit card system being accused of offering women lower credit limits.
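
To make the mechanism concrete, here is a minimal, purely hypothetical sketch – not Amazon’s or Apple’s actual system – of how a naive model trained on skewed historical decisions ends up reproducing them:

```python
# Illustrative sketch only: how bias in historical training data can be
# reproduced by a naive model. All data and logic here are hypothetical.

# Historical hiring decisions, skewed against women.
history = (
    [("man", True)] * 80 + [("man", False)] * 20
    + [("woman", True)] * 30 + [("woman", False)] * 70
)

# "Training": estimate the historical hire rate per gender.
hire_rate = {}
for gender in ("man", "woman"):
    outcomes = [hired for g, hired in history if g == gender]
    hire_rate[gender] = sum(outcomes) / len(outcomes)

# "Prediction": recommend candidates whose group was historically favoured.
def recommend(gender, threshold=0.5):
    return hire_rate[gender] >= threshold

print(recommend("man"))    # True  – the past bias becomes the rule
print(recommend("woman"))  # False – candidates are screened out wholesale
```

Nothing in the code mentions sexism, yet the system faithfully automates the discrimination baked into its training data.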

Then there’s facial recognition, a burgeoning challenge to human rights and privacy the world over: software used to identify individuals by matching a biometric map of their face against those held in a database.

This technology has been proven to work with less precision on women and people of colour, resulting in a higher proportion of “false positives.” In other words, if you are a woman or a person of colour you are more likely to be incorrectly matched to someone in a database by facial recognition technology than if you were a white man. This is of particular concern when such technology is used to inform decisions made by law enforcement and other state authorities.
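
To illustrate what a disparity in “false positives” means in practice, here is a toy sketch with entirely made-up numbers – real systems and benchmarks are far more complex – showing how a single matching threshold can produce very different error rates across groups:

```python
# Illustrative sketch only: a toy calculation showing how one similarity
# threshold can yield different false-positive rates for different
# demographic groups. All scores below are hypothetical.

def false_positive_rate(scores, threshold):
    """Share of non-matching face pairs the system wrongly accepts."""
    return sum(s >= threshold for s in scores) / len(scores)

# Hypothetical similarity scores for pairs of *different* people
# (a perfect system would score all of these low).
non_match_scores = {
    "white men":       [0.21, 0.34, 0.18, 0.42, 0.55, 0.29],
    "women of colour": [0.48, 0.61, 0.37, 0.72, 0.58, 0.66],
}

THRESHOLD = 0.6  # pairs scoring above this are treated as a "match"

for group, scores in non_match_scores.items():
    rate = false_positive_rate(scores, THRESHOLD)
    print(f"{group}: false positive rate {rate:.0%}")
# The same threshold misidentifies the second group far more often.
```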

As well as the higher risk of misidentifying non-male, non-white individuals, the fact that facial recognition is increasingly used to monitor protests means that those exercising their right to free assembly have a greater likelihood of being targeted by this intrusive technology – including those fighting for social issues such as women’s rights. The deployment of such technology may deter individuals from attending and expressing themselves freely at such protests.

The Digital Divide

The “digital divide” was once used predominantly to describe the discrepancy between those who have access to the internet and those who do not. But it can also be understood as the widening chasm between the small minority who wield the potential of ever-more powerful technology, and the everyday person subject to its influence. Big Tech monopolises global markets, while states continue to expand their access to potentially dangerous technological tools.

The gender aspect of this “digital divide” should not be a surprise to anyone: Silicon Valley has a notorious “brogrammer” problem. To put this into perspective, consider that in 2018 Google’s workforce was only 30 per cent women (the company fared even worse when it came to other diversity figures), or that in 2016 six of the ten biggest tech companies did not have a single female executive.

The so-called digital welfare state, which refers to the growing use of automated decision-making in social services, also spells trouble for gender equality. In her book “Automating Inequality”, Virginia Eubanks argues that the digital welfare state leaves society’s most vulnerable as particular targets of AI and algorithms that are inherently prejudiced or in breach of human rights. Given that, in many countries, women in general, as well as doubly marginalised groups such as single mothers, are significantly more likely to experience poverty, the digitisation of welfare affects them disproportionately.

Intersectionality

Of course, most of these issues cannot and should not be isolated as gender-specific. The overlapping and compounding of prejudice across multiple social categories is an irrefutable feature of social inequality in today’s world.

Recently, I spoke at an event in Berlin that asked: is AI sexist? My answer was yes, absolutely. But what about it being racist, ableist, and hetero-centric? We need to broaden the question when we discuss bias in technology. And that goes for the multitude of other equality problems in the field, too. 

At DFF, we recently decided to begin a process of decolonisation in the digital rights field. Through this, we hope to go some way towards challenging troubling assumptions and prompting bigger, structural change.

Of course, by homing in on the challenges, we may sometimes overlook causes for optimism. Indeed, it should be clear to anybody working in the digital rights field that we are surrounded by trailblazing female role models. It’s also heartening that many of us are so ready and willing to work on bringing about change.

But this International Women’s Day remains an apt moment for reflection on the fact that systemic problems still do exist – and it’s the responsibility of all of us to continue tackling them. 

Photo by Michelle Ding on Unsplash

Take #3: Building a Global, Inclusive Digital Rights Movement

By Nani Jansen Reventlow, 25th February 2020

Last week, DFF’s annual strategy meeting came back with a bang. Our third meeting was our biggest to date, and we were fortunate enough to be joined by members old and new from around the world. Attendees hailed from Argentina, the UK, Estonia, Serbia, Ireland, Bulgaria, Hungary, the US, the Netherlands, South Africa and beyond.

In three days of working sessions and consultations, we dove right to the heart of digital rights: from ongoing questions around AI and algorithms to emerging conversations, such as the field’s parallels with the climate struggle and labour rights.

Despite the gravity of challenges facing human rights in the current era, the experience of coming together to brainstorm and discuss ways forward was an inspiring one. By the time we’d wrapped up, the walls of our lovely venue in Village Berlin were plastered floor to ceiling in rainbow sticky notes that promised future collaboration on projects.

Continuing Conversations

In digital rights, some conversations crop up over and over. We discussed at length the rising use of facial recognition technology by states, homing in on cases stretching from Europe to China to Latin America. We discussed the smart video surveillance system currently being rolled out in Belgrade, Serbia, while also hearing details about the evolving situation in the UK, where facial recognition has been permanently deployed by police in some regions.

The subject of algorithms and algorithmic decision-making was also omnipresent: we heard, for example, about a case being fought in Spain to demand transparency in the algorithms being used by public authorities. Then, in a rich debate about filters, blocking and private censorship, we discussed potential solutions, ranging from reforming the AdTech business model, to platforms requiring consent from users for filters.

On Friday, we hosted a focused consultation session on AI and human rights, and how we can effectively work on litigation in this area. We asked questions including: what value can be added by technologists in this kind of litigation? Where are the knowledge gaps when it comes to AI/machine learning and the law? 

Emerging Perspectives

Fresh perspectives were shared as well. Climate change was a hot topic: with the environmental crisis at a tipping point, the junction at which climate issues and digital rights meet is hard to avoid. One conversation looked at how digital rights activists can borrow strategies from the climate change struggle, while another focused on the intersection between the two fields – including the targeted surveillance of climate activists, and the monitoring of energy consumption through smart meters.

The digital welfare state also proved itself an inescapable, and rapidly escalating, issue. We looked at the rapid digitisation of social protection provision, and asked ourselves what tools or strategies we can adopt to challenge the technology that monitors, profiles and punishes one of society’s most vulnerable groups: welfare applicants. On Friday we hosted a fruitful in-depth consultation on developing a litigation strategy to tackle this rising problem. We tried to conceptualise and define the issue, while also mapping stakeholders already working in the area.

Zooming Out

As well as tackling digital rights challenges old and new, we took time to zoom out and consider the broader power structures at play. In light of DFF’s recent decision to focus on decolonising the field of digital rights, we discussed concrete steps for making that a reality – and, crucially, why it matters. Ideas for effecting change ranged from changing the way we write job specs when hiring new candidates to ensuring that we create space for discussions around decolonisation in the workplace.

Labour rights were also high on the agenda, and we explored the issue of collective bargaining, a particularly pertinent issue in the gig economy. We also sought to address the working conditions of content moderators, who often work in extremely challenging circumstances.

Against the backdrop of these profoundly difficult human rights challenges, one topic resonated deeply in the room: burn-out. It’s no secret that work in the field can be mentally and emotionally taxing, and it was refreshing to see individual well-being and mental health being prioritised.

Safe to say, we were left feeling inspired and galvanised. At DFF, we’ll be striving to harness this momentum: we’ll be organising follow-up focus meetings and running a blog series featuring new ideas shared at the meeting. The invaluable knowledge gained will inform and lead our work going forward – so watch this space.

Looking Back, Looking Forward: 2020 and Beyond

By Nani Jansen Reventlow, 13th February 2020

photo of DFF office with collection of "Digital rights are human rights" tote bags hanging on wall

As we prepare for our third annual strategy meeting, we look back at some highlights from the past year – and think about what’s in store for 2020.

*

GDPR Successes Trickle Through, and a Mixed Bag for Facial Recognition

2019 saw several promising outcomes for GDPR enforcement across Europe.

One such win occurred in Spain. When implementing the GDPR, the country had included a legal provision that allowed the profiling of individuals for electoral purposes. The so-called “public interest” exception allowed political parties to collect online data about citizens, which could then be used to contact them privately, including through SMS or WhatsApp. In May, however, the Spanish Constitutional Court overturned the exception, in what activists have deemed an important step forward for data protection law in Europe.

Another hopeful development transpired in Sweden, where the GDPR provided a powerful framework for the national data protection board to oppose the use of facial recognition technology in schools. The technology was already being rolled out in one school in order to track children’s attendance. As this was found to violate several GDPR articles, the plan was scrapped and the municipality fined.

The issue of facial recognition reared its head again in a US and UK context, with mixed results. In the US, moratoriums on its use were imposed in several areas, including in San Francisco, Oakland and Somerville. Some people even cited fears about the prospect of their cities turning into “police states”.

In the UK, the outcome was more chequered. Liberty lost a case challenging police use of facial recognition tools – despite the court acknowledging that they do, indeed, breach people’s privacy. The group, which is calling for a ban on the technology, is now appealing the decision.

Surging Algorithms and the Right to be Forgotten

Algorithms pose a rapidly growing challenge to human rights protection – something we saw evidence of in a case filed in France concerning university applications. The national students’ union challenged universities’ use of algorithms to sort prospective students’ applications. The union demanded publication of the “local algorithms” that were being used to determine applicants’ success, in a case that reflects rising concerns about how algorithms are used – and a growing demand for transparency around them.

Meanwhile, the Court of Justice of the European Union ruled in a landmark case on the issue of search engine de-indexing, popularly known as “the right to be forgotten”. In a number of judgments in recent years, the court has ruled on the right of individuals to have their personal data removed from search engines. In September, the CJEU ruled on the territorial scope of such rights, finding that while “EU law does not currently require de-referencing to be carried out on all versions of the search engine, it also does not prohibit such a practice”.

What’s Next? The “Digital Welfare State”, More Algorithms, and Battling AdTech

The digital welfare state has quickly become a hot topic in digital rights. We’ve already had one landmark ruling in 2020 in the Dutch SyRI case. SyRI is a system designed by the Dutch government to process large amounts of data about the public, and subsequently to identify those most likely to commit benefit fraud. A court in The Hague recently concluded that the system’s use was a violation of people’s privacy, marking an important step towards protecting some of society’s most vulnerable groups.

We’re also expecting more activity in the area of algorithms. Activists in the UK have already criticised the immigration authorities’ use of algorithmic decision-making in visa applications, for example. At DFF, we’ll be further building on last year’s meeting on this topic, and working on organising an international meeting in the coming months.

Another critical issue we expect to see taking off this year is the fight against the AdTech industry. Many are already campaigning against the misuse of people’s personal data for online advertising. It’s a growing business that is, at the same time, facing a dramatic rise in opposition, including legal challenges.

Problems surrounding upload filters and other automated content moderation measures are also set to rise in prominence. The news that Ofcom will regulate online harms in the UK recently hit headlines, while many are concerned that surveillance for the purposes of national security is increasingly at odds with people’s privacy rights.

Many core issues will naturally spill over from previous years and continue to be fought in 2020 – whether it’s mass surveillance, net neutrality, or big tech dominance. It’s bound to be a busy year, but at DFF we look forward to keeping that momentum rolling.

We are excited to be having conversations about all these issues and more at our strategy meeting next week. For those who cannot join us in Berlin in person: keep an eye out for updates on our blog and Twitter!