How NGOs are Joining Forces Against Adtech

By Gro Mette Moen, 19th March 2020

Pole with sticker saying 'Big Data is Watching You'

As the General Data Protection Regulation approaches its second anniversary, it has been disappointing to see the lack of strong enforcement against adtech privacy violations.

While data protection authorities seem to have found many violations too big to handle, NGOs have joined forces, among other places at DFF’s strategy meeting in February.

In January 2020, the Norwegian Consumer Council published a report called “Out of Control – How consumers are exploited by the online advertising industry”, covering how apps on our phones share personal data with potentially hundreds of obscure companies. These companies harvest data to create profiles of individuals, which are then used to serve targeted advertising. The same profiles are also used for purposes such as discrimination, manipulation, and exploitation.

Alongside the report, we filed complaints against six companies for breaching the GDPR, including five large adtech companies.

However, we were far from alone. We published ‘Out of Control’ across the world, together with a large number of other consumer, digital rights, and civil rights groups in Europe and the United States. BEUC, the European consumer rights umbrella organisation, coordinated the European action, and many of the participating organisations are members of the Transatlantic Consumer Dialogue (TACD), Consumers International and the human rights umbrella Liberties. In total, 43 organisations in 20 countries took part, sending letters to several data protection authorities. Together we have achieved media coverage in more than 70 countries.

Our research was inspired by a vast digital rights network, including previous work by Privacy International, the Panoptykon Foundation and the Open Rights Group. Many organisations also provided important input to the research, and we worked in particular with the cybersecurity company Mnemonic, the digital rights organisation None of Your Business (noyb) and the researcher Wolfie Christl of Cracked Labs.

Based on discussions at events such as the DFF strategy meeting in February, it is clear that many of the issues reach beyond privacy violations. In the future, we need to push for enforcement, looking across silos to address the combination of competition, privacy and consumer rights violations.

Another important topic is how we can address the responsibilities of publishers and advertisers. Publishers, advertisers and consumers/citizens are all cheated by the adtech industry in different ways. This means that we might have common interests.

Karolina Iwańska from the Panoptykon Foundation describes the dilemmas facing legitimate media in the article “10 reasons why advertising is broken”. Since a large portion of advertising money goes to adtech intermediaries, and because the adtech industry is dominated by only a few monopolistic actors, legitimate media are locked into a less-than-fruitful business relationship with companies such as Google.

There are also widespread issues with ad fraud, where advertisers pay large sums of money to adtech companies only to have their ads shown to bots instead of human beings.

We have seen some advertisers address the problematic aspects of adtech, both in dialogue with NGOs and in industry forums. For example, the head of the Norwegian branch of the World Federation of Advertisers recently published an opinion piece in the industry magazine Kampanje, predicting that advertisers will demand higher privacy standards from the adtech industry in the future. He calls on advertisers to become “the change you wish to see”, using their purchasing power to create room for alternative business models to grow. In other words, these issues can be addressed both through enforcement and through industry pressure.

In the meantime, we hope that further collaborative work will help push relevant authorities to protect consumers from commercial surveillance. By extension, this may help bring about a better and safer digital world, a world that is no longer driven by all-consuming tracking and profiling.

The adtech industry should look out, because when the NGO sector stands together, we are a force to be reckoned with.

Gro Mette Moen is the acting director of digital policy in the Norwegian Consumer Council.

Photo by ev on Unsplash

Turning Words Into Action: What Happens After the Strategy Meeting?

By Nani Jansen Reventlow, 12th March 2020

Coloured sticky notes from the strategy meeting

This February, we hosted our third annual strategy meeting, bringing together a group of 60 actors from across Europe and beyond to discuss recent developments in digital rights, strategise on new initiatives, and reflect on how we can better work together as a field.

Besides offering an opportunity for participants to interact in person with their peers, the strategy meeting sparks new action and informs DFF’s activities. How? Let’s take a look.

Focus Areas for Grants and Support

DFF’s thematic focus areas were determined based on the priorities expressed by the field. Following a strategy process that commenced before DFF’s formal founding, the first annual strategy meeting was an opportunity to collectively take stock of the state of play for digital rights in Europe and of the priorities of key actors in the field.

The mappings and conversations at the meeting were then distilled by the DFF team into three thematic focus areas for its work: privacy and data protection; the free flow of information online; and transparency, accountability and adherence to human rights standards in the design and use of technology.

Whether these thematic focus areas continue to reflect the field’s needs and priorities is tested on a continuous basis, through individual exchanges, litigation meetings, and developments in DFF’s grantmaking work. But the annual strategy meeting remains a key moment to verify that we are still providing support to the field with the right focus areas in mind.

Skill Building and Litigation Sessions

In 2018, we organised two strategic litigation retreats together with SHARE Foundation to help litigators sharpen their strategic litigation skills and develop case ideas into a concrete strategic litigation plan.

During the strategy meeting earlier that year, participants had identified a need for more skill building and sharing around strategic litigation: people wanted to exchange with peers on specific casework and learn from each other.

Following the strategy meeting, a number of follow-up calls and exchanges with members of our network helped determine in more detail what would be most useful, and on that basis the programme for the retreats was created. The two retreats in 2018 were positively received, with requests for new instalments, which DFF plans to organise in the autumn of 2020.

A number of cases also sprang from the retreats, some of which DFF was given the opportunity to support.

Strategy meetings have also led to litigation meetings focused on specific thematic work. The GDPR meeting organised by Access Now and noyb in May 2019, sponsored by DFF, is one example, as is the training on using the competition law framework to advance digital rights, held in December 2019. DFF’s current work on developing a litigation strategy on the “digital welfare state” follows from the consultation UN Special Rapporteur Philip Alston held for his thematic report on the issue. We hosted this consultation as a side event to the 2019 strategy meeting.

The need for litigation focused on the negative human rights impact of artificial intelligence and automated decision-making also emerged from that strategy meeting; in addition to a litigation meeting co-organised with the AI Now Institute in November 2019, it saw concrete follow-up during the most recent meeting.

Resources for Strategic Litigators

Several resources have been developed based on conversations at the strategy meetings. The Model Ethical Funding Policy followed from discussions on the problematic aspects of funding digital rights work. A Short Guide to Competition Law for Digital Rights Litigators was created after the report-out of a small group discussion explicitly requested more training and resources on using the competition law framework to advance digital rights.

We continue to listen to discussions during our strategy meetings to get a sense of what resources or tools we can help build to better support the field.

Strategic Cases

Last, but definitely not least, the strategy meetings have been an opportunity for litigators to connect and explore ways to collaborate on cases, either by sparking new ideas for future work or finding allies for existing projects.

One piece of cross-border litigation was sparked at the 2018 strategy meeting, when representatives from Gesellschaft für Freiheitsrechte in Germany and epicenter.works in Austria had a conversation about the EU Passenger Name Record Directive. That conversation led to joint action to challenge the Directive on data protection grounds, which you can read more about here.

The NGOs leading the challenge to the SyRI risk assessment system in the Netherlands, PILP and Platform Bescherming Burgerrechten, found an ally for their case in 2019, when they invited UNSR Philip Alston to submit an amicus brief. With success: the case was recently won.

Another example is the organisations working on challenging the adtech industry, who updated the field on their ongoing work and held space for multiple strategising conversations at the 2019 strategy meeting; these conversations then fed back into the litigation work they were developing.

What’s Next?

The February strategy meeting ventured into previously unexplored territory on a number of fronts, amongst others by looking at what we could learn from the work in other regions, such as Latin America, and exploring parallels in tactics with climate change litigation.

However, the conversations at the meeting also showed that, overall, the thematic priorities of the field remain focused in the same areas as before. This makes sense: these are not minor, short-term issues, but ones that require a sustained and concerted effort to tackle.

The thematic conversations on, amongst others, AI and human rights and the digital welfare state will feed into the dedicated convenings that will take place on these topics in the spring and summer. Further activities will surely follow from that.

And what about cases? We are certain new collaborations have emerged and fresh ideas were brought to existing projects. We look forward to highlighting them here in the months to come!

The Gender Divide in Digital Rights

By Nani Jansen Reventlow, 3rd March 2020

March 8th marks International Women’s Day, a time when countries around the world seize the opportunity to celebrate womanhood and to highlight the ongoing struggles facing women globally.

It’s an opportune moment to celebrate the great achievements made by women in the digital rights field. We are fortunate as a field to have a relatively good gender balance, as well as many formidable women leaders. International Women’s Day (but actually, any day) is an occasion to mark and acknowledge their fantastic work, as well as the impressive and inspiring work of the women academics, social scientists, technologists, lawyers and activists who support and build on the work of the field.

Over the last few decades, societies across the globe have made great strides towards gender equality and this has coincided with a huge leap in technological advancement. However, we would be wrong to assume that the digital age has brought an end to bias and discrimination against women.

A quick survey of the power structures at play across key digital rights issues – from the gender-biased nature of certain technologies, to the deepening “digital divide” – throws the relevance of women’s struggle for gender equality into sharp relief. 

Is Technology Sexist? 

The fact that evolving technologies, such as AI and other automated systems, are sexist has almost become a truism in the digital rights field. Technology inevitably reflects the cultural biases of its creators and the societal context in which it is built, meaning that gender discrimination, conscious or otherwise, is routinely transposed to these digital systems. Sexism is subsequently reproduced, embedded, and even exacerbated by new technologies.

Real life examples of this abound, from Amazon being forced to scrap an AI recruiting tool that favoured men, to an Apple credit card system being accused of offering women lower credit limits.
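
To see how such bias can be transposed into a system that never mentions gender at all, consider a deliberately naive sketch in Python (the data and the scoring rule are invented for illustration; this is not Amazon’s actual system). A toy CV screener learns word weights from past hiring decisions, and because those past decisions were biased, a gender-marking word inherits a negative weight:

from collections import Counter

# Invented historical data: human reviewers hired the first three CVs
# and rejected the last two, which happen to contain the word "women's".
past_cvs = [
    ("captain of chess club", True),
    ("captain of football team", True),
    ("member of robotics society", True),
    ("captain of women's chess club", False),
    ("member of women's coding group", False),
]

hired = Counter(w for text, ok in past_cvs if ok for w in text.split())
rejected = Counter(w for text, ok in past_cvs if not ok for w in text.split())

def score(cv):
    # Each word gains a point per appearance among hires and loses one
    # per appearance among rejections; gender is never referenced.
    return sum(hired[w] - rejected[w] for w in cv.split())

print(score("captain of chess club"))          # 2
print(score("captain of women's chess club"))  # 0

Two CVs describing the same skill receive different scores because one contains the word “women’s”. The code is gender-blind; the bias arrives entirely through the training data.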

Then there’s facial recognition, a burgeoning challenge to human rights and privacy the world over: software used to identify individuals by matching a biometric map of their faces against those in a database.

This technology has been proven to work with less precision on women and people of colour, resulting in a higher proportion of “false positives.” In other words, if you are a woman or a person of colour you are more likely to be incorrectly matched to someone in a database by facial recognition technology than if you were a white man. This is of particular concern when such technology is used to inform decisions made by law enforcement and other state authorities.
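
To make the false positive mechanism concrete, here is a minimal sketch in Python (the cosine measure is a standard similarity metric, but the threshold and the toy embeddings are invented for illustration and do not come from any real system). A matcher declares a hit whenever two face embeddings are similar enough, so a group whose faces the model has learned to separate poorly will generate more false hits at the same threshold:

import math

THRESHOLD = 0.95  # invented cutoff for declaring two faces "the same person"

def cosine(a, b):
    # Standard cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def false_positive_rate(probes, watchlist):
    # Fraction of comparisons between *different* people that the
    # threshold rule wrongly declares to be a match.
    wrong = total = 0
    for pid, p in probes:
        for wid, w in watchlist:
            if pid != wid:
                total += 1
                wrong += cosine(p, w) > THRESHOLD
    return wrong / total if total else 0.0

# Toy embeddings: group A's vectors are well separated, group B's are
# bunched together, the analogue of a model trained mostly on group A.
group_a = [(1, [1.0, 0.1, 0.0]), (2, [0.1, 1.0, 0.0]), (3, [0.0, 0.1, 1.0])]
group_b = [(4, [1.0, 0.8, 0.7]), (5, [0.9, 1.0, 0.8]), (6, [0.8, 0.9, 1.0])]

print("group A false positive rate:", false_positive_rate(group_a, group_a))  # 0.0
print("group B false positive rate:", false_positive_rate(group_b, group_b))  # 1.0

At the same threshold, every cross-person comparison within group B is wrongly declared a match while none are in group A: the structural shape of the disparity described above.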

As well as the higher risk of misidentifying non-male, non-white individuals, the fact that facial recognition is increasingly used to monitor protests means that those exercising their right to free assembly have a greater likelihood of being targeted by this intrusive technology – including those fighting for social issues such as women’s rights. The deployment of such technology may deter individuals from attending and expressing themselves freely at such protests.

The Digital Divide

The “digital divide” was once used predominantly to describe the discrepancy between those who have access to the internet and those who do not. But it can also be understood as the widening chasm between the small minority who wield the potential of ever-more powerful technology, and the everyday person subject to its influence. Big Tech monopolises global markets, while states continue to grow their access to potentially dangerous technological tools. 

The gender aspect of this “digital divide” should not be a surprise to anyone: Silicon Valley has a notorious “brogrammer” problem. To put this into perspective, consider that in 2018 Google’s workforce was only 30 per cent women (the company fared even worse when it came to other diversity figures), or that in 2016 six of the ten biggest tech companies did not have a single female executive.

The so-called digital welfare state, which refers to the growing use of automated decision-making in social services, also spells trouble for gender equality. In her book “Automating Inequality”, Virginia Eubanks argues that the digital welfare state leaves society’s most vulnerable as particular targets of AI and algorithms that are inherently prejudiced or in breach of human rights. Given that, in many countries, women in general, as well as doubly marginalised groups such as single mothers, are significantly more likely to experience poverty, the digitisation of welfare affects them disproportionately.

Intersectionality

Of course, most of these issues cannot and should not be isolated as gender-specific. The overlap and piling up of prejudice across multiple social categories is an irrefutable feature of social inequality in today’s world.

Recently, I spoke at an event in Berlin that asked: is AI sexist? My answer was yes, absolutely. But what about it being racist, ableist, and hetero-centric? We need to broaden the question when we discuss bias in technology. And that goes for the multitude of other equality problems in the field, too. 

At DFF, we recently decided to begin a process of decolonisation in the digital rights field. Through this, we hope to go some way towards re-examining troubling assumptions and prompting bigger, structural change.

Of course, by homing in on the challenges, we may sometimes overlook causes for optimism. Indeed, it should be clear to anybody working in the digital rights field that we are surrounded by trailblazing female role models. It’s also heartening that many of us are so ready and willing to work on bringing about change.

But this International Women’s Day remains an apt moment for reflection on the fact that systemic problems still do exist – and it’s the responsibility of all of us to continue tackling them. 

Photo by Michelle Ding on Unsplash