Take #3: Building a Global, Inclusive Digital Rights Movement

By Nani Jansen Reventlow, 25th February 2020

Last week, DFF’s annual strategy meeting came back with a bang. Our third meeting was our biggest to date, and we were fortunate enough to be joined by members old and new from around the world. Attendees hailed from Argentina, the UK, Estonia, Serbia, Ireland, Bulgaria, Hungary, the US, the Netherlands, South Africa and beyond.

In three days of working sessions and consultations, we dove right to the heart of digital rights: from ongoing questions around AI and algorithms to emerging conversations, such as the field’s parallels with the climate struggle and labour rights.

Despite the gravity of challenges facing human rights in the current era, the experience of coming together to brainstorm and discuss ways forward was an inspiring one. By the time we’d wrapped up, the walls of our lovely venue in Village Berlin were plastered floor to ceiling in rainbow sticky notes that promised future collaboration on projects.

Continuing Conversations

In digital rights, some conversations crop up over and over. We discussed at length the rise in states’ use of facial recognition technology, homing in on cases stretching from Europe to China to Latin America. We examined the smart video surveillance system currently being rolled out in Belgrade, Serbia, and heard details about the evolving situation in the UK, where facial recognition has been permanently deployed by police in some regions.

The subject of algorithms and algorithmic decision-making was also omnipresent: we heard, for example, about a case being fought in Spain to demand transparency in the algorithms used by public authorities. Then, in a rich debate about filters, blocking and private censorship, we discussed potential solutions, ranging from reforming the AdTech business model to requiring platforms to obtain users’ consent before applying filters.

On Friday, we hosted a focused consultation session on AI and human rights, and how we can effectively work on litigation in this area. We asked questions including: what value can be added by technologists in this kind of litigation? Where are the knowledge gaps when it comes to AI/machine learning and the law? 

Emerging Perspectives

Fresh perspectives were shared as well. Climate change was a hot topic: with the environmental crisis at a tipping point, the junction at which climate issues and digital rights meet is hard to ignore. One conversation looked at how digital rights activists can borrow strategies from the climate change struggle, while another focused on the intersection between the two fields – including the targeted surveillance of climate activists, and the monitoring of energy consumption through smart meters.

The digital welfare state also proved itself an inescapable, and rapidly escalating, issue. We looked at the accelerating digitisation of social protection provision, and asked ourselves what tools or strategies we can adopt to challenge the technology that monitors, profiles and punishes one of society’s most vulnerable groups: welfare applicants. On Friday we hosted a fruitful in-depth consultation on developing a litigation strategy to tackle this rising problem. We tried to conceptualise and define the issue, while also mapping stakeholders already working in the area.

Zooming Out

As well as tackling digital rights’ challenges old and new, we took time to zoom out and consider the broader power structures at play. In light of DFF’s recent decision to focus on decolonising the field of digital rights, we discussed concrete steps for making that a reality – and, crucially, why it matters. Ideas for effecting change ranged from changing the way we write job specs when hiring new candidates to ensuring that we create space for discussions around decolonisation in the workplace.

Labour rights were also high on the agenda: we explored collective bargaining, which is particularly pertinent in the gig economy, and sought to address the working conditions of content moderators, who often work in extremely challenging circumstances.

Against the backdrop of these profoundly difficult human rights challenges, one topic resonated deeply in the room: burn-out. It’s no secret that work in the field can be mentally and emotionally taxing, and it was refreshing to see the prioritising of individual well-being and mental health.

Safe to say, we were left feeling inspired and galvanised. At DFF, we’ll be striving to harness this momentum: we’ll be organising follow-up focus meetings and running a blog series featuring new ideas shared at the meeting. The invaluable knowledge gained will inform and lead our work going forward – so watch this space.

Fighting for Algorithmic Transparency in Spain

By David Cabo, 17th February 2020

[Image: laptop in the dark, with lines of code on screen]

Civio is an independent, non-profit journalism organisation based in Madrid. We monitor public authorities, report to citizens, and promote true and effective transparency within public institutions.

After years of fighting for access to information and reporting to a general audience through our journalism, we have recently reached a new consensus: requesting, accessing and explaining data is no longer enough to oversee the public realm.

The use of secret algorithms by Spanish public institutions made us take a step further. One example is BOSCO, software created by the Spanish Ministry for the Ecological Transition to decide who is entitled to the so-called Bono Social de Electricidad – a discount on energy bills for at-risk citizens.

At first, Civio teamed up with the National Commission on Markets and Competition (CNMC) to create an app that would ease the process of applying for this discount, since the complexity of the process and a lack of information were preventing disadvantaged groups from applying.

Fighting for Transparency

After dozens of calls from wrongly dismissed applicants, our team requested information about BOSCO and its source code. In the documents shared with us, Civio found that the software was turning down eligible applicants. Unfortunately, we got no reply from the administration regarding BOSCO’s code.

The government, and subsequently the Council of Transparency and Good Governance, denied Civio access to the code, arguing that sharing it would result in a copyright violation. However, under the Spanish Transparency Law and intellectual property law, works produced by public administrations are not entitled to copyright protection.

“Being ruled through a secret source code or algorithm should never be allowed in a democratic country under the rule of law”

Civio believes it is necessary to make this kind of code public for a simple reason – as our lawyer and trustee Javier de la Cueva puts it, “being ruled through a secret source code or algorithm should never be allowed in a democratic country under the rule of law”.

Software systems like BOSCO behave, in practice, like laws, and therefore we believe they should be public. Only then can civil society have the tools to effectively monitor decisions taken by our governments. This is the reason why we have appealed the refusal of the Transparency Council in court.
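
To make the stakes concrete, here is a minimal, purely hypothetical sketch of a rule-based eligibility check in the style of BOSCO. Every criterion, name and threshold below is invented (obtaining the real rules is exactly what our case is about), but it shows how a single unpublished condition can silently reject applicants who meet the published criteria:

```python
from dataclasses import dataclass

# Hypothetical threshold: the real values used by BOSCO are not public.
INCOME_THRESHOLD = 11_000  # annual income cut-off, in euros

@dataclass
class Applicant:
    annual_income: float
    household_size: int
    id_validated: bool

def is_eligible(a: Applicant) -> bool:
    # "Published" criterion: income below a threshold scaled by household size.
    below_threshold = a.annual_income < INCOME_THRESHOLD * (1 + 0.3 * a.household_size)
    # Hidden implementation detail: applications whose ID could not be
    # validated are rejected outright, even when the income criterion is met.
    return below_threshold and a.id_validated

# An applicant with qualifying income is silently rejected:
print(is_eligible(Applicant(annual_income=9_000, household_size=2, id_validated=False)))
# -> False: invisible to the applicant unless the source code is public
```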

This is our first incursion into algorithmic transparency, and we are fully aware that it is going to take a long time. We believe BOSCO is a good case to start with because it is not as complex as other machine learning ‘black box’ systems, whose biases may be very difficult to unpick and identify. This first battle will reveal the limits of the Spanish Access to Information Law and allow us to prepare for more complex cases.

In order to get ready for the future, it is very useful for us to know about cases others have fought in the past, and in other contexts. While we were aware of ProPublica’s investigation into biased risk-prediction software used in US courts, we discovered many other instances at the Digital Freedom Fund and AI Now workshop in Berlin last year. This made it very clear to us that the problem of algorithmic accountability is vast in scope. It was particularly valuable to learn about cases in Europe, where the legal framework on privacy and data protection is closer to Spain’s.

Fighting Obscure Algorithms

Given the relevance of the issue for the well-being of civil society, our goal is to start developing the technical, organisational and legal skills needed to assess and control the increasing use of automated systems in public administrations.

In the words of Maciej Cegłowski: “Machine learning is like money laundering for bias. It’s a clean, mathematical apparatus that gives the status quo the aura of logical inevitability. The numbers don’t lie.”
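
The following toy sketch, using entirely synthetic data, illustrates the mechanism behind that remark: a model trained on biased historical decisions reproduces the bias through a correlated proxy feature, even though the sensitive attribute itself is never shown to the model.

```python
# Synthetic illustration of "money laundering for bias". No real system
# or dataset is modelled here.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)             # sensitive attribute (withheld from training)
postcode = group + rng.normal(0, 0.3, n)  # proxy feature correlated with group
income = rng.normal(0, 1, n)              # legitimate feature

# Historical labels encode bias: group 1 was approved half as often.
approved = ((income > 0) & ~((group == 1) & (rng.random(n) < 0.5))).astype(int)

X = np.column_stack([income, postcode])   # the model never sees `group`
model = LogisticRegression().fit(X, approved)
pred = model.predict(X)

for g in (0, 1):
    print(f"group {g}: predicted approval rate {pred[group == g].mean():.2f}")
# The gap persists in the model's output: bias in, bias out.
```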

…it is through algorithms that many decisions concerning the public are being taken

Algorithms, therefore, can be used to obscure major changes in policy adopted by administrations or the political class – including, for example, public policies that implement dramatic cuts in access to welfare services. So even as we continue to fight for transparency and our right to information, we should not ignore that it is through algorithms that many decisions concerning the public are being taken.

David Cabo is Co-Director of the Civio Foundation (Fundación Ciudadana Civio).

Image by Markus Spiske on Unsplash

Looking Back, Looking Forward: 2020 and Beyond

By Nani Jansen Reventlow, 13th February 2020

[Image: the DFF office, with a collection of "Digital rights are human rights" tote bags hanging on the wall]

As we prepare for our third annual strategy meeting, we look back at some highlights from the past year – and think about what’s in store for 2020.

*

GDPR Successes Trickle Through, and a Mixed Bag for Facial Recognition

2019 saw several promising outcomes for GDPR enforcement across Europe.

One such win occurred in Spain. When implementing the GDPR, the country had included a legal provision that allowed the profiling of individuals for electoral purposes. The so-called “public interest” exception allowed political parties to collect online data about citizens, which could then be used to contact them privately, including through SMS or WhatsApp. In May, however, the Spanish Constitutional Court overturned the exception, in what activists have deemed an important step forward for data protection law in Europe.

Another hopeful development transpired in Sweden, where the GDPR provided a powerful framework for the national data protection authority to oppose the use of facial recognition technology in schools. The technology was already being rolled out in one school to track children’s attendance. As this was found to violate several GDPR articles, the plan was scrapped and the municipality fined.

The issue of facial recognition reared its head again in a US and UK context, with mixed results

The issue of facial recognition reared its head again in a US and UK context, with mixed results. In the US, moratoriums on its use were imposed in several cities, including San Francisco, Oakland and Somerville. Some people even cited fears about the prospect of their cities turning into “police states”.

In the UK, the outcome was more chequered. Liberty lost a case challenging police use of facial recognition tools – despite the court acknowledging that they do, indeed, breach people’s privacy. The group, which is calling for a ban on the technology, is now appealing the decision.

Surging Algorithms and the Right to be Forgotten

Algorithms pose a rapidly growing challenge to human rights protection – something we saw evidence of in a case filed in France concerning university admissions. The national students’ union challenged universities’ use of algorithms to sort prospective students’ applications, demanding publication of the “local algorithms” being used to determine applicants’ success. The case reflects rising concerns about how algorithms are used – and a growing demand for transparency around them.

Meanwhile, the Court of Justice of the European Union ruled in a landmark case on the issue of search engine de-indexing, popularly known as “the right to be forgotten”. In a number of judgments in recent years, the court has ruled on the right of individuals to have their personal data removed from search engines. In September, the CJEU addressed the territorial scope of such rights, finding that while “EU law does not currently require de-referencing to be carried out on all versions of the search engine, it also does not prohibit such a practice”.

What’s Next? The “Digital Welfare State”, More Algorithms, and Battling AdTech

The digital welfare state has quickly become a hot topic in digital rights. We’ve already had one landmark ruling in 2020 in the Dutch SyRI case. SyRI is a system designed by the Dutch government to process large amounts of data about the public and to identify the people it deems most likely to commit benefit fraud. A court in The Hague recently concluded that the system’s use violated people’s privacy, marking an important step towards protecting some of society’s most vulnerable groups.
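
For readers unfamiliar with how such systems operate, here is a deliberately simplified sketch of a risk-scoring pipeline of the same general shape. The indicators, weights and records below are all invented; SyRI’s actual inputs and risk model were never disclosed, which was central to the criticism of the system.

```python
# A toy risk-scoring pipeline. Everything here is hypothetical.
records = [
    {"id": "A", "benefit_years": 1, "address_changes": 0, "late_filings": 0},
    {"id": "B", "benefit_years": 6, "address_changes": 3, "late_filings": 1},
    {"id": "C", "benefit_years": 2, "address_changes": 5, "late_filings": 0},
]

# Secret in practice: which indicators count, and by how much.
WEIGHTS = {"benefit_years": 0.4, "address_changes": 0.5, "late_filings": 0.8}

def risk_score(record: dict) -> float:
    return sum(weight * record[key] for key, weight in WEIGHTS.items())

# The highest-scoring people are flagged for fraud investigation.
flagged = sorted(records, key=risk_score, reverse=True)[:2]
print([r["id"] for r in flagged])  # -> ['B', 'C']
# Those flagged cannot tell which indicator tipped the score, or why.
```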

The digital welfare state has quickly become a hot topic in digital rights

We’re also expecting more activity in the area of algorithms. Activists in the UK have already criticised the immigration authorities’ use of algorithmic decision-making in visa applications, for example. At DFF, we’ll be further building on last year’s meeting on this topic, and working on organising an international meeting in the coming months.

Another critical issue we expect to see taking off this year is the fight against the AdTech industry. Many are already campaigning against the misuse of people’s personal data for online advertising. It’s a growing business that is, at the same time, facing a dramatic rise in opposition, including legal challenges.

Problems surrounding upload filters and other automated content moderation measures are also set to rise in prominence. The news that Ofcom will regulate online harms in the UK recently hit the headlines, and many are concerned that surveillance for national security purposes is increasingly at odds with people’s privacy rights.

Many core issues will naturally spill over from previous years and continue to be fought in 2020 – whether it’s mass surveillance, net neutrality, or big tech dominance. It’s bound to be a busy year, but at DFF we look forward to keeping that momentum rolling.

We are excited to be having conversations about all these issues and more at our strategy meeting next week. For those who cannot join us in Berlin in person: keep an eye out for updates on our blog and Twitter!