Scaling digital rights work in Europe and further strengthening the field

By Nani Jansen Reventlow, 27th February 2019

Last week, experts, activists and litigators from 48 organisations working on digital rights across Europe gathered in Berlin for DFF’s second annual strategy meeting. The gathering built on the important work done in 2018 – set in motion by a meeting organised by Vera Franz of OSF in 2016 – by increasing not only the geographical, but also the thematic diversity amongst participants. Organisations from outside North-West Europe were better represented than previously, and both more specialised organisations – working on issues ranging from children’s rights to prisoners’ rights – and more “traditional” human rights NGOs took part.

Participants seized the rare opportunity of having such a breadth and high level of expertise in one room to jointly think about how we can strengthen digital rights at field level and also do deep work on the crucial issues that are on the digital rights agenda today.

The meeting, held at Studio Chérie in Neukölln under the energetic facilitation of Gunner from Aspiration, kicked off with a session in which we took stock of the field by surveying key developments over the past year. Great work was highlighted in smaller group conversations, including on the first defamation case for hyperlinking at the European Court of Human Rights, police use of facial recognition technology, the Telegram case in Russia, and experiences with filing the first GDPR complaints.

Having gotten a good sense of what everyone was working on, we moved straight into the question of how we can sustain and scale digital rights efforts. How can we tell better stories about our work to different audiences and how do we run strong campaigns? What are the synergies we can create between advocacy and litigation? And how can we make sure our work is funded in an ethical way?

These conversations also took a deep dive into how we can build a fierce and resilient digital rights field, looking both at the role technologists play in the fight for digital rights and the need to decolonise the digital rights field, which currently lacks diversity in many respects.

In addition to the field level questions, a lot of deep thematic work was conducted in the course of the two days. AI, the GDPR and especially AdTech, net neutrality and copyright were clearly in focus.

The strategy meeting was followed by an additional day during which the UN Special Rapporteur on extreme poverty and human rights held a consultation for his forthcoming thematic report on the digital welfare state and its implications for the human rights of poor and vulnerable individuals. Close to 30 organisations provided input on issues including the use of digital technologies in welfare systems across Europe, efforts to address any negative impact, and the obstacles to doing this effectively.

The meeting left us feeling even more grateful for working with such an impressive network of organisations and individuals. While the challenges at times may seem endless, seeing this level of dedication and commitment to the greater good is inspiring. We look forward to continuing to learn from all of them and providing our support where we can.

Over the coming weeks, we will publish some further reflections on the outcomes of the meeting, including from the participants themselves, and address some of the plans we have to follow up on the conversations that took place.

Taking stock of 2018, looking forward to what lies ahead

By Nani Jansen Reventlow, 18th February 2019

As we are preparing to engage in conversation with old and new friends at our annual strategy meeting this week, it is a good time to take stock of the developments we have seen since our first meeting in 2018 and look ahead at what is still to come.

One of the most prominent developments in 2018 was of course the entry into force of the GDPR, which led to some early wins as well as promising initiatives pushing for greater protection of our personal data. We have seen the first claims being brought against major online platforms over how they collect data subject consent, initiatives being developed to take on the AdTech ecosystem, and the first GDPR fines being imposed by data protection authorities against corporations such as Google. However, much of the potential of the Regulation remains under-explored, not least in the context of the much-discussed realm of AI, in particular looking at the GDPR’s restrictions on decisions based solely on automated processing, such as profiling. We look forward to supporting various initiatives to enforce the rights provided under the GDPR through cases that can set precedents to help clarify its scope of protection.

Looking at the important issue of surveillance, the ECtHR judgment in the case of Big Brother Watch and Others v. United Kingdom (also known as the “10 NGOs” case) resulted in a partial win. The Court found that aspects of the UK bulk surveillance regime, including the authorities’ access to data held by communications service providers, were incompatible with Articles 8 and 10 of the European Convention on Human Rights, which protect the rights to privacy and freedom of expression. Nevertheless, the parties pursuing the case are hoping that the European Court’s Grand Chamber — which recently accepted the case for review — will reconsider other aspects of the case, such as the finding that, in principle, bulk surveillance was compatible with the Convention and that the intelligence sharing regime did not violate Articles 8 and 10.

The European Court of Human Rights also handed down, in Magyar Jeti v. Hungary, its first judgment on liability for hyperlinking in defamation cases. This is a strong precedent that will help safeguard the free flow of information online by protecting website publishers from absolute liability where they merely hyperlink to defamatory content on another site.

Looking ahead at 2019, we expect a great deal of activity around AI, algorithmic decision-making, machine learning and any of the other headers under which the “next frontier” of digital rights is currently being discussed. We should be prepared for some of the battles to be fought in what perhaps aren’t conventionally considered to be digital rights fields, such as labour law, health care, and the like. The Houston Teachers case in the US, which set an important precedent around the due process concerns raised by the use of algorithms in the employment context, is an important example in that regard. One of the issues we hope to explore this week during the strategy meeting is therefore how we can create better connections between the “digital” rights field, human rights field, and others who will likely share the frontlines with us in fighting for our rights in the digital sphere.

At DFF, we are looking forward to facilitating further dialogue across the full human rights spectrum when it comes to the digital sphere; supporting diversification (or decolonisation — more on that is to follow) and general strengthening of the field by, among other things, developing strategic litigation toolkits, exploring greater connectivity between the field and academia, and of course supporting amazing cases and pre-litigation research.

As always, your views and input are invaluable to us, so even if you are not able to join us in Berlin this week, please get in touch to share your ideas, comments and suggestions with us. We look forward to hearing from you as we continue our support to those working to advance digital rights in Europe.

Future-proofing our digital rights and making sure everyone is included

By Rejo Zenger, 11th February 2019

When fighting for the protection of civil rights in the digital domain, it is easy to get trapped in a reactive modus operandi. Digital rights organisations are underfunded and overburdened. At any given time, there is a surplus of legislative proposals delivering yet another attack on freedom of speech or interfering with the right to a private life. And while online information and communication technologies/services increasingly function as intermediaries for all that we do in life, human rights are rarely at the heart of their policies.

Being forced into a reactive mode comes with consequences: it is rare that civil society is able to set its own agenda in advance. As a result, when legislative proposals lack vision and are fragmentary, so is civil society’s response. To break free from this toxic trap, brainstorming sessions such as DFF’s workshop “Future-Proofing our Digital Rights” are of vital strategic importance. Such sessions allow digital rights activists and experts to step away from their daily realities and start envisioning the future they want and what it will take to get there.

Having a reactive approach also means that many of the organisations focused on general privacy or free speech concerns (such as Bits of Freedom) are unable to pay sufficient attention to issues from the perspective of vulnerable or marginalised groups in society, whose voices are already suppressed in an environment that is not interested in their needs. When digital rights organisations are caught up in fighting for the privacy or free speech rights of all internet users, the perspectives of these particular groups are easily overlooked. This becomes even more painful when you consider that these citizens are the first to experience the negative consequences of legislation that does not respect digital rights.

One of the sessions at DFF’s workshop asked participants to imagine a more diverse and inclusive future and how technology plays a role in this future. A complex topic with a multitude of nuances, all participants agreed. What does such a future look like? How do we prevent the creation of an environment where people feel free to harass and abuse others, causing them to stay away? How does that digital environment differ from the public space in the real world? What design allows for all groups of people, regardless of their position in society, to be included? Participants not only found it difficult to envision such a future in light of current trends, but also found it hard to identify a working strategy. Should we focus on mitigating the risk of people being targeted, should we help them empower themselves, or both? And how do we educate those who are unaware about the issues faced by minorities and vulnerable groups?

A necessary and obvious, yet easy to ignore, approach to the envisioned future is to collaborate with activist groups working on topics such as sexism and racism, as well as those that work on feminist causes. Only through speaking with, instead of about, at-risk communities will digital rights activists be able to really understand how the architecture of our digital environment, company policies and legislative proposals affect the rights of those that are impacted most frequently and most severely. And only through these collaborations will we be able to adequately demonstrate the urgency of the issues that need to be addressed.

Another outcome of the brainstorm: we need to rethink how, as a society, we deal with hateful speech online. On the one hand, participants felt we need to fight the so-called filter bubbles, which present users only with information that aligns with their existing views and prevent them from being confronted with contrasting ones. On the other hand, some recognised the need to empower users to make their own decisions. Right now, important decisions on what one is allowed to publish and what one will see are in the hands of corporations whose incentives hardly line up with those of their users – and especially not those in marginalised and vulnerable groups.

About the author: Rejo Zenger works as a policy advisor at the Dutch digital civil rights organisation Bits of Freedom. He focusses on the power of companies such as Google and Amazon and their impact on citizens’ freedoms.