DFF and OXIL to dive into the future of digital rights at MozFest

By Stacie Hoffmann, 21st October 2019

Events over the past few years have highlighted the extent to which technology is becoming seamlessly integrated into our daily lives. The emergence of self-driving cars, automated systems, and everyday items supported by the Internet of Things (IoT) illustrates this, and examples of what can happen when things go wrong range from the Cambridge Analytica fallout to a number of attacks on our internet infrastructure.

The data, networks, and systems that underpin these technological advances are already impacting our digital rights. But, in the future, what might these rights look like if, for example, algorithmic “black boxes” are the tools that govern the world around us? How do we protect our rights and those of the most vulnerable in our society?

While many of us are engaged in fighting today’s battles for digital rights – and preparing for tomorrow’s – the Digital Freedom Fund (DFF) and Oxford Information Labs (OXIL) are mapping issues that digital rights defenders might come up against in five to ten years’ time and the potential strategies that could be adopted now to protect our digital rights in the future.

In September 2018, DFF hosted a “Future-proofing our Digital Rights” workshop in Berlin. The workshop resulted in a number of insights that were summarised in a post-event blog series, alongside which DFF published a set of essays that examined specific future risks to our digital rights and what we might do now to prepare ourselves for them.

Ingrida Milkaite and Eva Lievens (Ghent University, Belgium) looked at the risks posed to children by the increasing collection of their personal data through smart devices created specifically for them – the “Internet of Toys.” They noted that we could push authorities to issue clearer guidance on how data protection law can be used to safeguard children’s rights. Sheetal Kumar (Global Partners Digital) explored the expected rise in IoT devices – with a forecast of 30 billion connected devices by 2023 – and the specific vulnerabilities this will expose individuals to when it comes to government hacking and cyberattacks. She suggested that civil society could document hacking by government agencies and the legal frameworks used to justify it, and observed that global norms could be relied on to limit the risks posed by the rise of the IoT.

Steve Song (Mozilla Fellow) discussed the potential of a “splinternet” resulting not only from government initiatives (e.g. China’s Great Firewall), but also from companies’ ownership of the physical infrastructure underpinning the internet. Song noted that existing laws on competition, consumer rights, and data protection could be leveraged to secure a robust marketplace instead of a “splinternet” monopolised by large platforms. Stacie Hoffmann (OXIL) highlighted a number of evolving digital divides, shaped not only by access to technologies but also by their policy environments, and noted that what gets measured gets done: a call for meaningful data collection that can support data-driven policy to prevent growing digital divides, alongside the need to build digital skills capacity across society. Iris Lapinski, now the Chairwoman of Apps for Good, discussed a future scenario in which artificial intelligence decides our rights, and considered three stages to managing the changes and challenges presented by this dystopian future.

A year on from these conversations, a number of these future threats have become a reality. To help prepare for the challenges still on the horizon, we would like to continue the conversation about the digital rights threats we may face in five to ten years’ time and the steps we could take now to fight them or protect ourselves against them.

DFF and OXIL are very excited to bring this conversation to MozFest in London this week. The Future-Proofing our Digital Rights session will be held on Sunday 27 October 2019, between 11:00am and 12:00pm, at Ravensbourne University (Room 804 – Level 8). Anyone interested in looking ahead at the opportunities, threats and challenges related to digital rights and discussing how we can prepare ourselves for the future is welcome to join. The workshop will be interactive, and all areas of expertise and interests are welcome. We hope to see you there.

For those who cannot make it to MozFest, we plan to share some of the issues and discussions that emerge during the session in a blog series over the coming months. If you would like to share your own views in a blog post in this series, please do get in touch!

About the author: Stacie Hoffmann is a cyber security and policy expert at Oxford Information Labs who works at the intersection of technology and policy. 

Testing a Framework for Setting GDPR Litigation Priorities

By Jonathan McCully, 18th October 2019

Data protection rights are engaged in nearly every aspect of our lives, from taking public transport to registering to vote, exercising, and shopping. Under the recent General Data Protection Regulation (GDPR), there are significantly more opportunities to take cases that vindicate data protection rights across these different domains.

However, with limited capacity and resources, it can be difficult to know where to begin when it comes to enforcing this important piece of legislation. When developing a strategy for GDPR litigation, a number of questions arise: What are the most pressing areas for GDPR work? What factors should we look at when setting litigation priorities? What metrics can we use, as a field, to pinpoint shared litigation goals that we can collectively work towards?

At DFF’s meeting last month on unlocking the GDPR’s strategic litigation opportunities, participants discussed and critiqued a nascent framework that could help in answering some of these questions. The framework emerged from conversations DFF held leading up to the event with litigators seeking to leverage the GDPR in their digital rights litigation. It is intended to act as a tool to help establish shared GDPR litigation priorities.

Using a revised version of Maslow’s Hierarchy of Needs, the framework applies a pyramid of “human needs” to the GDPR context to help establish the most pressing issues for GDPR litigation. At the bottom of the pyramid sit the higher priority needs: “physiological needs” (e.g. access to food, water, shelter, and warmth), “safety needs” (e.g. job security, adequate resources, health, and safe environments), and “social connection and social collaboration” (e.g. teamwork, the feeling of belonging, and family and community ties).

A Draft Framework for GDPR Litigation Priorities Based on Maslow’s Hierarchy of Needs

What do these needs have to do with the GDPR? Some of these fundamental needs can be served through the protection and promotion of data protection rights alongside other areas of human rights enforcement. For instance, some unlawful data processing practices may be hindering, deterring or even preventing individuals from accessing social security provisions and similar public services that serve a fundamental physiological need, while others may contribute towards workplace surveillance, discrimination, and unfair hiring or dismissal practices that threaten job security, access to resources, and the forging of social connections. Visualising where certain GDPR issues fall across this spectrum of human needs can be a useful way of establishing which issues are of highest priority in our broader push to promote and protect the right to data protection.
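
To make the prioritisation idea concrete, here is a minimal, purely illustrative sketch in Python of how such a pyramid could be encoded to rank GDPR issues by the most fundamental need they touch. The tier ordering, example issues, and ranking rule are all assumptions made for illustration; they are not part of the framework discussed at the meeting.

# Hypothetical sketch: encoding a needs pyramid to rank GDPR issues.
# Tiers are ordered from the base of the pyramid (highest priority)
# upwards; the example issues below are invented for illustration.

NEED_TIERS = ["physiological", "safety", "social connection"]
PRIORITY = {need: rank for rank, need in enumerate(NEED_TIERS)}

# Hypothetical GDPR issues, tagged with the needs they affect.
issues = [
    ("ad tracking without valid consent", ["social connection"]),
    ("workplace surveillance enabling unfair dismissal", ["safety", "social connection"]),
    ("opaque profiling blocking access to social security", ["physiological"]),
]

def priority_rank(issue):
    """Rank an issue by the most fundamental need it touches."""
    _, needs = issue
    return min(PRIORITY[n] for n in needs)

for description, needs in sorted(issues, key=priority_rank):
    print(f"{description}  ->  {', '.join(needs)}")

Running the sketch lists the social security issue first, reflecting the intuition in the framework that harms to more fundamental needs deserve higher litigation priority.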

During the meeting, participants reiterated that it would be useful to have a workable framework that could help establish shared goals and priorities for GDPR litigation. A number of participants observed that the draft framework was a useful way of linking GDPR work more concretely to other civil society and human rights movements. Others noted that it offered an intuitive way of starting from the harms or damage that might result from data protection violations, rather than from a specific breach of a GDPR provision. It was also seen as a useful exercise in stepping back from specific instances of GDPR non-compliance and thinking about the real-world impact our GDPR enforcement actions might have.

A number of constructive observations were also made about the framework and how it might be revised. For instance, using a hierarchy of needs in the human rights context gives rise to a number of concerns. Human rights should be protected equally and should not be viewed as a hierarchy of protections. Perhaps, therefore, the framework should be visualised as a circle of inter-connected human needs rather than a pyramid? The framework also fails to capture the systemic nature of some data protection harms, and how the widespread nature of some violations may render them more pressing and of higher priority. Finally, questions were raised about when and how it might be most useful to use a framework like this. Would it be most useful for mapping out organisational priorities for GDPR work? Or would it be better suited as a tool for identifying cases or communicating the impact of data protection issues to the general public?

At the event, a number of participants expressed an interest in discussing, critiquing and hacking this framework further to see if it can be developed into a practical tool for identifying and articulating strategic litigation goals. These conversations will be held in the coming weeks. We want to hear from others in the field who would like to join us in this exciting conversation, so please do get in touch if you are interested!

Transatlantic call series: machine learning and human rights

By Nani Jansen Reventlow, 15th October 2019

How can we minimise the negative impact of the use of algorithms and machine learning on our human rights? What cases are litigators in the US and Europe working on to take on this important issue?

Today’s fourth instalment in DFF’s transatlantic call series addressed machine learning and human rights. EFF kicked off the conversation by telling participants about their work on the use of machine learning in various parts of the criminal justice system, including the use of algorithms to assess the risk of reoffending and to determine the guilt of an alleged offender. EFF spoke about an ongoing case (California v. Johnson) in which it has filed an amicus brief arguing that criminal defendants should be able to scrutinise algorithms used by prosecutors to secure a conviction.

A common thread in EFF’s work in this area is the need to ensure that government use of algorithms in decision-making is as fair and transparent as possible. This is similar to the approach taken by PILP, who are challenging the use of “risk profiling” algorithms by government agencies in the Netherlands. These profiles are used to estimate the likelihood of individuals committing fraud. The system, called “SyRI”, pools citizens’ data collected by the state in a variety of different contexts, after which algorithms calculate whether certain citizens pose a “risk” of committing abuse, non-compliance or even fraud in the context of social security, tax payments, and labour law.
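
As a purely hypothetical illustration of why such pooled risk scoring is hard to scrutinise – the field names, weights, and threshold below are invented and do not describe SyRI’s actual design – a system of this kind might reduce to something like the following:

# Purely hypothetical illustration of pooled risk scoring; field
# names, weights, and the threshold are invented and do NOT describe
# SyRI's actual implementation.

# Data about one citizen, pooled from separate state databases.
pooled_record = {
    "benefit_claims_last_year": 3,
    "address_changes_last_year": 2,
    "declared_income": 14_000,
}

# Opaque, hand-tuned weights: the transparency problem is that
# affected citizens cannot inspect or contest values like these.
WEIGHTS = {
    "benefit_claims_last_year": 0.4,
    "address_changes_last_year": 0.3,
    "declared_income": -0.00001,
}
RISK_THRESHOLD = 1.0

score = sum(WEIGHTS[field] * value for field, value in pooled_record.items())
print(f"risk score = {score:.2f}; flagged = {score > RISK_THRESHOLD}")

Even in this toy version, a citizen flagged as a “risk” has no way of knowing which inputs or weights produced the result, which is precisely the kind of opacity the litigation seeks to address.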

PILP shared how the SyRI system has a disproportionate impact on the poorest parts of the population. The UN Special Rapporteur on extreme poverty and human rights, Philip Alston, has submitted an amicus brief in the case, saying that the SyRI system poses “significant potential threats to human rights, in particular for the poorest in society”.

Following a further exchange on these cases and other work being done in this area, DFF shared details of its ongoing projects on artificial intelligence (AI) and human rights. In November, DFF is organising a workshop with the AI Now Institute to explore litigation opportunities to limit the negative impact of AI on human rights. In addition, DFF’s Legal Adviser Jonathan McCully is working with a technologist, as part of a Mozilla Fellowship, to create two resources – one for lawyers and another for technologists, data scientists, and digital rights activists – that provide tools for working together effectively when taking cases against human rights violations caused by AI.

Our next transatlantic call will take place on 13 November and will focus on content moderation. It is not too late to join: get in touch to register your attendance!