Articulating litigation goals for challenging the collection of biometric data

By Alan Dahi, 24th October 2019

There were many productive sessions at the two-day meeting in Berlin on unlocking the strategic litigation opportunities of the GDPR, hosted by DFF in September 2019. One of these was on articulating litigation goals to challenge the collection of biometric data.

The GDPR defines biometric data as “personal data resulting from specific technical processing relating to the physical, physiological or behavioural characteristics of a natural person, which allow or confirm the unique identification of that natural person, such as facial images or dactyloscopic data” [emphasis added]. The GDPR sees biometric data as a special category of personal data, which enjoys particularly strict safeguards when it comes to its use.

At first glance, it is perhaps surprising that only personal data “resulting from” certain technical processing is covered by the definition, and not the underlying personal data itself. This approach can be understood, however, against the background that photographs would otherwise qualify as biometric data. The strict safeguards the GDPR places on the use of “special categories of personal data”, such as biometric data, would then undermine the societally established – and generally accepted – widespread use of photographs.

While acknowledging that the GDPR permits the processing of biometric data, the session in September considered which uses of biometric data should be acceptable from a societal or moral perspective, independently from the current legislative reality. The opinions ranged from its use never being acceptable (at least until the societal ramifications are better understood and placed on a solid legal basis) to a more differentiated approach depending on the type of biometric data, the intended purpose, and the involved parties.

Ultimately, and in light of the current legislative framework that permits the limited use of biometric data, the group focused on evaluating the potential privacy harms the use of different types of biometric data may have across various scenarios.

The session concluded that biometric data based on facial recognition deserves particular focus. Faces are typically readily visible, which makes their collection for biometric purposes very easy. Indeed, facial features can usually be captured without the affected individual’s knowledge or awareness, and there are few practical ways for an individual to prevent such collection. This is in contrast to the more physically intrusive collection of fingerprints, which generally requires the co-operation of the individual, and against which an individual is in a better position to protect themselves or challenge the data processing.

Considering different scenarios surrounding the use of biometric data, the overall consensus was that the forced provision/collection of biometric data is generally unacceptable, particularly with regard to access to government and private services, as well as when it comes to the biometric data of children.

The session, which consisted of participants from five different organisations, ended with a clearer understanding of how to articulate litigation goals to challenge the collection of biometric data and with a practical road map on how to put these goals into action.

About the author: Alan Dahi is a data protection lawyer at noyb.

DFF and OXIL to dive into the future of digital rights at MozFest

By Stacie Hoffmann, 21st October 2019

Events over the past few years have highlighted the extent to which technology is becoming seamlessly integrated into our daily lives. The emergence of self-driving cars, automated systems, and everyday items supported by the Internet of Things (IoT) illustrates this, and examples of the impact when things go wrong range from the Cambridge Analytica fallout to a number of attacks on our internet infrastructure.

The data, networks, and systems that underpin these technological advances are already impacting our digital rights. But, in the future, what might these rights look like if, for example, algorithmic “black boxes” are the tools that govern the world around us? How do we protect our rights and those of the most vulnerable in our society?

While many of us are engaged in fighting today’s battles for digital rights – and preparing for tomorrow’s – the Digital Freedom Fund (DFF) and Oxford Information Labs (OXIL) are mapping issues that digital rights defenders might come up against in five to ten years’ time and the potential strategies that could be adopted now to protect our digital rights in the future.

In September 2018, DFF hosted a “Future-proofing our Digital Rights” workshop in Berlin. The workshop produced a number of insights that were summarised in a post-event blog series (you can read the posts in the series here, here, here, here and here). Alongside the series, DFF published a set of essays examining specific future risks to our digital rights and what we might do now to prepare ourselves for them.

Ingrida Milkaite and Eva Lievens (Ghent University, Belgium) looked at the risks posed to children by the increasing collection of their personal data through smart devices created specifically for them – the “Internet of Toys”. They suggested that we could push authorities to issue clearer guidance on how data protection law can be used to safeguard children’s rights. Sheetal Kumar (Global Partners Digital) explored the expected rise in IoT devices – with a forecast of 30 billion connected devices by 2023 – and the specific vulnerabilities this will expose individuals to when it comes to government hacking and cyberattacks. She suggested that civil society could document government agency hacking and the legal frameworks used to justify these actions, and observed that global norms could be relied on to limit the risks posed by the rise of the IoT.

Steve Song (Mozilla Fellow) discussed the potential of a “splinternet” resulting not only from government initiatives (e.g. China’s Great Firewall), but also from companies’ ownership of the physical infrastructure underpinning the internet. Song noted that existing laws around competition, consumer rights, and data protection could be leveraged to secure a robust marketplace instead of a “splinternet” monopolised by large platforms. Stacie Hoffmann (OXIL) highlighted a number of evolving digital divides, shaped not only by access to technologies but also by their policy environments, and noted that what gets measured gets done: a call for meaningful data collection that can support data-driven policy to prevent growing digital divides, alongside the need to build digital skills capacity across society. Iris Lapinski, now the Chairwoman of Apps for Good, discussed a future scenario in which artificial intelligence decides our rights, and considered three stages to managing the changes and challenges presented by this dystopian future.

A year on from these conversations, a number of these future threats have increasingly become a reality. To help prepare ourselves for the challenges still on the horizon, we would like to continue the conversation about the digital rights threats we may face in five to ten years’ time and the steps we could take now to get ready to fight them or protect ourselves against them.

DFF and OXIL are very excited to bring this conversation to MozFest in London this week. The Future-Proofing our Digital Rights session will be held on Sunday 27 October 2019, between 11:00am and 12:00pm, at Ravensbourne University (Room 804 – Level 8). Anyone interested in looking ahead at the opportunities, threats and challenges related to digital rights and discussing how we can prepare ourselves for the future is welcome to join. The workshop will be interactive, and all areas of expertise and interests are welcome. We hope to see you there.

For those who cannot make it to MozFest, we plan to share some of the issues and discussions that emerge during the session in a blog series over the coming months. If you would like to share your own views in a blog post in this series, please do get in touch!

About the author: Stacie Hoffmann is a cyber security and policy expert at Oxford Information Labs who works at the intersection of technology and policy. 

Testing a Framework for Setting GDPR Litigation Priorities

By Jonathan McCully, 18th October 2019

Data protection rights are engaged in nearly every aspect of our lives, from taking public transport and registering to vote to exercising and shopping. Under the recent General Data Protection Regulation (GDPR), there are significantly more opportunities for cases to be taken to vindicate data protection rights across these different domains.

However, with capacity and resources being limited, it can be difficult to know where to begin when it comes to enforcing this important piece of legislation. When trying to develop a strategy for GDPR litigation, a number of questions might arise: Where are the most pressing areas for GDPR work? What factors should we look at when setting litigation priorities? What metrics can we use, as a field, to pinpoint shared litigation goals that we can collectively work towards?

At DFF’s meeting last month on unlocking the GDPR’s strategic litigation opportunities, participants discussed and critiqued a nascent framework that could help in answering some of these questions. The framework emerged from conversations DFF held leading up to the event with litigators seeking to leverage the GDPR in their digital rights litigation. It is intended to act as a tool to help establish shared GDPR litigation priorities.

The framework applies a revised version of Maslow’s Hierarchy of Needs – a pyramid of “human needs” – to the GDPR context in order to help establish the most pressing issues for GDPR litigation. At the bottom of the pyramid sit the higher-priority needs of “social connection and social collaboration” (e.g. teamwork, the feeling of belonging, and family and community ties), “physiological needs” (e.g. access to food, water, shelter, and warmth) and “safety needs” (e.g. job security, adequate resources, health, and safe environments).

A Draft Framework for GDPR Litigation Priorities Based on Maslow’s Hierarchy of Needs

What do these needs have to do with the GDPR? Some of these fundamental needs can be served through the protection and promotion of data protection rights, alongside other areas of human rights enforcement. For instance, some unlawful data processing practices may hinder, deter or even prevent individuals from accessing social security provisions and similar public services that serve a fundamental physiological need. Other unlawful data processing practices may contribute to workplace surveillance, discrimination, and unfair hiring or dismissal practices that threaten job security, access to resources and the forging of social connections. Visualising where certain GDPR issues fall across this spectrum of human needs can be a useful way of establishing which issues are of highest priority in the broader push to promote and protect the right to data protection.

During the meeting, participants reiterated that it would be useful to have a workable framework that could help establish shared goals and priorities for GDPR litigation. A number of participants observed that the draft framework was a useful way of linking GDPR work more concretely to other civil society and human rights movements. Others noted that it offered an intuitive way of starting from the harms or damage that might result from data protection violations, rather than from a specific breach of a GDPR provision, and working from there. Participants also found it a useful exercise for taking a step back from specific instances of GDPR non-compliance and thinking about the real-world impact our GDPR enforcement actions might have.

A number of constructive observations were also made about the framework and how it might be revised. For instance, using a hierarchy of needs in the human rights context gives rise to a number of concerns. Human rights should be protected equally and should not be viewed as a hierarchy of protections. Perhaps, therefore, the framework should be visualised as a circle of interconnected human needs rather than a pyramid? The framework also fails to capture the systemic nature of some data protection harms, and how the widespread nature of some violations may render them more pressing and of higher priority. Finally, questions were raised about when and how it might be most useful to use a framework like this. Would it be most useful for mapping out organisational priorities for GDPR work? Or would it be better suited as a tool for identifying cases or communicating the impact of data protection issues to the general public?

At the event, a number of participants expressed an interest in discussing, critiquing and hacking this framework further to see if it can be developed into a practical tool for identifying and articulating strategic litigation goals. These conversations will be held in the coming weeks. We want to hear from others in the field who would like to join us in this exciting conversation, so please do get in touch if you are interested!