Strategising on GDPR complaints to take on the AdTech industry

By Martha Dark, 30th April 2019

The Digital Freedom Fund recently hosted a two-day strategy meeting in Berlin. Attended by 48 lawyers, activists and change makers from across Europe, it was a fantastic opportunity to collaborate, strategise and plan around ongoing and planned litigation. DFF played an important role in bringing this group of people together and in designing an agenda focused on assisting collaborative efforts to bring effective digital rights litigation across Europe.

The meeting was participant led and outcome driven. Attendees had the opportunity to ‘pitch’ their work or idea around digital rights litigation to the other participants and NGOs present. I had the pleasure of attending and didn’t wait long to make the most of this meeting structure.

My colleague at Open Rights Group, Jim Killock, is one of the complainants in ongoing complaints before multiple European Data Protection Authorities. These complaints have been made against Google and others in the AdTech sector, and concern the huge, ongoing data breaches that affect virtually anyone who uses the Internet.

What is the issue? When you visit a website and are shown a “behavioural” ad, your personal data (which can include what you are reading or watching, your location and your IP address) is broadcast to multiple companies. Websites do this to solicit bids from potential advertisers for the attention of the specific individual visiting the page, so they share this data widely. It all happens in the moment it takes for your webpage to load, and the broadcast is known as a ‘bid request’.
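
To make this concrete, the sketch below shows, in a simplified and purely illustrative form, the kind of information a bid request can carry. The field names are loosely modelled on the industry’s OpenRTB format and every value is invented for illustration; none of it is drawn from the complaints themselves.

    # A simplified, purely illustrative sketch of the data a "bid request" can broadcast.
    # Field names are loosely modelled on the industry's OpenRTB format; all values are
    # invented for illustration and are not taken from the complaints themselves.
    bid_request = {
        "id": "auction-7f3a",  # unique identifier for this ad auction
        "site": {
            # what you are reading or watching
            "page": "https://news.example.com/health/depression-support",
        },
        "device": {
            "ip": "203.0.113.42",                 # your IP address
            "geo": {"lat": 51.50, "lon": -0.12},  # your approximate location
            "ua": "Mozilla/5.0 (Android 9; Mobile)",  # your browser and device
        },
        "user": {
            "id": "adtech-cookie-123",  # pseudonymous ID used to profile you over time
        },
    }

In the moment a page loads, a request along these lines can be sent to dozens or even hundreds of companies soliciting bids for the visitor’s attention, which is why the complaints focus on whether this data is kept secure.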

The problem is that this system constitutes a data breach. The broadcast, or bid request, does not protect an Internet user’s personal data from unauthorised access. Article 5(1)(f) of the General Data Protection Regulation (GDPR) requires that personal data be “processed in a manner that ensures appropriate security of the personal data, including protection against unauthorised or unlawful processing and against accidental loss”. Some of these companies are failing to protect the data in line with this legal obligation.

So far, Jim and his co-complainants have brought complaints about the practice described above before the UK Information Commissioner’s Office and the Irish Data Protection Commission.

Given that this issue reaches far beyond the UK and Ireland and affects Internet users on a global scale, the team are looking to partner with organisations that might bring co-ordinated complaints before Data Protection Authorities in other jurisdictions. The strategy meeting, hosted by DFF, provided the opportunity to discuss the possibility of multiple complaints with lawyers and experts from across Europe.

The Poland-based Panoptykon Foundation has already brought a similar complaint to the Polish Data Protection Authority. Katarzyna Szymielewicz, President of the Panoptykon Foundation, also attended the strategy meeting. We were grateful for the opportunity to hold sessions and side meetings there, with the aim of engaging other potential partners from across Europe who may be interested in joining our efforts to raise this issue effectively before a number of Data Protection Authorities.

At the strategy meeting, Katarzyna held a series of fifteen-minute lightning talks on how the real-time bidding process works, how it fails to comply with the GDPR, and the most concerning data protection issues arising from it. Since the strategy meeting, we have entered into conversations with multiple NGOs across Europe and are exploring working together to bring further complaints across the continent.

The DFF meeting was a fantastic opportunity for us to move our work forward, to get the input of our colleagues, and to build stronger litigation with the community.

About the author: Martha Dark is Chief Operating Officer of the UK-based NGO Open Rights Group.

Bartering your privacy for essential services: Ireland’s Public Services Card

By Elizabeth Farries, 16th April 2019

In February, DFF invited the Irish Council for Civil Liberties (ICCL) to join 30 European digital rights organisations for a consultation with the UN Special Rapporteur on extreme poverty and human rights, Professor Philip Alston. The Special Rapporteur is exploring the rise of the digital welfare state and its implications for poor and economically marginalised individuals for his upcoming thematic report.

Of the themes that emerged during the consultation, the trend of states mandating welfare recipients to register for biometric identification cards is particularly relevant to the ICCL’s current campaign. In Ireland, the government has been rolling out a Public Services Card (PSC), which includes biometric features and an identity registration system that is shared amongst government databases. The PSC requires individuals to register in order to access social welfare benefits to which they are legally entitled. These individuals, by virtue of the services they are accessing, tend to be poor and economically marginalised, and their situation has been made worse by living in a country that has experienced austerity in recent years. Given that the PSC is compulsory, the ICCL’s position is that requiring these individuals to trade their personal data and associated civil and human rights (e.g. their right to privacy) in order to access social and economic benefits is unlawful.

The growth of what are broadly referred to as identity systems or national digital identity programmes is a common problem that is not restricted to Europe. The ICCL, as a member of the International Network of Civil Liberties Organizations (INCLO), stands in solidarity with our colleagues at the Kenya Human Rights Commission (KHRC) who have been fighting back against the rollout of the National Integrated Identity Management System (NIIMS). This is a system which, similar to the Irish PSC, has sought to collect and collate personal and biometric data as a condition for receiving government services. The KHRC are challenging the legality of NIIMS in Kenya’s High Court. Last week, the KHRC achieved a legal victory in the form of an interim order which effectively barred the government from making NIIMS mandatory for access to essential services until its legality has been fully evaluated by the court.

While litigation has not yet commenced in Ireland, Ireland’s data protection authority, the Data Protection Commission (DPC), has identified the PSC as a pressing matter. The office opened a formal audit in October 2017 and has been examining the problem ever since. The ICCL has grave concerns that the DPC has not issued a decision, particularly as the government continues to roll out the PSC and mandate it for essential services. We are further concerned that, while the DPC has issued an interim report, the public has not been given access to it. On Wednesday, the DPC disclosed to the Justice Committee in parliament that it does not, in fact, intend to release its report. It says that while the current data protection legislative regime authorises it to do so, the previous regime is silent on the matter. The ICCL does not accept this explanation and asserts that such a lengthy and resource-intensive investigation by the publicly funded DPC warrants, at the very least, the complete disclosure of its findings in the public interest.

Following the DFF consultation, the ICCL and Digital Rights Ireland invited the Special Rapporteur to visit Ireland and consider the problems presented by the PSC. We are pleased that Prof. Alston has accepted our invitation in his academic capacity and will be arriving on 29 June 2019. Please watch this space as we are planning a public event, together with meetings with national human rights organisations, community interest groups and, pending their acceptance, the DPC and responsible government bodies as well.

The Irish Government have promoted the PSC, saying that it facilitates ease of access to services and reduces the threat of identity fraud. Rights advocates, including the ICCL, argue that the data protection risks attached to such a scheme are neither necessary for, nor proportionate to, these claimed benefits. DFF, through their February consultation, provided an excellent platform for the Special Rapporteur to hear concerns on the PSC. We are now looking forward to continuing this conversation with Prof. Alston directly in Ireland during his upcoming visit.

About the author: Elizabeth Farries is a lawyer and the Surveillance and Human Rights Program Manager for the ICCL and INCLO. She is also an adjunct professor in the School of Law at Trinity College Dublin.

Harnessing existing human rights jurisprudence to guide AI

By Zuzanna Warso, 8th April 2019

DFF’s 2019 strategy meeting had a flexible agenda, leaving space for participants to decide the topics to be worked on. Throughout the meeting a series of in-depth discussions was held on algorithms and AI in the context of surveillance, profiling and facial recognition systems used by the police.

You don’t have to look too hard to become acutely aware of the potentially dangerous effects of the development and use of algorithms and AI. It is already common knowledge that in the wrong or reckless hands these technologies may sway elections, lead to discrimination, and inhibit freedom of speech or access to information. We have seen that negligent deployment of algorithms in judicial systems can lead to cases of unjustified pre-trial detention, and we justifiably fear the potential harmful consequences of the continued development of lethal autonomous weapons. It is then trite to say that algorithms and AI can pose serious challenges to individuals and the protection of their human rights.

The main challenges identified during our discussions related to the lack of technological knowledge within the field and the need for a comprehensive mapping of legal tools and regulations (both national and international) around algorithms and AI. There is a common belief that law cannot keep up with technology, a phenomenon sometimes referred to as the “law lag”. While this is indeed the case for technology-specific laws (we don’t have an “AI law” in Europe or a “law on robots”), technologies are never created in a legal vacuum. They come into an environment of general legal rules, including international human rights standards, and they should comply with and respect those rules.

During our discussions, we became aware of how little we knew about the litigation and research efforts currently being conducted by others within Europe. In light of the international human rights standards that are already in place, it may make sense for European organisations to work together to map applicable standards under the international framework and measure existing AI technologies against these long-standing principles, to determine together what action can be taken now without the need for specific AI laws. Some of this work is already being carried out by the SIENNA project, an EU-funded Horizon 2020 research project involving 13 partners, including the Helsinki Foundation for Human Rights. The project’s research shows that AI will have wide-ranging societal and human rights implications and will affect a spectrum of existing human rights standards related to data protection, equality, human autonomy and self-determination, human dignity, human safety, justice and equity, non-discrimination, and privacy.

Let’s take an example. The use of AI in the workplace supposedly has the potential to improve productivity and efficiency. However, in order for this to happen, AI needs to track employees, which can be done in multiple ways. The US-based company Humanyze, for example, developed an ID badge that an employee would be required to carry at all times while at work. The badge is equipped with different devices: microchips in the badge pick up whether employees are communicating with one another, sensors monitor where they are, and an accelerometer records whether they move. According to Humanyze’s website, all this is done in order to “speed up decision making and improve performance.” If you happen to value privacy (and have read Orwell’s Nineteen Eighty-Four), the concept of the badge sounds quite ominous. If you happen to be a lawyer, you may also ask whether such a system complies with existing privacy laws.

In Europe, we can look to the European Court of Human Rights (ECtHR) for guidance. The ECtHR has dealt in the past with questions of how to strike a fair balance between an employer’s wish to increase efficiency and the need to ensure protection of employees’ rights to respect for their private lives. Although the judgments are not about AI, they are still relevant and applicable.

The issue of surveillance in the workplace has been addressed, for example, in the case of Bărbulescu v. Romania (application no. 61496/08). It concerned the decision of a private company to dismiss an employee after monitoring his emails and accessing their content. Mr Bărbulescu, who filed the complaint against Romania, claimed that the domestic courts had failed to protect his right to respect for his private life and correspondence, protected by Article 8 of the European Convention on Human Rights. The ECtHR shared that view. What lessons can we draw from the judgment? First, we are reminded that the right to respect for private life and for the privacy of communication continues to exist in the workplace. Although there may be circumstances in which certain interferences with the right may be justified in the work context, the measures adopted by employers can never reduce private social life to zero. Second, we are told that, before the surveillance starts, the employee should be notified about the nature and scope of such surveillance, e.g. whether and when the content of the communication is being accessed. Third, an employer must provide reasons that justify the measures adopted and should only use the least intrusive methods available to pursue those objectives. Although Bărbulescu v. Romania concerned access to an employee’s emails, the ECtHR’s reasoning is relevant for assessing whether other technical means of monitoring employees comply with the European Convention on Human Rights. In other words, even if a European country does not adopt an “Act on the use of AI in the workplace”, such practices are not completely unregulated.

The European Convention on Human Rights has proven able to accommodate new social and technological developments. Human rights should remain the common thread in the normative discourse around technological development, including algorithms and AI. This is not to say that our perceptions and rules, e.g. how we understand and value our autonomy and privacy, do not evolve as technology develops and expands. The interaction between technology, policy makers and legislators is a continuing practice. This process should not, however, lead to a loss of the normative anchor that already exists in international human rights law.

About the authors: Zuzanna Warso is a project co-ordinator and researcher at the Helsinki Foundation for Human Rights, working on the SIENNA project. Dominika Bychawska-Siniarska, a member of the board of the Helsinki Foundation for Human Rights, wrote an introductory note, and input was provided by Rowena Rodrigues of Trilateral Research, deputy coordinator of the SIENNA project.