Bartering your privacy for essential services: Ireland’s Public Services Card

By Elizabeth Farries, 16th April 2019

In February, DFF invited the Irish Council for Civil Liberties (ICCL) to join 30 European digital rights organisations for a consultation with the UN Special Rapporteur on extreme poverty and human rights, Professor Philip Alston. The Special Rapporteur is exploring the rise of the digital welfare state and its implications for poor and economically marginalised individuals for his upcoming thematic report.

Of the themes that emerged during the consultation, the trend of states mandating that welfare recipients register for biometric identification cards is particularly relevant to the ICCL’s current campaign. In Ireland, the government has been rolling out a Public Services Card (PSC), which includes biometric features and an identity registration system shared amongst government databases. The PSC requires individuals to register in order to access social welfare benefits to which they are legally entitled. These individuals, by virtue of the services they are accessing, tend to be poor and economically marginalised, and their situation has been compounded by the austerity Ireland has experienced in recent years. Given that the PSC is compulsory, it is the position of the ICCL that requiring these individuals to trade their personal data and associated civil and human rights (e.g. their right to privacy) in order to access social and economic benefits is unlawful.

The growth of what are broadly referred to as identity systems or national digital identity programmes is a common problem that is not restricted to Europe. The ICCL, as a member of the International Network of Civil Liberties Organizations (INCLO), stands in solidarity with our colleagues at the Kenya Human Rights Commission (KHRC) who have been fighting back against the rollout of the National Integrated Identity Management System (NIIMS). This is a system which, similar to the Irish PSC, has sought to collect and collate personal and biometric data as a condition for receiving government services. The KHRC are challenging the legality of NIIMS in Kenya’s High Court. Last week, the KHRC achieved a legal victory in the form of an interim order which effectively barred the government from making NIIMS mandatory for access to essential services until its legality has been fully evaluated by the court.

While litigation has not yet commenced in Ireland, Ireland’s data protection authority, the Data Protection Commission (DPC), has identified the PSC as a pressing matter. The office opened a formal audit in October 2017 and has been examining the problem ever since. The ICCL has grave concerns that the DPC has not issued a decision, particularly as the government continues to roll out the PSC and mandate it for essential services. We are further concerned that, while the DPC has issued an interim report, the public has not been given access to it. On Wednesday, the DPC disclosed to the Justice Committee in parliament that it does not, in fact, intend to release the report. It says that while the current data protection legislative regime authorises it to do so, the previous regime is silent on the matter. The ICCL does not accept this explanation and asserts that such a lengthy and resource-intensive investigation by the publicly funded DPC warrants, at the very least, complete disclosure of its findings in the public interest.

Following the DFF consultation, the ICCL and Digital Rights Ireland invited the Special Rapporteur to visit Ireland and consider the problems presented by the PSC. We are pleased that Prof. Alston has accepted our invitation in his academic capacity and will be arriving on 29 June 2019. Please watch this space as we are planning a public event, together with meetings with national human rights organisations, community interest groups and, pending their acceptance, the DPC and responsible government bodies as well.

The Irish Government has promoted the PSC, saying that it facilitates ease of access to services and reduces the threat of identity fraud. Rights advocates, including the ICCL, argue that the data protection risks attached to such a scheme are neither necessary for, nor proportionate to, these claimed benefits. DFF, through its February consultation, provided an excellent platform for the Special Rapporteur to hear concerns about the PSC. We now look forward to continuing this conversation with Prof. Alston directly in Ireland during his upcoming visit.

About the author: Elizabeth Farries is a lawyer and the Surveillance and Human Rights Program Manager for the ICCL and INCLO. She is also an adjunct professor in the School of Law at Trinity College Dublin.

Harnessing existing human rights jurisprudence to guide AI

By Zuzanna Warso, 8th April 2019

DFF’s 2019 strategy meeting had a flexible agenda, leaving space for participants to decide the topics to be worked on. Throughout the meeting a series of in-depth discussions was held on algorithms and AI in the context of surveillance, profiling and facial recognition systems used by the police.

You don’t have to look too hard to become acutely aware of the potentially dangerous effects of the development and use of algorithms and AI. It is already common knowledge that in the wrong or reckless hands these technologies may sway elections, lead to discrimination, and inhibit freedom of speech or access to information. We have seen that negligent deployment of algorithms in judicial systems can lead to cases of unjustified pre-trial detention, and we justifiably fear the potential harmful consequences of the continued development of lethal autonomous weapons. It is by now almost trite to say that algorithms and AI can pose serious challenges to individuals and the protection of their human rights.

The main challenges identified during our discussions related to the lack of technological knowledge within the field and the need for a comprehensive mapping of legal tools and regulations (both national and international) around algorithms and AI. It is commonly believed that law cannot keep up with technology, a phenomenon sometimes referred to as the “law lag”. While this is true of technology-specific laws (we don’t have an “AI law” in Europe or a “law on robots”), technologies are never created in a legal vacuum. They come into an environment of general legal rules, including international human rights standards, and they should comply with and respect those rules.

During our discussions, we became aware of how little we knew about the litigation and research efforts currently being conducted by others within Europe. In light of the international human rights standards that are already in place, it may make sense for European organisations to work together to map applicable standards under the international framework and measure existing AI technologies against these long-standing principles, in order to determine together what action can be taken now without the need for specific AI laws. Some of this work is already being carried out by the SIENNA project, an EU-funded Horizon 2020 research project involving 13 partners, including the Helsinki Foundation for Human Rights. The project’s research shows that AI will have wide-ranging societal and human rights implications and will affect a spectrum of existing human rights standards related to data protection, equality, human autonomy and self-determination, human dignity, human safety, justice and equity, non-discrimination, and privacy.

Let’s take an example. The use of AI in the workplace supposedly has the potential to improve productivity and efficiency. For this to happen, however, AI needs to track employees, which can be done in multiple ways. The US-based company Humanyze, for example, developed an ID badge that employees would be required to carry at all times while at work. The badge is equipped with different devices: microchips pick up whether employees are communicating with one another, sensors monitor where they are, and an accelerometer records whether they move. According to Humanyze’s website, all this is done in order to “speed up decision making and improve performance.” If you happen to value privacy (and have read Orwell’s Nineteen Eighty-Four), the concept of the badge sounds quite ominous. If you happen to be a lawyer, you may also ask whether such a system complies with existing privacy laws.

In Europe, we can look to the European Court of Human Rights (ECtHR) for guidance. The ECtHR has dealt in the past with questions of how to strike a fair balance between an employer’s wish to increase efficiency and the need to ensure protection of employees’ rights to respect for their private lives. Although the judgments are not about AI, they are still relevant and applicable.

The issue of surveillance in the workplace has been addressed, for example, in the case of Bărbulescu v. Romania (application no. 61496/08). It concerned the decision of a private company to dismiss an employee after monitoring his emails and accessing their content. Mr Bărbulescu, who filed the complaint against Romania, claimed that the domestic courts had failed to protect his right to respect for his private life and correspondence, protected by Article 8 of the European Convention on Human Rights. The ECtHR shared that view. What lessons can we draw from the judgment? First, we are reminded that the right to respect for private life and for the privacy of communication continues to exist in the workplace. Although there may be circumstances in which certain interferences with the right are justified in the work context, the measures adopted by employers can never reduce private social life to zero. Second, we are told that, before the surveillance starts, the employee should be notified about its nature and scope, e.g. whether and when the content of communications is being accessed. Third, an employer must provide reasons that justify the measures adopted and should only use the least intrusive methods available to pursue these objectives. Although Bărbulescu v. Romania concerned access to an employee’s emails, the ECtHR’s reasoning is relevant for assessing whether other technical means of monitoring employees comply with the European Convention on Human Rights. In other words, even if a European country does not adopt an “Act on the use of AI in the workplace”, such practices are not completely unregulated.

The European Convention on Human Rights has proven able to accommodate new social and technological developments. Human rights should remain the common thread in the normative discourse around technological development, including algorithms and AI. This is not to say that our perceptions and rules, e.g. how we understand and value our autonomy and privacy, do not evolve as technology develops and expands. The interaction between technology, policy makers and legislators is a continuing practice. This process should not, however, lead to a loss of the normative anchor that already exists in international human rights law.

About the authors: Zuzanna Warso is a project coordinator and researcher at the Helsinki Foundation for Human Rights, working on the SIENNA project. Dominika Bychawska-Siniarska, a member of the board of the Helsinki Foundation for Human Rights, wrote an introductory note, and input was provided by Rowena Rodrigues of Trilateral Research, deputy coordinator of the SIENNA project.

Public campaigns on digital rights: mapping the needs

By Claire Fernandez, 4th April 2019

In February 2019, I had the privilege of representing the European Digital Rights (EDRi) network at the DFF strategy meeting in Berlin. The meeting was the perfect occasion for experts, activists and litigators from the broad digital and human rights movement to explore ways of working together and of levelling up the field.

The group held discussions on several methods and avenues for social change in our field, such as advocacy and litigation. Public campaigning came up as an interesting option – many organisations want to achieve massive mobilisation, while few have managed to develop the tools and means needed for fulfilling this goal. Our breakout group discussion therefore focused on mapping the needs for pan-European campaigns on digital rights.

First, we need to define our own way of doing campaigns, which might differ from that of other movements. A values-based campaigning method should address questions such as: Who funds us? Do we take money from big tech companies and, if so, under what conditions and in what amounts? Who are we partnering with: a large, friendly coalition of civil society and industry, or a restricted core group of digital rights experts? Are we paying for advertising campaigns on social media, or do we rely on privacy-friendly mobilising techniques? We all agreed that being clear on how we campaign and what our joint message is are crucial to the success of a campaign. A risk-management system should also be put in place to anticipate criticism and attacks.

Second, proper field mapping is important. Pre- and post-campaign public opinion polls and focus groups are useful. Too often, we tend to go ahead with our own plans without consulting the affected groups, such as those affected by online hate speech, child abuse and so on.

Third, unsurprisingly, the need for staff and resources was ranked as a priority. These include professional campaigners, support staff, graphic designers, project managers and coordinators, communication consultants and a central hub for a pan-European campaign.

Finally, we need to build and share campaigning tools that include visuals, software, websites, videos, celebrities and media contacts. Participants also mentioned the need for a safe communication infrastructure to exchange tools and coordinate actions.

At EDRi, all of the above resonates as we embark on the journey of building our campaigning capacity to lead multiple pan-European campaigns. For instance, one of the campaigns we have been involved in, on the European Union Copyright Directive, has revealed the importance of fulfilling these needs. Throughout this particular campaign, human rights activists have faced unprecedented accusations of being paid by Google and similar actors, and of being against the principle of fair remuneration for artists. Despite disinformation waves, distraction tactics and our limited resources, the wide mobilisation of the public against problematic parts of the Directive, such as upload filters, has been truly impressive. We witnessed over five million petition signatures, over 170,000 protesters across Europe, dozens of activists meeting Members of the European Parliament, and impressive engagement rates on social media. The European Parliament’s vote in favour of the whole Copyright Directive, including the controversial articles, passed by only a very narrow margin, which shows the impact of the campaign.

The EDRi network and the broader movement need to learn lessons from the Copyright campaign and properly build our campaign capacity. EDRi will start this process during its General Assembly on 7 and 8 April in London. The DFF strategy workshop held in Berlin gave us a lot of food for thought for this process.

About the author: Claire Fernandez is the Executive Director of EDRi, an association of civil and human rights organisations from across Europe that defends rights and freedoms in the digital environment.