Harnessing existing human rights jurisprudence to guide AI

By Zuzanna Warso, 8th April 2019

DFF’s 2019 strategy meeting had a flexible agenda, leaving space for participants to decide the topics to be worked on. Throughout the meeting a series of in-depth discussions was held on algorithms and AI in the context of surveillance, profiling and facial recognition systems used by the police.

You don’t have to look too hard to become acutely aware of the potentially dangerous effects of the development and use of algorithms and AI. It is already common knowledge that, in the wrong or reckless hands, these technologies may sway elections, lead to discrimination, and inhibit freedom of speech or access to information. We have seen that negligent deployment of algorithms in judicial systems can lead to cases of unjustified pre-trial detention, and we justifiably fear the potential harmful consequences of the continued development of lethal autonomous weapons. It is then trite to say that algorithms and AI can pose serious challenges to individuals and the protection of their human rights.

The main challenges identified during our discussions related to the lack of technological knowledge within the field and the need for a comprehensive mapping of legal tools and regulations (both national and international) around algorithms and AI. There is a common belief that law cannot keep up with technology, a phenomenon sometimes referred to as the “law lag”. While this is true of technology-specific laws (we don’t have an “AI law” in Europe or a “law on robots”), technologies are never created in a legal vacuum. They come into an environment of general legal rules, including international human rights standards, and they should comply with and respect those rules.

During our discussions, we became aware of how little we knew about the litigation and research efforts currently being conducted by others within Europe. In light of the international human rights standards that are already in place, it may make sense for European organisations to work together to map applicable standards under the international framework and measure existing AI technologies against these long-standing principles, in order to determine together what action can be taken now without the need for specific AI laws. Some of this work is already being carried out by the SIENNA project, an EU-funded Horizon 2020 research project involving 13 partners, including the Helsinki Foundation for Human Rights. The project’s research shows that AI will have wide-ranging societal and human rights implications and will affect a spectrum of existing human rights standards related to data protection, equality, human autonomy and self-determination, human dignity, human safety, justice and equity, non-discrimination, and privacy.

Let’s take an example. The use of AI in the workplace supposedly has the potential to improve productivity and efficiency. However, for this to happen, AI needs to track employees, which can be done in multiple ways. The US-based company Humanyze, for example, developed an ID badge that employees would be required to carry at all times while at work. The badge is equipped with different devices: microchips pick up whether employees are communicating with one another, sensors monitor where they are, and an accelerometer records whether they move. According to Humanyze’s website, all this is done in order to “speed up decision making and improve performance.” If you happen to value privacy (and have read Orwell’s Nineteen Eighty-Four), the concept of the badge sounds quite ominous. If you happen to be a lawyer, you may also ask whether such a system complies with existing privacy laws.
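To make concrete just how granular this kind of tracking is, here is a minimal sketch in Python of the sort of event record a sensor badge like this might emit. Everything here – the class, field names and values – is an illustrative assumption based on the capabilities described above, not Humanyze’s actual data model.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class BadgeEvent:
    """One hypothetical reading from a workplace sensor badge.

    Illustrative only: the fields mirror the capabilities described
    in the text (speech detection, location sensing, accelerometer),
    not any vendor's real schema.
    """
    employee_id: str          # ties every reading to an individual
    timestamp: datetime       # readings could arrive every few seconds
    location_zone: str        # e.g. "desk", "meeting-room-2", "kitchen"
    is_speaking: bool         # microphone detects that speech occurs
    nearby_badges: list[str]  # which colleagues are within range
    is_moving: bool           # accelerometer: movement detected

# A single working day could generate thousands of such records per
# person. That granularity is precisely why a lawyer would ask whether
# this collection is necessary and proportionate under privacy law.
event = BadgeEvent(
    employee_id="emp-0042",
    timestamp=datetime(2019, 4, 8, 10, 15, 30),
    location_zone="meeting-room-2",
    is_speaking=True,
    nearby_badges=["emp-0017", "emp-0023"],
    is_moving=False,
)
print(event)
```

Even this toy schema shows that the badge does not record abstract “performance” but a continuous, identifiable log of where a person is, when they speak and with whom – exactly the kind of data the case law discussed below is concerned with.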

In Europe, we can look to the European Court of Human Rights (ECtHR) for guidance. The ECtHR has dealt in the past with questions of how to strike a fair balance between an employer’s wish to increase efficiency and the need to ensure protection of employees’ rights to respect for their private lives. Although the judgments are not about AI, they are still relevant and applicable.

The issue of surveillance in the workplace has been addressed, for example, in the case of Bărbulescu v. Romania (application no. 61496/08). It concerned the decision of a private company to dismiss an employee after monitoring his emails and accessing their content. Mr Bărbulescu, who filed the complaint against Romania, claimed that the domestic courts had failed to protect his right to respect for his private life and correspondence, protected by Article 8 of the European Convention on Human Rights. The ECtHR shared that view. What lessons can we draw from the judgment? First, we are reminded that the right to respect for private life and for the privacy of communication continues to exist in the workplace. Although there may be circumstances where certain interferences with the right may be justified in the work context, the measures adopted by employers can never reduce private social life to zero. Second, we are told that, before the surveillance starts, the employee should be notified about the nature and scope of such surveillance, e.g. whether and when the content of communications is being accessed. Third, an employer must provide reasons that justify the measures adopted and should use only the least intrusive methods available to pursue these objectives. Although Bărbulescu v. Romania concerned access to an employee’s emails, the ECtHR’s reasoning is relevant for assessing whether other technical means of monitoring employees comply with the European Convention on Human Rights. In other words, even if a European country does not adopt an “Act on the use of AI in the workplace”, such practices are not completely unregulated.

The European Convention on Human Rights has proven able to accommodate new social and technological developments. Human rights should remain the common thread in the normative discourse around technological development, including algorithms and AI. This is not to say that our perceptions and rules, e.g. how we understand and value our autonomy and privacy, do not evolve as technology develops and expands. The interaction between technology, policy makers and legislators is an ongoing process. This process should not, however, lead to a loss of the normative anchor that already exists in international human rights law.

About the authors: Zuzanna Warso is a project coordinator and researcher at the Helsinki Foundation for Human Rights, working on the SIENNA project. Dominika Bychawska-Siniarska, a member of the board of the Helsinki Foundation for Human Rights, wrote an introductory note, and input was provided by Rowena Rodrigues of Trilateral Research, deputy coordinator of the SIENNA project.

Public campaigns on digital rights: mapping the needs

By Claire Fernandez, 4th April 2019

In February 2019, I had the privilege of representing the European Digital Rights (EDRi) network at the DFF strategy meeting in Berlin. The meeting was the perfect occasion for experts, activists and litigators from the broad digital and human rights movement to explore ways of working together and of levelling up the field.

The group held discussions on several methods and avenues for social change in our field, such as advocacy and litigation. Public campaigning came up as an interesting option – many organisations want to achieve massive mobilisation, while few have managed to develop the tools and means needed for fulfilling this goal. Our breakout group discussion therefore focused on mapping the needs for pan-European campaigns on digital rights.

First, we need to define our way of doing campaigns, which might differ from that of other movements. A values-based campaigning method should address questions such as: Who funds us? Do we take money from big tech companies and, if so, on what conditions and in what amounts? Who are we partnering with: a large, friendly coalition of civil society and industry, or a restricted core group of digital rights experts? Do we pay for advertising campaigns on social media, or do we rely on privacy-friendly mobilising techniques? We all agreed that being clear on how we campaign and what our joint message is are crucial elements for the success of a campaign. A risk-management system should also be put in place to anticipate criticism and attacks.

Second, proper field mapping is important. Pre- and post-campaign public opinion polls and focus groups are useful. Too often, we tend to go ahead with our own plans without consulting the affected groups, such as those targeted by online hate speech or child abuse.

Third, unsurprisingly, the need for staff and resources was ranked as a priority. These include professional campaigners, support staff, graphic designers, project managers and coordinators, communication consultants and a central hub for a pan-European campaign.

Finally, we need to build and share campaigning tools that include visuals, software, websites, videos, celebrities and media contacts. Participants also mentioned the need for a safe communication infrastructure to exchange tools and coordinate actions.

At EDRi, all of the above resonates as we embark on the journey of building our campaigning capacity to lead multiple pan-European campaigns. For instance, one of the current campaigns we have been involved in – the SaveYourInternet.eu campaign on the European Union Copyright Directive – has revealed the importance of fulfilling these needs. Throughout this particular campaign, human rights activists have faced unprecedented accusations of being paid by Google and similar actors, and of being against the principle of fair remuneration for artists. Despite disinformation waves, distraction tactics and our limited resources, the wide mobilisation of the public against problematic parts of the Directive, such as upload filters, has been truly impressive. We witnessed over five million petition signatures, over 170,000 protesters across Europe, dozens of activists meeting Members of the European Parliament, and impressive engagement rates on social media. The European Parliament vote in favour of the whole Copyright Directive, including its controversial articles, was won only by a narrow margin, which shows the impact of the campaign.

The EDRi network and the broader movement need to learn lessons from the Copyright campaign and properly build our campaign capacity. EDRi will start this process during its General Assembly on 7 and 8 April in London. The DFF strategy workshop held in Berlin gave us a lot of food for thought for this process.

About the author: Claire Fernandez is the Executive Director of EDRi, an association of civil and human rights organisations from across Europe that defends rights and freedoms in the digital environment.

Digital rights are *all* human rights, not just civil and political

By Jonathan McCully, 27th February 2019

The UN Special Rapporteur on extreme poverty and human rights consults with the field

Last week, following our strategy meeting, DFF hosted the UN Special Rapporteur on extreme poverty and human rights, Professor Philip Alston, for a one-day consultation in preparation for his upcoming thematic report on the rise of the “digital welfare state” and its implications for the human rights of poor and vulnerable individuals. The consultation brought together 30 digital rights organisations from across Europe, which shared many examples of new technologies being deployed in the provision of various public services. Common themes emerged, from the increased use of risk indication scoring to identify welfare fraud, to the mandating of welfare recipients to register for biometric identification cards, and the sharing of datasets between different public services and government departments.

At DFF, we subscribe to the mantra that “digital rights are human rights” and we define “digital rights” broadly as human rights applicable in the digital sphere. This consultation highlighted the true breadth of human rights issues that are engaged by the development, deployment, application and regulation of new technologies in numerous aspects of our lives. While many conversations on digital rights tend to centre around civil and political rights – particularly the rights to freedom of expression and to privacy – this consultation brought into sharp focus the impact new technologies can have on socio-economic rights such as the right to education, the right to housing, the right to health and, particularly relevant for this consultation, the right to social security.

UN special mandate holders have already started delving into issues around automated decision-making across a broad spectrum of human rights contexts. In August last year, the UN Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression produced a detailed report on the influence of artificial intelligence on the global information environment. This follows on from thematic reports on the human rights implications of “killer robots” and “care robots” by the UN Special Rapporteur on extrajudicial, summary or arbitrary executions and the UN Special Rapporteur on the enjoyment of all human rights by older persons, respectively.

The UN Special Rapporteur on extreme poverty and human rights has similarly placed the examination of automated decision-making and its impact on human rights at the core of his work. This can already be seen from his reports following his country visits to the United States and United Kingdom. In December 2017, following his visit to the United States, he reported on the datafication of the homeless population through systems designed to match homeless people with homeless services (i.e. coordinated entry systems) and the increased use of risk-assessment tools in pre-trial release and custody decisions. More recently, following his visit to the United Kingdom, he criticised the increased automation of various aspects of the benefits system and the “gradual disappearance of the postwar British welfare state behind a webpage and an algorithm.” In these contexts, he observed that the poor are often the testing ground for the government’s introduction of new technologies.

The next report will build upon this important work, and we hope that the regional consultation held last week will provide useful input in this regard. Our strategy meeting presented a valuable opportunity to bring together leading digital rights minds who could provide the Special Rapporteur with an overview of the use of digital technologies in welfare systems across Europe and their impact. It was evident from the discussions that the digital welfare state raises serious human rights concerns: not only the right to social security, but also the rights to privacy and data protection, freedom of information, and an effective remedy are engaged. As one participant observed, the digital welfare state seems to present welfare applicants with a trade-off: give up some of your civil and political rights in order to exercise some of your socio-economic rights.

It was clear from the room that participants were already exploring potential litigation strategies to push back against the digital welfare state, and we look forward to supporting them in this effort.