Harnessing existing human rights jurisprudence to guide AI

By Zuzanna Warso, 8th April 2019

DFF’s 2019 strategy meeting had a flexible agenda, leaving space for participants to decide which topics to work on. Throughout the meeting, a series of in-depth discussions was held on algorithms and AI in the context of surveillance, profiling, and facial recognition systems used by the police.

You don’t have to look hard to become acutely aware of the potentially dangerous effects of the development and use of algorithms and AI. It is already common knowledge that, in the wrong or reckless hands, these technologies may sway elections, lead to discrimination, and inhibit freedom of speech or access to information. We have seen that negligent deployment of algorithms in judicial systems can lead to cases of unjustified pre-trial detention, and we justifiably fear the potential harmful consequences of the continued development of lethal autonomous weapons. It is trite, then, to say that algorithms and AI can pose serious challenges to individuals and the protection of their human rights.

The main challenges identified during our discussions related to the lack of technological knowledge within the field and the need for a comprehensive mapping of legal tools and regulations (both national and international) around algorithms and AI. There is a common belief that law cannot keep up with technology, a phenomenon sometimes referred to as the “law lag”. While this is true of technology-specific laws (we don’t have an “AI law” in Europe or a “law on robots”), technologies are never created in a legal vacuum. They come into an environment of general legal rules, including international human rights standards, and they should comply with and respect those rules.

During our discussions, we became aware of how little we knew about the litigation and research efforts currently being conducted by others within Europe. In light of the international human rights standards already in place, it may make sense for European organisations to work together to map applicable standards under the international framework and measure existing AI technologies against these long-standing principles, in order to determine together what action can be taken now without the need for specific AI laws. Some of this work is already being carried out by the SIENNA project, an EU-funded Horizon 2020 research project involving 13 partners, including the Helsinki Foundation for Human Rights. The project’s research shows that AI will have wide-ranging societal and human rights implications, and will affect a spectrum of existing human rights standards related to data protection, equality, human autonomy and self-determination, human dignity, human safety, justice and equity, non-discrimination, and privacy.

Let’s take an example. The use of AI in the workplace supposedly has the potential to improve productivity and efficiency. For this to happen, however, AI needs to track employees, which can be done in multiple ways. The US-based company Humanyze, for example, developed an ID badge that employees would be required to carry at all times while at work. The badge is equipped with several devices: microphones pick up whether employees are communicating with one another, sensors monitor where they are, and an accelerometer records whether they move. According to Humanyze’s website, all of this is done in order to “speed up decision making and improve performance.” If you happen to value privacy (and have read Orwell’s Nineteen Eighty-Four), the concept of the badge sounds quite ominous. If you happen to be a lawyer, you may also ask whether such a system complies with existing privacy laws.

In Europe, we can look to the European Court of Human Rights (ECtHR) for guidance. The ECtHR has dealt in the past with the question of how to strike a fair balance between an employer’s wish to increase efficiency and the need to protect employees’ right to respect for their private lives. Although these judgments are not about AI, they remain relevant and applicable.

The issue of surveillance in the workplace was addressed, for example, in Bărbulescu v. Romania (application no. 61496/08). The case concerned the decision of a private company to dismiss an employee after monitoring his emails and accessing their content. Mr Bărbulescu, who filed the complaint against Romania, claimed that the domestic courts had failed to protect his right to respect for his private life and correspondence under Article 8 of the European Convention on Human Rights. The ECtHR shared that view.

What lessons can we draw from the judgment? First, we are reminded that the right to respect for private life and for the privacy of communication continues to exist in the workplace. Although there may be circumstances in which certain interferences with the right are justified in the work context, the measures adopted by employers can never reduce private social life to zero. Second, before surveillance starts, the employee should be notified of its nature and scope, e.g. whether and when the content of communications will be accessed. Third, an employer must provide reasons justifying the measures adopted and should use the least intrusive methods available to pursue those objectives. Although Bărbulescu v. Romania concerned access to an employee’s emails, the ECtHR’s reasoning is relevant for assessing whether other technical means of monitoring employees comply with the European Convention on Human Rights. In other words, even if a European country never adopts an “Act on the use of AI in the workplace”, such practices are not completely unregulated.

The European Convention on Human Rights has proven able to accommodate new social and technological developments. Human rights should remain the common thread in the normative discourse around technological development, including algorithms and AI. This is not to say that our perceptions and rules, e.g. how we understand and value our autonomy and privacy, do not evolve as technology develops and expands. The interaction between technology, policy makers and legislators is a continuing practice. This process should not, however, lead to a loss of the normative anchor that already exists in international human rights law.

About the authors: Zuzanna Warso is a project co-ordinator and researcher at the Helsinki Foundation for Human Rights, working on the SIENNA project. Dominika Bychawska-Siniarska, a member of the board of the Helsinki Foundation for Human Rights, wrote an introductory note, and input was provided by Rowena Rodrigues of Trilateral Research, deputy co-ordinator of the SIENNA project.