Digital rights are human rights. And you can defend them in court
On 5 May 2019, DFF Director Nani Jansen Reventlow delivered the annual Godwin Lecture*, organised by Bits of Freedom, De Correspondent and ARTIS to mark Liberation Day in the Netherlands. This is an English translation of the transcript of that talk. The Dutch original can be accessed here.
*The title of the lecture refers to the ‘law’ of Mike Godwin, which states that every online discussion will eventually become a discussion about WWII. Whoever dares to draw a comparison with WWII, will quickly be accused of ‘a Godwin’ and the discussion thread will be closed. The negative connotation of Godwin’s law fosters a taboo around drawing historical comparisons. Bits of Freedom wants to break this taboo and believes that keeping an eye on the past will actually help further the discussion on state surveillance and privacy.
Mr Bărbulescu, a Romanian sales engineer, was fired by his employer on the 1st of August 2007. When he started his job, he had, at the instruction of his boss, installed the instant messaging client Yahoo Messenger on his work computer: with it, he could easily respond to any client questions. Later, his employer had discovered that Bărbulescu, against instructions, had also used the application for private purposes. When he objected, his manager showed him a printout of 45 pages with messages he had exchanged with, amongst others, his brother and fiancée. Some of these messages were “of an intimate nature”.
Mr Bărbulescu challenged his discharge in court: first in Romania and when he did not win there, at the European Court of Human Rights. The Court found that his right to privacy had been violated. While there can be grounds for an employer to monitor the communication of employees during working hours, this must be reasonable and meet certain requirements. In this case, the Court said, the employer did not meet those and Mr Bărbulescu should not have been fired.
The judgment of the Court does not only apply to Mr Bărbulescu: the guidelines the judges gave in their verdict for monitoring communications during working hours apply to all employees in all countries of the Council of Europe. This means that thanks to this case hundreds of millions of people are now better protected when their employer wants to monitor their use of e-mail, phone or messenger during working hours.
While the Court also mentioned brand-new privacy laws (the GDPR), the ruling was mostly based on a treaty that is already 66 years old: the European Convention on Human Rights. This convention, like the Universal Declaration of Human Rights that celebrated its 70th birthday last December, was influenced by the horrors of two world wars and the wish to shape the world in a better way to prevent another one.
Equal rights for everyone
The Universal Declaration of Human Rights opens with the principle that equal rights for everyone form the foundation of peace and justice:
“Whereas recognition of the inherent dignity and of the equal and inalienable rights of all members of the human family is the foundation of freedom, justice and peace in the world.”
… and continues by linking the horrors committed in recent (and more distant) history to the failure to respect human rights:
“Whereas disregard and contempt for human rights have resulted in barbarous acts which have outraged the conscience of mankind, and the advent of a world in which human beings shall enjoy freedom of speech and belief and freedom from fear and want has been proclaimed as the highest aspiration of the common people.”
To prevent such ‘barbarous acts’ from happening again, the rule of law must protect human rights:
“Whereas it is essential, if man is not to be compelled to have recourse, as a last resort, to rebellion against tyranny and oppression, that human rights should be protected by the rule of law.”
Of course, the Universal Declaration was not the first declaration of human rights. There are various ‘Bills of Rights’, declarations and constitutions that capture similar principles. But the idea of creating an international framework for the protection of human rights was first proposed by the Chilean lawyer Alejandro Alvarez at an international conference during the First World War, in 1917.
Directly after the First World War, the League of Nations was founded in 1919; an intergovernmental organisation founded to end all war. But as early as the twenties and thirties, country after country left the League, and when member states invaded each other’s territories and the Second World War started in 1939, nobody took the institution seriously anymore. During the Second World War the major powers plotted to create a new international alliance: the United Nations. The parties involved were determined that the way in which countries treated their citizens in the future could no longer be treated as an exclusively internal matter. The idea of absolute sovereignty was no longer acceptable after the horrors of the war.
Therefore, it is curious that early versions of the Charter of the United Nations make no mention at all of human rights. Not one of the parties present at the conference at Dumbarton Oaks, the American estate where the first version of the Charter was drafted in 1944, found it necessary to reference human rights – out of fear that restricting state sovereignty by recognising universal human rights could lead to trouble in the British colonies, or to turmoil in the segregated South of the United States. Only after diplomacy by predominantly Latin American countries and lobbying by non-governmental organisations was the promotion of human rights included in the UN Charter.
A new international order
The 1948 Universal Declaration of Human Rights formed one of the cornerstones of a new international order, together with the Nuremberg principles of international law – developed by the Allied forces during the trials of German and Japanese war criminals in 1946 – and the Genocide Convention of 1948. A new international order that is relevant to this day: we currently have an International Criminal Court that can try genocide and other crimes against humanity when nations cannot or will not do this themselves, and three regional courts that can make binding judgments on human rights cases. Judgments that often, as in the case of Mr Bărbulescu, affect developments that were unforeseeable during the drafting of the Universal Declaration. During the meetings in 1947 and 1948, the discussions would not have been about Yahoo Messenger, Facebook, or Google, but the human rights principles that the representatives drafted at the time are also relevant in this context.
While the concept of human rights is often criticised – it is said to be an ineffective, Western imperialistic system – it is a framework that I believe in. It not only shows where the boundaries lie – the red line that cannot be crossed – but it is also a framework that is flexible enough to stay relevant in an ever-changing context. When we discussed future scenarios with digital rights activists at a meeting last year, we concluded that our current human rights framework offers enough tools to deal with upcoming threats, even after 70 years. For example, we all have the right to a fair trial. Developments in so-called ‘e-courts’ and ‘robot judges’ often don’t provide these safeguards, but there is a solid basis of human rights standards and rulings you can use to defend yourself against them.
But a framework alone is not enough. Human rights only mean something when that system of values is acted upon: countries must enable the exercising of these rights and cannot actively infringe on the rights of their citizens. And, when countries do not abide by these rules, there must be independent authorities where citizens can go to enforce them.
Courts are important gatekeepers in this context. Not only do they have the final say in judging complaints about human rights violations, they also play a crucial role in keeping the human rights framework itself relevant. Courts can often respond more quickly to recent events – including today’s rapid technological developments – than politicians and lawmakers. Because the European Court in the case of Mr Bărbulescu interpreted the concept of ‘confidentiality of correspondence’ in a way that fits our digital age, tomorrow we can send a WhatsApp message during working hours with impunity.
The role of the judge: the right to privacy as an example
Most people think the concept of ‘privacy’ was invented in the United States, but that is incorrect. Often people refer to the famous article in the 1890 Harvard Law Review (‘The Right to Privacy’) by American lawyers Warren and Brandeis, but an English court already formulated the concept 40 years earlier, in the Prince Albert v. Strange case.
In 1849, Prince Albert, husband of the British Queen Victoria, filed a suit to prevent the intended publication of their artworks. Victoria and Albert loved making engravings in their spare time: they made sketches of each other, their children, dogs, birds, fairies – a mixture of intimate scenes and fantasy. Engravings have to be printed and their printer had not only made prints for the Royal couple but had also secretly made prints for himself. At a certain moment he sold no less than sixty of these clandestine prints (for the princely sum of 5 British Pounds, which in today’s value is approximately 730 Euros) to a certain Mr Judge, who wanted to display and sell the prints. Judge also wanted to publish a catalogue of the prints, which would be produced by a printer, Mr Strange.
When Victoria and Albert heard this, Prince Consort Albert demanded – the monarch herself cannot sue anyone, nor be sued – a ban on the publication, exhibition or other sharing of the engravings. With success: not only did the court grant the ban, the judgment also gave the concept of privacy a prominent place for the first time. In its reasoning the court spoke of ‘the current case, where the privacy is the right invaded…’, thereby acknowledging privacy as a right in its own right.
Fast forward to 2013.
When whistle-blower Edward Snowden revealed that the American secret services had direct access to personal information at companies like Facebook, Microsoft, and Apple, alarm bells went off for the Austrian law student Max Schrems. Schrems had already been investigating the way Facebook handled the information of its users; after a conversation with a lawyer from Facebook, he filed a request to access his information on the platform. In response, he received a CD-ROM with 1,222 pages of information, including data he had deleted online, and data from friends of his own Facebook friends – people he did not know at all. Max filed a complaint with the Data Protection Authority in Ireland, where Facebook has its registered office in Europe. By transferring personal data to the United States, he argued, Facebook was playing the role of purveyor of personal data to the NSA.
The legal basis that permitted Facebook to send data to the US was the so-called Safe Harbour Agreement. Based on European privacy legislation at the time, companies could only send personal data to countries outside of the EU when those countries could provide a level of protection equal to EU standards. The United States, to which a lot of data is sent, but where the legal protection of privacy across different states is far from universal, did not offer sufficient safeguards. To still enable the export of data, the European Commission and the US concluded the Safe Harbour Agreement in 1998. American companies that joined the Safe Harbour Framework would then be deemed safe enough for data export.
Schrems’ complaint was referred to the Court of Justice of the European Union, which on 6 October 2015 declared Safe Harbour invalid. A new framework had to be developed. To put it differently: because a student got upset and went to court, all EU citizens got a chance for better protection of their personal data.
Strategic litigation: when the cause exceeds the case
Cases like that of Max Schrems are part of a long tradition of going to court, not only to protect human rights but also to promote them. These are not just regular court cases but strategic litigation, in which the goal pursued transcends that of the individual case. A nice recent example is the Urgenda case here in the Netherlands, where the government was forced to adhere to its commitments to prevent climate change. Many strategic cases from the United States capture our imagination, like Brown v. Board of Education, a decision that enabled the desegregation of schools. Or Obergefell v. Hodges, which finally legalised gay marriage in 2015.
But there is a rich history of social lawyering here in Europe as well, which resulted in many victories, including on the right to hold demonstrations, or the right to freedom of speech. Some cases were big and dramatic, and resulted in huge leaps forward in a short timespan; others were years or decades in the making and part of a carefully planned strategy. What all these cases have in common is that they allow courts to protect human rights where legislation and policy do not suffice.
I started with a Godwin link when talking about the development of our current human rights system. There is a second Godwin to be made that can teach us important lessons on the importance of not only holding nations and governments to account, but companies as well.
IBM as the technological purveyor to the Nazi regime
When Hans de Zwart gave the first Godwin lecture in 2014, as Director of Bits of Freedom, he talked about the unconscionable way in which IBM, through its subsidiary Dehomag, made itself the technological purveyor to the Nazi regime. Under the slogan, ‘if it can be done, it should be done,’ concentration camps were supplied with an efficient punch card system from IBM. As Hans mentioned at the time, this raises questions about the current activities of IBM, which with great enthusiasm has thrown itself into the market for so-called smart cities, in which data from all kinds of systems – transport, healthcare, government – can be connected to each other and analysed so that someone (who? Big Brother?) can make ‘smarter decisions.’ The recent reports about the surveillance system IBM built in the Philippines for President Duterte – whose war on drugs has already claimed thousands of lives and who can now, thanks to IBM technology, keep a very close eye on everybody on his kill list – should fill everyone with fear about what could potentially happen with those smart cities.
News broke in March that IBM had used almost a million photographs from the photo website Flickr to train its facial recognition software. The people in those photographs had not given their consent for this, and you could wonder whether they, if asked, would have consented to helping develop software that could ultimately be used to surveil them. In reaction to these reports IBM stated that people had the option to opt out of the dataset. But how, exactly, can people do this when they don’t even know that photographs of their faces are being used to train an algorithm? Not only does IBM’s behaviour conflict with Flickr’s ‘don’t be creepy’ community guidelines, it also violates the privacy rights of all the people whose faces wound up in the dataset without their consent.
We are continuously confronted with unethical and sloppy companies
IBM obviously is incapable of learning lessons from the past. Unfortunately, the company is not alone in this regard. Nearly every day we are confronted with numerous examples of the unethical and sloppy ways in which companies treat our data. Election manipulation, data leaks, there is no end to it.
In a time where more and more parts of our lives are digitised (how much can one still do without DigiD, internet banking, and WhatsApp?) and where not only our personal data, but also the facilitation of our right of access to information and our freedom of speech is put in the hands of internet giants, the question of how we can hold these private parties accountable is becoming increasingly urgent.
Judges also have an important role to play in holding companies accountable. Recently, for example, a settlement was negotiated to compensate victims of the deportations from France during the Second World War. The French national train company, SNCF, transported 76,000 Jews to Nazi camps during WWII, of whom only about 3,000 survived. The SNCF did pay compensation for this, but only to the next of kin still living in France, effectively excluding a large group of people. When the train company started to bid on lucrative procurement contracts in multiple states in the US, a strong lobby was started by the next of kin living there. They failed to get legislation in place to exclude SNCF from competing for contracts, so the family members went to court in multiple states, leading to a settlement between the US and France.
France paid 60 million dollars into a fund to compensate the next of kin, in exchange for a halt to further court cases being brought against SNCF in the US. In the meantime, claims have been awarded to families in Canada, Peru and Mexico. In December 2018, the NS (the Dutch national railway company), which was responsible for transportation to Westerbork during the Second World War, started setting up a similar fund.
Litigation against big tech companies is gaining more and more traction. It would be good if there would soon be a couple of court decisions that force the IBMs of this world to change their behaviour.
Protecting our human rights through the courts
So far, I have mainly talked about privacy, but human rights in the digital age are about much more than privacy and freedom of speech alone. The organisation that I am Director of, the Digital Freedom Fund, supports strategic litigation to protect digital rights in Europe. We finance strategic cases and bring together the different organisations that are fighting for digital rights, so that together, they can take even stronger cases to court and run campaigns. We work under the motto ‘digital rights are human rights.’ With this we try to make clear that all the rights laid down in the Universal Declaration of Human Rights, the European Convention, European Charter, and numerous other human rights treaties, are also applicable in the digital realm.
That also means fundamental socio-economic rights, like our right to schooling, housing, healthcare and social security. For example, the upcoming report from the Special Rapporteur of the United Nations on extreme poverty and human rights looks at the ‘digital welfare state.’ During recent field visits to the United States and the United Kingdom, the Rapporteur took a careful look at the impact that automated decision-making has on the rights of economically vulnerable groups. In America he raised the issue of the ‘datafication’ of homeless people and the use of systems that connect the homeless to homeless services (a kind of coordinated access system). Last year, after his visit to the United Kingdom, the Rapporteur criticised the growing automation of the social security system and ‘the gradual disappearance of the British post-war welfare state behind websites and algorithms.’
The poor, as the Special Rapporteur warned, are often the testing ground for governments wanting to introduce new technologies. In other words: in the future, we all run the risk of getting a ‘computer says no’ answer when trying to access the social services we are entitled to.
Citizens are being profiled without committing any criminal offence
The Netherlands also takes part in this. The government is using the “Systeem Risico Indicatie” (‘SyRI’, or System for Risk Indication) to prevent the abuse of social benefits and fraud. By combining data from different government services like the UWV (administration for employees’ insurance), the Belastingdienst (tax authorities), and the Sociale Verzekeringsbank (Social Insurance Bank), the system determines who has an increased risk of committing fraud. Please note the word ‘risk’ here: citizens are being profiled without committing any criminal offence, all so Big Brother can keep a closer eye on things. Without being notified, citizens are placed on a list and can be investigated further by the police (or another government institution). For now, the SyRI system is only used in low income neighbourhoods and municipalities that house a large percentage of non-Western immigrants. Seventy-five years later, we again have a situation in the Netherlands where people are placed on a list based on, amongst other things, their ethnicity.
But: fortunately, we still have the courts. Led by PILP (the Public Interest Litigation Project) and “Platform Bescherming Burgerrechten” (Platform for the Protection of Civil Rights), several organisations – together with Maxim Februari, who gave this lecture two years ago – have sued the Dutch government, trying to get the court to rule that SyRI is illegal.
What is possible when we go to court
That brings us to the last topic of today: what is possible? What can we accomplish in court to protect our human rights in the digital age?
Earlier, we looked at the example of the case of Max Schrems, who ensured that exporting our data to the United States based on a flawed agreement is no longer possible. The NGO Digital Rights Ireland did something similar when it challenged the European Directive on data retention, which forced internet and telecom providers in Europe to collect all kinds of personal data from users and retain that data for six months to two years. The case started in 2006 and lasted no less than eight years, but their persistence paid off: the Court of Justice of the European Union declared the Directive invalid in 2014. At the moment, there are multiple cases ongoing at the national level to ensure that EU member states change the legislation they based on this now invalid Directive.
The new European privacy law, the GDPR, offers many opportunities to protect digital rights. The Regulation contains multiple provisions that give us, as citizens, more control over who can take what data from us and offers opportunities to fight abuses. Might we soon see cases brought under the GDPR that put a stop to the so-called ‘AdTech-industry’, the companies that bid on your profile and mine in superfast electronic auctions, to then show targeted advertisements on the web pages we visit?
In the future, will we see CEOs in the dock?
Another goal of a court case can be truth finding. And that brings me to my final Godwin: the trial against SS officer Adolf Eichmann in 1961 was not only used by the public prosecutor to get Eichmann convicted for his crimes as an individual, but also to create as complete a record as possible of the Holocaust. The public prosecutor, Gideon Hausner, took 56 days to hear 112 witnesses and present countless documents.
Currently, we rarely see – only by accident or thanks to a whistle-blower – a glimpse of the abuses happening inside the major tech companies. In the future, might we see their CEOs in the dock, in trials that bring to light details of how their companies helped manipulate elections, spy on civilians and develop war technology?
Going to court is not the only way to reach those goals. But it is a powerful tool and a tool that only gains in power when used in combination with other tools for social change: campaigning, lobbying and protesting. Litigation is no panacea, but it is an important part of the puzzle.
And it is important that we use all the means we have available, now, in a time when non-liberal regimes are becoming more and more prominent and they have increasingly easy access to the technical means to – with or without the help of companies – undermine the rule of law.
I quote from the book of human rights researcher Kathryn Sikkink, Evidence for Hope:
‘We live in dangerous times. Never has it been so important that domestic and international human rights advocates and scholars collaborate. We must take action guided by past successes in promoting human rights and based on our best history and social science … Human rights will continue to be a discourse that can mobilize both domestic and international publics.’
I am ready. Will you join me?