When Climate Justice and Digital Rights Collide

By Daniel Simons, 30th March 2020

Protestor at protest holding sign saying 'No Nature No Future'

What do the climate crisis and digital rights have to do with each other? Are there things campaigners working on these two issues can learn from each other?

These were some of the questions explored at Digital Freedom Fund’s recent annual strategy meeting, where I was fortunate to be one of two representatives from an environmental organisation.

It became clear early on in the discussion that the two issues intersect in various ways, both positive and negative.

Climate change, and efforts to combat it, can negatively affect digital rights. On the other hand, there is also scope for win-wins, and there are ways in which the digital rights community can help tackle the problem.

How Climate Change Threatens Digital Rights

Climate change threatens life as we know it, so it is a threat to digital rights too, albeit in ways that are difficult to foresee.

We are already seeing some direct impacts, such as the recent mobile network outages during Australia’s catastrophic bushfires, or the decision by California’s utilities to implement a series of planned blackouts late last year to avoid sparking wildfires.

Less directly, but more insidiously, governments are turning to surveillance and control technologies to monitor refugees and migrants, who are increasingly on the move as a result of climate change.

Action against climate change, while clearly necessary, may also affect digital rights. Taxes on electricity, for example, may cause power-hungry online service providers to pass the increased cost on to their users, whether in the form of higher bills or greater monetisation of their personal data.

Several participants in the strategy meeting remarked on the roll-out of smart meters, which help households reduce their energy use, and assist grid managers in improving efficiency and integrating renewable energy. But the data collected by smart meters can paint an intimate picture of a person’s household activities and lifestyle. It is therefore necessary that proper privacy safeguards are put in place.
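
Smart-meter privacy is partly a question of data minimisation. As a purely illustrative sketch (the readings and window size below are invented), one safeguard that has been proposed is coarsening fine-grained readings before they leave the household, so the utility receives totals rather than a minute-by-minute activity profile:

```python
# Illustrative sketch of data minimisation for smart-meter readings.
# The values are invented example readings in kWh per one-minute interval;
# the spikes would reveal when a specific appliance was switched on.
readings = [0.01, 0.01, 0.45, 0.50, 0.02, 0.02]

def aggregate(readings, window):
    """Sum fine-grained readings into coarser windows before transmission."""
    return [sum(readings[i:i + window]) for i in range(0, len(readings), window)]

# With a 3-minute window the utility sees only totals, not the spikes.
coarse = aggregate(readings, window=3)
print([round(x, 2) for x in coarse])
```

This is only one possible safeguard among several; others include aggregating across households and imposing strict retention limits.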

Activists working on climate issues face a particular risk of interference with their digital rights. They are frequent targets of unjustified surveillance and hacking, both by governments and corporations. Even scientists may fall prey to this, as the 2009 theft of thousands of emails and other documents from the University of East Anglia’s Climatic Research Unit showed.

Enlisting Digital Rights to Fight Climate Change

Last year saw “flygskam” (flight shame) become the latest Nordic word to enter mainstream English usage. Public awareness of how taking a plane contributes to the climate crisis is high. But should we also be experiencing “dataskam” – embarrassment about how our digital habits warm the planet? The IT sector, broadly defined, accounts for more than 2% of global emissions, which is in the same range as aviation.

In 2009, Greenpeace launched the Cool IT Challenge, a three-fold call for companies to provide emissions-saving IT solutions to other sectors, reduce their own carbon footprint by powering themselves with renewable energy, and advocate for strong political action on the climate. Facebook was an early campaign target: in 2011 it “unfriended coal” and became the first major IT corporation to commit to going 100% renewable. Many other leading companies, including Apple and Google, followed suit.

Subsequent campaigns have pushed device manufacturers to go renewable, too. How IT corporations power their operations may not be a digital rights issue as such, but the digital rights community can, and I think should, help amplify such demands for the industry (including less well-known players) to go green – just as many employees of the IT companies themselves are speaking up.

Discussions in the strategy meeting uncovered some areas in which climate action and defending digital rights go hand in hand. Measures to protect privacy can also be a win for the climate: limiting the volume and duration of personal data storage, for example, also reduces power consumption.

Apart from pushing companies to limit collection and storage of such data, digital rights campaigners can educate the public on how to tighten existing privacy settings. The “right to repair” is another shared cause that has been championed by both digital rights organisations and environmental ones. Poor repairability of devices disempowers users and causes unnecessary manufacturing emissions, not to mention the growing mountain of e-waste. With the EU announcing new regulations to promote the right to repair, might this be an interesting area for future strategic litigation?

Parallels in Strategic Litigation

A well-attended session at the strategy meeting run by ClientEarth’s Amy Rose examined whether the digital rights field can learn anything from the way the environmental movement has used strategic litigation to combat climate change. It was noted that there are some parallels between the erosion of privacy and climate change: in both cases, the problem affects the public as a whole, leading to a degree of complacency.

In both cases, moreover, a narrative has been promoted that responsibility for solving the problem lies with individuals, not companies. This narrative has been a useful tool for fossil fuel and IT companies seeking to avoid accountability.

In the climate arena, legal campaigners have used securities law to put a spotlight on corporations’ role, compelling them to start disclosing their exposure to climate liability as a risk to their investors.

Activists, and even cities and counties, have also tapped into academic research into which companies are the biggest contributors to the anthropogenic greenhouse gases in the atmosphere – the “carbon majors” – to hold these companies accountable for the resulting human rights harms, or for climate-related torts. Might securities law also be a tool to force companies to disclose their possible liability for privacy breaches? Would the concept of “data majors” have any relevance in defending users’ rights?

Continuing the Dialogue

There was a general sense at the end of the strategy meeting that the dialogue should continue. Apart from ensuring that we are not working at cross-purposes on issues such as smart meters, there is scope for joint campaigning between the environmental and digital rights movements. There is also much scope for a useful exchange of lessons on how to do strategic litigation well.

Daniel Simons is Senior Legal Counsel Strategic Defence at Greenpeace International.

Image by Markus Spiske on Unsplash

Against Mass Surveillance of Air Passengers

By Bijan Moini, 26th March 2020

Passengers walking through an airport

Since 2018, air passengers in the European Union have been subject to mass surveillance and algorithmic profiling.

In violation of the Charter of Fundamental Rights of the European Union (CFR), EU Member States collect, store and analyse comprehensive data on air travel.

The German Society for Civil Rights (Gesellschaft für Freiheitsrechte, GFF) and the Austrian NGO epicenter.works, with the support of DFF, filed a series of strategic lawsuits aimed at reaching the Court of Justice of the European Union (CJEU). A German court has now referred several cases to the CJEU, bringing us closer to toppling the directive that provides the legal basis for mass surveillance of flight passengers.

We are used to being treated differently when traveling by air compared to journeys by, say, train. Flying involves identity checks, manual body searches or even full body scans, and sometimes interviews. Many of the precautionary measures taken at airports today were first introduced by the US in the aftermath of the 9/11 terrorist attacks. However, the measures visible to us as air passengers are only the tip of the iceberg.

The collection of data on European air travel presents a similarly severe interference with our fundamental rights.

For every passenger on a flight to or from the EU, a Passenger Name Record (PNR) is transferred from the air carrier to the national police. The basis for all of this is the European PNR Directive (Directive 2016/681). A PNR contains up to 20 data items, including date of birth, details of accompanying persons, payment information, and the IP address used for online check-in.

Together with the information on the flight itself, the PNR offers a detailed picture of the passenger. Sound like mass surveillance to you? It sure is.

How Did it Come to This?
 
The US first introduced mass retention of passenger data in November 2001. Other countries, including Canada and Australia, planned to introduce similar measures. Despite some fierce opposition in the EU Parliament based on data protection concerns, the EU concluded PNR agreements with these countries between 2011 and 2014.

This put mass surveillance of flight passengers on a legal footing. In the case of the PNR agreement between the EU and Canada, however, the CJEU found in July 2017 that its provisions violated European fundamental rights.

In the course of negotiating these agreements, the EU repeatedly discussed introducing its own retention system for air travel data. The PNR Directive was adopted in 2016 and has since been gradually implemented in the EU Member States.

Even though the Directive envisages data retention only for flights entering and leaving the EU, all Member States have agreed to an extension clause according to which data on intra-European flights is stored as well.

Dangerous Data Collection

Under the PNR Directive, European government agencies can store PNR data for six months in an identifiable manner (i.e. linked to the real name of the passenger) and then another four and a half years in pseudonymised form.
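
One common way to pseudonymise such records is to replace directly identifying fields with a keyed hash, so that records remain linkable by whoever holds the key but no longer carry a readable name. The sketch below is illustrative only: the key, field names and record layout are assumptions, not a description of the actual PNR systems.

```python
# Illustrative pseudonymisation: replace the name with a keyed hash (HMAC).
# The key, field names and record layout are invented for illustration.
import hashlib
import hmac

SECRET_KEY = b"hypothetical-key-held-by-the-authority"

def pseudonymise(record: dict) -> dict:
    out = dict(record)
    out["name"] = hmac.new(SECRET_KEY, record["name"].encode(),
                           hashlib.sha256).hexdigest()
    return out

record = {"name": "Jane Doe", "flight": "LH123"}
print(pseudonymise(record))  # flight details retained, name replaced by a pseudonym
```

Note that the key holder can still re-identify a person by recomputing the hash of a known name, which is why pseudonymised data is still treated as personal data under EU law.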

They automatically check the passenger data against databases of, for example, wanted persons and stolen passports. But more importantly, they can apply algorithms, so-called “pre-determined criteria”, to the data sets.

These algorithms are supposed to identify “unknown” suspects, for instance by extrapolating patterns from an individual’s flight behaviour and other data to assess whether they are a suspected or potential offender.
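
The actual criteria are not public, but in principle such screening combines database matching with rule-based pattern checks. The sketch below is purely hypothetical: the rule, field names and record layout are invented for illustration and say nothing about the real systems.

```python
# Purely hypothetical sketch of PNR screening: a database match plus
# "pre-determined criteria" applied as rules. All names and rules invented.
def screen(pnr: dict, watchlist: set, rules: list) -> bool:
    """Return True if the record produces a 'hit'."""
    if pnr["name"] in watchlist:             # match against e.g. wanted persons
        return True
    return any(rule(pnr) for rule in rules)  # pattern-based criteria

# Invented example rule: one-way ticket, paid in cash, booked at short notice.
rules = [lambda p: p["one_way"] and p["paid_cash"] and p["booked_days_before"] < 2]

pnr = {"name": "J. Doe", "one_way": True, "paid_cash": True, "booked_days_before": 1}
print(screen(pnr, watchlist=set(), rules=rules))  # True: flagged by the pattern rule
```

Even this toy example shows the problem: anyone who happens to fit the pattern is flagged, whether or not they have done anything wrong.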

A person for whom such a “hit” occurs can become the target of further clandestine police measures, be interrogated at the airport, or even, depending on the national law, be detained.

It is widely disputed whether the analysis of such data is an effective means of investigating or preventing terrorism and other serious crime. If anything, the sheer volume of data can make analysis more difficult.

In Germany, the Federal Government itself assumes a success rate of only 0.1%, meaning that 99.9% of all air passengers – about 169,830,000 people, according to the Government’s forecasts – will be unnecessarily subjected to the processing of sensitive data. And this does not even account for the “false positives” within the 0.1% (i.e. those who have been flagged by the system in error as a person of interest).
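
The arithmetic behind these figures is straightforward; a quick check, taking as an assumption the roughly 170 million passengers per year implied by the article’s numbers:

```python
# Sanity check of the figures above. The 170 million total is inferred
# from the article (169,830,000 is 99.9% of 170,000,000).
forecast_passengers = 170_000_000
flagged = forecast_passengers // 1000        # 0.1% "success rate"
unflagged = forecast_passengers - flagged    # processed without any hit

print(f"flagged:   {flagged:,}")     # 170,000
print(f"unflagged: {unflagged:,}")   # 169,830,000
```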

Violation of the Charter of Fundamental Rights

In view of the large volume of air traffic today, the processing of passenger data is tantamount to mass surveillance.

It violates the Charter of Fundamental Rights as it disregards both the right to respect for private and family life (Article 7) and the right to the protection of personal data (Article 8). The CJEU’s opinion on the EU-Canada PNR agreement clearly confirms this assessment, as does the opinion of the European Data Protection Supervisor.

While we are confident that we have the stronger legal arguments on our side, we depended on a national court to refer a case to the CJEU and ask it for a preliminary ruling on the validity of the PNR Directive under EU law. We therefore started lawsuits in both Germany and Austria challenging the respective national laws implementing the Directive.

In Germany, GFF took a two-track legal approach. On the one hand, we took administrative action against the Federal Criminal Police Office, which processes passenger data in Germany, and demanded that they delete our plaintiffs’ data. On the other hand, we filed civil lawsuits against the airlines transmitting the data records.

Towards the EU’s Highest Court

In January 2020, the Local Court of Cologne, Germany, referred to the CJEU the question of whether the PNR Directive violates fundamental rights. With this referral, we have reached a major milestone in our strategic litigation.

We now hope that the CJEU will, consistent with its own jurisprudence, declare the Directive void and thereby invalidate the legal basis of all national PNR laws.

A judgment is expected no sooner than the end of 2021. A ruling declaring the Directive void would be a huge success for fundamental rights: it would set boundaries for mass surveillance of all our movements (extending the PNR scheme to trains, buses and ferries is already being discussed) and, maybe even more importantly, for the use of algorithms to assess whether an otherwise unsuspicious person is to be considered dangerous and treated as such (more often than not, wrongfully so).

It would also strengthen European civil society by showing that we can use cross-border strategic litigation as a means of improving the legal protections of hundreds of millions of people at the same time.

Bijan Moini is a litigator with Gesellschaft für Freiheitsrechte and in charge of its digital rights cases.
Image by Toby Yang on Unsplash

Protecting Freedom of Thought in the Digital Age

By Susie Alegre, 23rd March 2020

Emoticons and "like" buttons

My work as an independent lawyer focuses on the potential for technology to interfere with our right to freedom of thought, as protected in international law instruments including Article 9 of the ECHR and Article 18 of the ICCPR.

According to these instruments, many rights, like the right to private life and the right to freedom of expression, can be limited in certain circumstances to protect the rights of others or in the interests of public order.

The right to freedom of thought, on the other hand, is absolute under international human rights law. This means that where there has been an interference with our right to freedom of thought, as it relates to the thoughts and feelings we experience in our inner world, such an interference can never be justified. 

Despite this strong level of treaty-based protection, however, the right to freedom of thought has rarely been invoked in the courts. Many legal scholars and commentators have assumed that this is because, in fact, no person or government could ever get inside our minds.

But then I read about the Cambridge Analytica scandal, in which behavioural micro-targeting was used to build individual psychological profiles of voters so that they could be targeted with tailored adverts pressing their unique psychological buttons. It seemed clear to me that the idea that no one could ever interfere with our minds was outdated.

There are three main planks to the right to freedom of thought:

  • The right not to reveal our thoughts.
  • The right not to have our thoughts manipulated.
  • The right not to be punished for our thoughts.

All three are potentially relevant in the digital age, where algorithmic processing of Big Data is relied on to profile and understand individuals’ thought processes in real time for the purpose of targeting them with tailored advertising and other content.

Profiling seeks to infer how we think and feel in real time based on large swathes of data, including our online activity. Research on Facebook claimed that the platform could know you better than your family does, simply by interpreting your “likes”. In this way, interpretation of your social media activity gives a unique insight into your mind.

In another experiment, researchers showed that altering the order of Facebook feeds could manipulate users’ mood. The tailored way that information is delivered to us could change the way we think and feel in a very real way.

But it is not only the tracking and manipulation of our thoughts and feelings that is of concern. The way that information could be used against us is equally worrying. Inferences about our personalities and moods drawn from big data can form the basis for decisions that fundamentally change our life chances, whether by limiting access to financial services, through automated hiring processes, or through risk assessments in the criminal justice system.

Thanks to a DFF pre-litigation research grant, I am currently exploring the ways in which technology and artificial intelligence can be used to try to cross the boundaries into our inner world and to identify where arguments based on the right to freedom of thought could help to challenge these practices before the courts in the UK, Ireland and Spain.

The research is currently in the first phase, gathering relevant reports and examples of practices that could be considered to have implications for freedom of thought. I would very much welcome suggestions for reading and contacts from colleagues working in digital rights in any jurisdiction who would be interested in discussing ways that freedom of thought could be relevant to the work they are doing.

DFF’s Strategy Meeting 2020 offered a unique opportunity to share my initial research on the right to freedom of thought and to get feedback and ideas from an incomparable range of digital rights activists and experts from across Europe and beyond. The chance to test and discuss ideas, and to gain insights into the work of others, opened new and exciting avenues of inquiry that will feed into my research. It also helped me to reach out to new partners. I left Berlin energised and with a long list of people, organisations and topics to follow up with – DFF’s Strategy Meeting really does make things happen.

But the Strategy Meeting was just the beginning – I would love to hear from people and organisations who are already working on these issues or would like to cooperate on future work to keep the momentum from Berlin going.

Susie Alegre is an international human rights lawyer, author and barrister at Doughty Street Chambers specialising in the right to freedom of thought and technology.

Image by George Pagan III on Unsplash