Privacy or Paycheck: Protecting Workers Against Surveillance

By Tatum Millet, 3rd August 2020

COVID-19 changed how the world “works.” Ordered out of offices, workers under quarantine set up shop at home and, for many, the change will outlast the pandemic. Work-from-home is here to stay. Even workers who are now returning to “normal” office life will not be returning to the same workplaces they left in March.

As offices begin to reopen, employers are instituting safety measures that could permanently change workplace culture. 

The future of work in a post-COVID world is still taking shape, but whether at home or at the office, worker privacy is poised to become a thing of the past. 

As managers fret over maximising the productivity of remote employees, workers are learning to cope with invasive technologies that track their keystrokes and monitor their screens.

Companies reopening for in-person work are rolling out employee monitoring systems using wearable technology, video surveillance, and mobile-tracking apps to contain the spread of the virus.

COVID-19 has increased the surveillance power of employers, and privacy advocates warn that technologies adopted during the pandemic could “normalise” aggressive workplace surveillance for years to come. 

Yet, for many workers, particularly workers in the gig economy, constant surveillance has been the norm for years. Examining how the pandemic has further entrenched a system in which work is mediated, monitored, analysed, and optimised by algorithms—and confronting the position of workers within that system—demonstrates that protecting the digital rights of workers requires more than protecting their data. It requires addressing the imbalance of power between employers and the workers trying to scratch a living out of an algorithm. 

The lines between work and non-work have bled together over the past few decades. As platforms and employers increasingly rely on algorithms and monitoring tools to manage workers, widespread “datafication” has transformed the relationship between workers and management so that “an individual employee becomes a score on a manager’s dashboard, an upward or downward trajectory, a traceable history of tasks performed, or a log file on a company’s server.”

This data is valuable to employers, and workers produce it all the time—even while sleeping, workers have productive potential. Professor Peter Fleming, writing on contemporary management ideology, warns, “ominously, we are now permanently poised for work.” 

Schemes to algorithmically optimise productivity are affecting workers across economic sectors, from the tech employee instructed to wear a Fitbit 24/7 to the Deliveroo driver rushing to meet fine-grained performance targets and the warehouse worker monitored during bathroom breaks.

Yet, digital surveillance does not impact workers equally. Economically vulnerable workers often have no choice but to consent to invasive surveillance measures. In contrast, workers with bargaining power—often highly skilled workers, or workers with strong union support—have the power to resist. Around the world, as the gig economy continues to break down organisational networks, isolating workers in states of precarious employment, a tiered system of surveillance inequality is taking shape. 

In order to protect the most vulnerable workers from the exploitative practices made possible by digital technology, campaigns to protect the personal data of gig economy workers must be complemented by efforts to secure stronger labour rights.  

Recent efforts by gig economy workers to protect their employment rights demonstrate just how closely the problem of worker surveillance is entwined with broader issues of labour today.

In the UK, Uber drivers sued Uber for improperly categorising them as freelance workers instead of employees with benefits and protections. Yet, without access to Uber’s platform data, drivers cannot calculate their net earnings to ensure compliance with minimum wage protections or launch an appeal if they are fired by an algorithm. So the drivers turned to the GDPR: with the support of the App Drivers and Couriers Union, a number of UK Uber drivers filed a lawsuit against Uber in the district court in Amsterdam to demand access to their data, including GPS location, hours logged on, tips, and trip details. 

The Uber drivers are not just seeking more control over their data: they are seeking more control over their state of employment. As Anton Ekker, the Amsterdam privacy lawyer representing the drivers, said to the Guardian, “this is about the distribution of power.” Providing gig economy workers with stable wages, benefits, and job security would give workers more leverage to organise in opposition to invasive digital tracking technology.

Crucially, worker organisation throughout the gig economy also promises to expand the impact of local and national victories against platform companies, and facilitate strategic coordination so workers can more effectively take advantage of existing legal mechanisms—such as the GDPR—that have the potential to protect their rights.  

As the COVID-19 pandemic threatens to hurl vulnerable workers into an even more precarious economic future, it is also expanding the surveillance power of employers. Protecting the digital rights of workers against this ominous “future of work” in a manner that fully addresses the state of economic inequality in the gig economy calls for more than data safeguards—it calls for empowerment.

Tatum Millet is a 2L at Columbia Law School and a 2020 summer intern at the Digital Freedom Fund.

Photo by Daria Shevtsova from Pexels

The Facebook Ruling that Set a Competition Law Precedent

By Maria Smith, 30th July 2020

Social media icons with 'Facebook' icon in focus

This article was co-authored by Maria Smith and Tatum Millet.

Late last month, Germany’s highest court ruled that Facebook had abused its market dominance to illegally harvest data about its users. The ruling upholds an earlier decision by the country’s antitrust watchdog, the Bundeskartellamt.

The case presents an example of the role competition law can play in holding corporate actors accountable for business practices that violate digital rights.

Facebook’s terms and conditions are written to allow the company to collect almost unlimited amounts of user data from both Facebook-owned and third-party websites. German regulators successfully used a novel antitrust argument to show that Facebook had pressured users into making an all-or-nothing choice, forcing them to either submit to unlimited data collection or not use the site at all.

The court determined that Facebook occupies a dominant position within the social media market and, for many users, giving up Facebook means giving up their online connections.

By taking advantage of its market dominance to push users into consenting to invasive data collection and data-combining practices, Facebook violated competition laws meant to protect consumers from exploitative abuse. The court’s interpretation of personal data collection as a form of “payment” for using Facebook is an important development in reframing competition law concepts to reflect the realities of digital markets.

Andreas Mundt, Germany’s top antitrust enforcer, applauded the court’s decision. “Data are an essential factor for economic strength, and a decisive criterion in assessing online market power,” Mr Mundt said. “Whenever data are collected and used in an unlawful way, it must be possible to intervene under antitrust law to avoid an abuse of market power.” 

Facebook must now alter its practices in Germany by allowing users to block the company from combining their data on Facebook with data about their activities on other websites and apps. Facebook’s response to the ruling? The company said that it “will continue to defend [its] position that there is no antitrust abuse.”

The practice struck down by German authorities –– combining users’ data from across millions of websites and apps –– is the very practice that has allowed Facebook to balloon into the advertising giant it is today. This case demonstrates how Facebook wielded its dominance in the social media market to deprive users of the ability to meaningfully consent to personal data processing.

The company’s unique access to the personal data of billions of users has allowed it to secure a stranglehold on the market for targeted online advertising. As the finding shows, this position has allowed Facebook to exert great power over the digital economy and further stifle competition.

In June, DFF convened a Competition Law Workshop with digital rights organisations from across Europe and the US to explore how anti-competitive practices could be challenged to defend digital rights. Participants identified instances in which potentially anti-competitive practices are playing out in the digital context, charting possible legal challenges to, among other issues, intermediaries abusing their market dominance.

The group also identified ways to strengthen the regulatory capacity of European bodies. In early June, the European Commission launched two public consultations to seek views on the Digital Services Act package and on a New Competition Tool.

After the DFF workshop, a group of participants drafted a response to this consultation, urging the Commission to keep digital rights in focus when analysing the impact of the proposed regulatory tools. These participants note that “large online platforms not only act as economic gatekeepers, but also as ‘fundamental rights’ gatekeepers.”

At a time when personal, social, and political life increasingly plays out across online platforms, it is urgent that we find ways to ensure that regulators have the legal and political mechanisms needed to protect privacy, competition, and human rights. 

Germany has set a pro-competition, pro-consumer precedent. As big tech’s “bully” tactics come under scrutiny, momentum is building behind competition law as regulators look for ways to rein in monopolistic practices.

Maria Smith is a 2L at Harvard Law School and a 2020 summer intern at the Digital Freedom Fund.

Tatum Millet is a 2L at Columbia Law School and a 2020 summer intern at the Digital Freedom Fund.

Image by Nordwood Themes on Unsplash

Protecting Children’s Digital Rights in Schools

By Nani Jansen Reventlow, 30th July 2020

Children sitting in a classroom

This post was written in association with The State of Data 2020, an event organised by defenddigitalme, who advocate for children’s privacy in data and digital rights. 

In recent years, cases have been decided across Europe that raise serious issues about the digital rights of children in schools.

Unsurprisingly, schools are not immune to the increased digitisation sweeping through our societies. Around Europe, many schools have already begun to introduce tools that collect and process students’ biometrics, such as their faces and fingerprints, as well as other personal, sensitive data.

It sounds worrying – and it is. Luckily, however, children’s personal data has special protection under the GDPR.

Processing children’s personal data in the education context isn’t entirely off limits under this regulation. It’s acceptable for schools to use such data if there is a good reason for doing so: for example, if it’s in the interest of the child or the public. This could include monitoring immunisations, keeping attendance, or developing online learning platforms that further the education of the child.

But there are, of course, caveats. For one, the GDPR sets higher standards of justification when it comes to processing people’s unique, biometric data, given that it is an extremely sensitive and invasive form of data collection. Also, if schools don’t ensure their students’ data are completely secure, or if they collect more data than they actually need, they can end up breaching the GDPR.

As we’ll see below, real life examples of such breaches have already begun to surface.

Taking Attendance through Facial Recognition

First we’ll go to Sweden, where the Secondary Education Board in the municipality of Skellefteå began using facial recognition in 2018 to monitor attendance at a secondary school.

In this case, 22 students were monitored by cameras as they entered the classroom every day. The purpose of this experiment was to see if automation could save time in registering students’ attendance. The school hoped that the time saved could then be used for teaching. The students’ facial images, as well as their first and last names, were processed and stored, with consent from their guardians.

Similarly, in France in 2019, a regional authority launched an experimental security project in two high schools –– one in Marseille and one in Nice –– installing facial recognition gates at the entrance of the schools. The gates identified students using facial images, and also scanned for unidentified visitors. The students whose data would be processed had given their consent.

The outcome? Both the Swedish and French Data Protection Authorities found that using facial recognition technology to monitor students breached the GDPR, specifically Article 5 and Article 9.

Under Article 5, personal data processing must be “adequate, relevant, and limited” to what is necessary to carry out a specific purpose.

Article 9, on the other hand, makes clear that processing biometric data is permissible only under strict conditions. One of those conditions is “explicit consent” – something that both schools used to justify their experiments. However, due to the imbalance of power between the data subject –– the students –– and the data controller –– the school –– both data protection authorities concluded that consent had not been freely given.

As well as this, both authorities found that facial recognition technology infringed upon the personal integrity and data protection rights of students to a degree that was disproportionate to the purposes being pursued.

The Swedish data protection authority actually went further, and found that, given how little we still know about the potential risks and consequences of deploying facial recognition, the school should have consulted with the authority for advice before using it.

Fingerprint Scanning for Student Meals

Another key case occurred in Poland, and concerned not facial recognition, but fingerprint collection.

Starting in 2015, Primary School No. 2 in Gdansk processed the fingerprints of 680 children as a method of verifying meal payment at the school canteen.

Once again, the school had obtained consent from the guardians of students whose biometric data was being processed. Four children opted out, and paid using an alternative method. However, these students had to wait at the back of the lunch queue until all the students using biometric data verification had passed through the canteen.

Just as the Swedish and French schools had done, the Polish school argued that it had obtained “explicit consent.” But, once again, the Polish data protection authority rejected this point. This time, the authority found that, because students who didn’t consent had to wait at the back of the line, the school was treating students who refused to consent unequally – thereby placing pressure on students to consent.

As well as this, the authority found, once again, that biometric data collection wasn’t necessary in this case, given that there were alternative, less invasive methods for students to verify their meal payments.

Insecure Apps for Parents, Students and Teachers

Moving on to Norway: in 2019, the Education Agency of the Municipality of Oslo launched an app that allowed parents and students to send messages to school staff.

Soon after the app launched, Aftenposten, a widely read Norwegian news outlet, broke the news that the app contained a major security vulnerability. Apparently anyone who could log in to the portal could potentially gain access to every communication that had been sent through the app – as well as the personal information of 63,000 school children in Oslo.

The Norwegian data protection authority found that the Education Agency had breached the GDPR – Article 5.1(f) – by launching an app without properly testing it to ensure the information on it would be stored securely.

The authority also found that the Education Agency breached Article 32 of the GDPR, which requires data controllers to ensure a level of security appropriate to the risk. In light of the special vulnerability of children, the Education Agency did not take sufficient measures to ensure the confidentiality and integrity of its messaging system.

As a result of the findings, the data protection authority imposed a fine on the Education Agency. It also imposed a similar fine on the Municipality of Bergen, where data security insufficiencies exposed the personal data of students by allowing unauthorised users to access the school’s administrative system, as well as the digital learning platform where students’ classwork, evaluations, and personal data were stored.

Recurring Patterns

In considering whether a school violated the data protection rights of students, the data protection authorities of Sweden, France, Poland, and Norway all drew attention to the relationship between a school and its students. Schools are in a position of authority when they monitor students’ data, and students are in a position of dependency. The power imbalance in this relationship will most often invalidate consent as a lawful condition for biometric data processing.

As well as this, the particularly sensitive and vulnerable nature of children’s data processed in a school context –– often relating to issues such as health, identity and development –– requires schools to protect students’ personal data with particularly secure technical measures.

Educational authorities carry out many tasks that require collecting personal data from students. Technologies, including biometric technologies like facial recognition and fingerprinting, present new opportunities for schools to expedite processes like attendance monitoring, school security, and lunch payment.

But the recent GDPR fines imposed on schools for implementing such programs underscore that schools cannot sacrifice the privacy of their students for expediency.

Whatever measures schools adopt, they must safeguard children’s rights and be necessary and proportionate to the purpose pursued. If there are alternative measures that do not require the processing of sensitive personal data, and those measures can achieve the same results, then the processing of sensitive personal data is not likely to be justified under the GDPR.