Why Privacy – Not Surveillance – is Security

By Maria Smith, 12th August 2020

A surveillance camera mounted on a navy wall overlooks a security box

As governments across the globe implement measures in response to coronavirus, many turn to technology.

Techno-solutionism is not new; its scale, however, is unprecedented. From wearable contact tracing devices in Singapore to mandatory selfies in Poland, surveillance-laden responses to the pandemic have ushered in the collection of vast amounts of personal information, including data on location, biometrics, credit card transactions, home addresses, and occupations.

The powerful rhetoric from governments and optimistic promises from companies assume—or want us to assume—that surveillance guarantees security. But, for many people, privacy means security.

Today, the same companies that have reportedly sold spyware used to target human rights activists are in the business of COVID-19 contact tracing, and even the most well-intentioned jurisdictions risk prioritising short-term power flexes over the long-term security of all their residents.

The reactionary turn to technology and big data—typically in tandem—during times of crisis rests on the premise that the more governments know about people, the better off the people are. But that raises the question: which people?

When surveillance is carried out on a mass scale, the data collected is often used to profile and “other” certain groups and communities. Targeted surveillance, such as that of welfare recipients or protesters, likewise concentrates scrutiny on particular populations. Surveillance, in other words, is never about knowing everything about everybody; it is about exerting control over some groups more than the general population, and it often serves ends far beyond its alleged purpose.

Surveillance, at its core, is about power. The most low-tech surveillance practices, from map-making to fingerprinting, emerged from Europeans’ desire to control the populations they colonised. In India, for example, data on caste, religion, and age collected by the British were used to stoke religious tensions and solidify the caste system.

Germany’s census and citizen registration took on new missions when IBM Germany formed a technological alliance with the Nazi regime during WWII. When German occupation forces needed to deliver a quota of Dutch or Czech Jews to Nazi leaders, they located them through census data processed by an IBM machine designed specifically for that purpose.

The first national census in the United States, in 1790, inquired about the number of free white males, free white females, other free people, and slaves in a household. Early census enumeration, scholar Simone Browne explains, fixed individuals “within a certain time and a particular space, making the census a technology that renders a population legible in [racialising] as well as gendering ways.” Even something as seemingly innocuous as a population count can be used to legitimise discriminatory ends, such as de-humanising people of colour.

The Chinese government has made mass surveillance central to repression. Human Rights Watch reports that some 13 million Muslims live in a nightmarish reality in China’s Xinjiang region, subjected to a combination of in-person and online surveillance, video cameras with facial recognition technology, and electronic checkpoints. The Chinese government uses the information collected to determine who is detained for torturous “re-education” programs.

Historically, marginalised populations are no strangers to the outsized dangers of targeted surveillance.

Abusive or extractive surveillance doesn’t necessarily begin as such, however, and the harms to particular communities are not always readily apparent.   

From forced “digital strip searches” exposing rape victims’ personal information in inappropriate ways, to “digital welfare dystopias” in which technologies are used to target, surveil, and punish the poor, modern surveillance infrastructures endanger the social, political, economic, and physical security of the least powerful communities.

At the Mexico-US border, for example, US Customs and Border Protection agents check entrants’ Facebook and Twitter profiles, “open[ing] the door to discrimination and abuse, and threaten[ing] freedom of speech and association.” Facial recognition technology, a touchstone of punitive policing across the globe that has proven deeply biased, has likewise faced recent backlash as a tool used to harm racialised and marginalised groups.

In her book The Age of Surveillance Capitalism, Shoshana Zuboff charts the expansion of surveillance over US citizens and non-citizens alike in reaction to 9/11 and in the name of security. She argues that the resulting atmosphere, by eroding concern for privacy, allowed tech companies to emerge as powerhouses while state actors, often working with those same companies, deployed unparalleled intrusive measures.

Today, governments around the world are operating with expanded emergency powers to fight coronavirus, and many scholars and journalists have expressed concern that these temporary measures could become permanent.

Already, instances of officially sanctioned targeting of marginalised populations are spreading. LGBTQ people face disproportionate danger when information-gathering tactics expand, a risk that predates the pandemic and has only worsened over the last few months. Women and children face especially high risk when health data is politicised, and, according to emerging studies, COVID-related discrimination disproportionately targets non-white people.

Too often, the assessment of the risks and benefits of surveillance occurs behind closed doors and without the input of the communities that have the most to lose. Or it occurs too late. A handful of European countries have recently reined in surveillance measures initially deployed in their rush to react to coronavirus.

Norway halted the use of its COVID-19 contact tracing app, “Smittestopp,” which tracked citizens’ locations by frequently uploading GPS coordinates to a central server. The decision came just hours before Amnesty International released the results of its investigation into some of the most invasive contact tracing apps, “Smittestopp” among them, which were found to imperil the privacy and security of hundreds of thousands of people.
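
To see why that architecture drew such scrutiny, consider a minimal sketch, in Python, of what a centralised GPS-reporting app does. Everything here is hypothetical (the endpoint, identifiers, and upload interval are illustrative, not Smittestopp's actual design), but the privacy problem is structural: every upload ties a persistent device ID to a timestamped position.

```python
import time
import uuid

import requests  # any HTTP client would do

# Hypothetical central endpoint -- not the app's real API.
SERVER = "https://health-authority.example/api/locations"
DEVICE_ID = str(uuid.uuid4())  # a persistent ID, stored on first run

def read_gps():
    """Placeholder for the phone's location API."""
    return {"lat": 59.9139, "lon": 10.7522}

while True:
    fix = read_gps()
    # Each upload links a stable identifier to a timestamped position,
    # so the server can reconstruct any user's full movement history.
    requests.post(SERVER, json={
        "device_id": DEVICE_ID,
        "timestamp": int(time.time()),
        "lat": fix["lat"],
        "lon": fix["lon"],
    })
    time.sleep(300)  # "frequently": here, every five minutes
```

Decentralised designs avoid this by keeping identifiable location data on the device, which is why centralised, GPS-based apps attracted the sharpest criticism.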

In France, courts temporarily banned the government’s COVID-19 surveillance drones amid similarly sweeping privacy concerns. In the UK, the government has admitted to breaking the law on privacy in its reckless deployment of a test-and-trace programme.

In other instances, an informed assessment of risks and benefits may not take place at all. “Governments are legitimising tools of oppression as tools of public health,” explains Raman Jit Singh Chima, Asia Policy Director and Senior International Counsel at Access Now, a digital rights NGO. Russia’s latest app is reported to track migrant workers’ biometric data—and that’s just the starting point. A member of Russia’s parliament hinted at how the app might be used, explaining that “it will solve all the issues with migrants in Russia. We should have done it [a] long time ago, but because of political correctness and tolerance missed out on solutions to many problems.”   

For many communities, privacy is security. Losing sight of this during any circumstances would be dangerous. Now, at the height of state and corporate power, it could put those who are already vulnerable and face discrimination at even greater risk.

Photo by Igor Starkov from Pexels

Privacy or Paycheck: Protecting Workers Against Surveillance

By Tatum Millet, 3rd August 2020

COVID-19 changed how the world “works.” Ordered out of offices, workers under quarantine set up shop at home and, for many, the change will outlast the pandemic. Work-from-home is here to stay. Even workers who are now returning to “normal” office life will not be returning to the same workplaces they left in March.

As offices begin to reopen, employers are instituting safety measures that could permanently change workplace culture. 

The future of work in a post-COVID world is still taking shape, but whether at home or at the office, worker privacy is poised to become a thing of the past. 

As managers fret over maximising the productivity of remote employees, workers are learning to cope with invasive technologies that track their keystrokes and monitor their screens.

Companies reopening for in-person work are rolling out employee monitoring systems using wearable technology, video surveillance, and mobile-tracking apps to contain the spread of the virus.

COVID-19 has increased the surveillance power of employers, and privacy advocates warn that technologies adopted during the pandemic could “normalise” aggressive workplace surveillance for years to come. 

Yet, for many workers, particularly workers in the gig economy, constant surveillance has been the norm for years. Examining how the pandemic has further entrenched a system in which work is mediated, monitored, analysed, and optimised by algorithms—and confronting the position of workers within that system—demonstrates that protecting the digital rights of workers requires more than protecting their data. It requires addressing the imbalance of power between employers and the workers trying to scratch a living out of an algorithm. 

The lines between work and non-work have bled together over the past few decades. As platforms and employers increasingly rely on algorithms and monitoring tools to manage workers, widespread “datafication” has transformed the relationship between workers and management so that “an individual employee becomes a score on a manager’s dashboard, an upward or downward trajectory, a traceable history of tasks performed, or a log file on a company’s server.”
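
As a toy illustration of that datafication, here is roughly what an employee can look like from the management software’s side. The schema is entirely hypothetical, not any vendor’s actual data model, but it captures the reduction the quote describes.

```python
from dataclasses import dataclass, field

@dataclass
class WorkerRecord:
    """A hypothetical sketch of how monitoring software might 'see' a worker."""
    worker_id: str
    productivity_score: float                             # the "score on a manager's dashboard"
    score_history: list = field(default_factory=list)    # an "upward or downward trajectory"
    tasks_performed: list = field(default_factory=list)  # a "traceable history of tasks"
    keystrokes_per_minute: float = 0.0                   # raw telemetry feeding the score
    minutes_idle_today: int = 0

# The person behind the record disappears; only the metrics remain.
employee = WorkerRecord(worker_id="w-4821", productivity_score=0.73)
```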

This data is valuable to employers, and workers produce it all the time—even while sleeping, workers have productive potential. Professor Peter Fleming, writing on contemporary management ideology, warns, “ominously, we are now permanently poised for work.” 

Schemes to algorithmically optimise productivity are affecting workers across economic sectors, from the tech employee instructed to wear a Fitbit 24/7 to the Deliveroo driver rushing to meet fine-grained performance targets and the warehouse worker monitored during bathroom breaks.

Yet, digital surveillance does not impact workers equally. Economically vulnerable workers often have no choice but to consent to invasive surveillance measures. In contrast, workers with bargaining power—often highly skilled workers, or workers with strong union support—have the power to resist. Around the world, as the gig economy continues to break down organisational networks, isolating workers in states of precarious employment, a tiered system of surveillance inequality is taking shape. 

In order to protect the most vulnerable workers from the exploitative practices made possible by digital technology, campaigns to protect the personal data of gig economy workers must be complemented by efforts to secure stronger labour rights.  

Recent efforts by gig economy workers to protect their employment rights demonstrate just how closely the problem of worker surveillance is entwined with broader issues of labour today.

In the UK, drivers sued Uber for improperly categorising them as freelance workers instead of employees with benefits and protection. Yet, without access to Uber’s platform data, drivers cannot calculate their net earnings to ensure compliance with minimum wage protections, or launch an appeal if they are fired by an algorithm. So the drivers turned to the GDPR: with the support of the App Drivers and Couriers Union, a number of them filed a lawsuit against Uber in the district court of Amsterdam to demand access to their data, including GPS location, hours logged on the app, tips, and trip details.
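
The stakes of that data access are concrete. Below is a back-of-the-envelope sketch of the minimum-wage check a driver cannot perform without their trip records; every figure and field name is hypothetical.

```python
# Hypothetical trip records of the kind drivers are requesting under the GDPR.
trips = [
    {"fare": 12.50, "tip": 2.00, "commission": 3.12, "minutes_worked": 35},
    {"fare": 8.00,  "tip": 0.00, "commission": 2.00, "minutes_worked": 22},
]
COST_PER_TRIP = 1.50  # illustrative fuel and vehicle expenses

net = sum(t["fare"] + t["tip"] - t["commission"] - COST_PER_TRIP for t in trips)
hours = sum(t["minutes_worked"] for t in trips) / 60
print(f"Effective hourly wage: {net / hours:.2f}")  # compare against the minimum wage
```

Without the platform’s records of fares, commissions, and working time, no part of this calculation is possible.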

The Uber drivers are not just seeking more control over their data: they are seeking more control over their state of employment. As Anton Ekker, the Amsterdam privacy lawyer representing the drivers, said to the Guardian, “this is about the distribution of power.” Providing gig economy workers with stable wages, benefits, and job security would give workers more leverage to organise in opposition to invasive digital tracking technology.

Crucially, worker organisation throughout the gig economy also promises to expand the impact of local and national victories against platform companies, and facilitate strategic coordination so workers can more effectively take advantage of existing legal mechanisms—such as the GDPR—that have the potential to protect their rights.  

As the COVID-19 pandemic threatens to hurl vulnerable workers into an even more precarious economic future, it is also expanding the surveillance power of employers. Protecting the digital rights of workers against this ominous “future of work” in a manner that fully addresses the state of economic inequality in the gig economy calls for more than data safeguards—it calls for empowerment.

Tatum Millet is a 2L at Columbia Law School and a 2020 summer intern at the Digital Freedom Fund.

Photo by Daria Shevtsova from Pexels

The Facebook Ruling that Set a Competition Law Precedent

By Maria Smith, 30th July 2020

Social media icons with 'Facebook' icon in focus

This article was co-authored by Maria Smith and Tatum Millet.

Late last month, Germany’s highest court ruled that Facebook had abused its market dominance to illegally harvest data about its users. The ruling upholds an earlier decision by the country’s antitrust watchdog, the Bundeskartellamt.

The case presents an example of the role competition law can play in holding corporate actors accountable for business practices that violate digital rights.

Facebook’s terms and conditions are written to allow the company to collect almost unlimited amounts of user data from Facebook-owned, as well as third party, websites. German regulators successfully used a novel antitrust argument to show that Facebook had pressured users into making an all-or-nothing choice, forcing them to either submit to unlimited data collection or not use the site at all.

The court determined that Facebook occupies a dominant position within the social media market and, for many users, giving up Facebook means giving up their online connections.

By taking advantage of its market dominance to push users into consenting to invasive data collection and data-combining policies, Facebook violated competition laws meant to protect consumers from exploitative abuse. The court’s interpretation of personal data collection as a form of “payment” for using Facebook is an important development in reframing competition law concepts to reflect the realities of digital markets.

Andreas Mundt, Germany’s top antitrust enforcer, applauded the court’s decision. “Data are an essential factor for economic strength, and a decisive criterion in assessing online market power,” Mr Mundt said. “Whenever data are collected and used in an unlawful way, it must be possible to intervene under antitrust law to avoid an abuse of market power.” 

Facebook must now alter its practices in Germany by allowing users to block the company from combining their data on Facebook with data about their activities on other websites and apps. Facebook’s response to the ruling? The company said that it “will continue to defend [its] position that there is no antitrust abuse.”

The practice struck down by German authorities, combining users’ data from across millions of websites and apps, is the very practice that has allowed Facebook to balloon into the advertising giant it is today. This case demonstrates how Facebook wielded its dominance in the social media market to deprive users of the ability to meaningfully consent to personal data processing.
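
In data terms, the struck-down practice amounts to a join: activity gathered by trackers embedded in third-party sites is linked to the on-platform profile through a shared identifier. A deliberately simplified sketch, with hypothetical identifiers and field names:

```python
# On-platform profile.
profile = {"user_id": "u123", "interests": ["running"]}

# Events reported by trackers embedded in unrelated websites.
offsite_events = [
    {"user_id": "u123", "site": "news.example", "action": "read_article"},
    {"user_id": "u123", "site": "shop.example", "action": "viewed_shoes"},
]

# The combination German users can now switch off: joining both sources
# on the shared ID yields a far richer advertising profile than either
# source alone.
combined = {**profile, "offsite_activity": [
    e for e in offsite_events if e["user_id"] == profile["user_id"]
]}
print(combined)
```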

The company’s unique access to the personal data of billions of users has allowed it to secure a stranglehold on the market for targeted online advertising. As the finding shows, this position has enabled Facebook to exert great power over the digital economy and further stifle competition.

In June, DFF convened a Competition Law Workshop with digital rights organisations from across Europe and the US to explore how anti-competitive practices could be challenged to defend digital rights. Participants identified instances in which potentially anti-competitive practices are playing out in the digital context, charting possible legal challenges to, among other issues, intermediaries abusing their market dominance.

The group also identified ways to strengthen the regulatory capacity of European bodies. In early June, the European Commission launched two public consultations to seek views on the Digital Services Act package and on a New Competition Tool.

After the DFF workshop, a group of participants drafted a response to this consultation, urging the Commission to keep digital rights in focus when analysing the impact of the proposed regulatory tools. These participants note that “large online platforms not only act as economic gatekeepers, but also as ‘fundamental rights’ gatekeepers.”

At a time when personal, social, and political life increasingly plays out across online platforms, it is urgent that we find ways to ensure that regulators have the legal and political mechanisms needed to protect privacy, competition, and human rights. 

Germany has set a pro-competition, pro-consumer precedent. As big tech’s “bully” tactics come under scrutiny, momentum is building behind competition law as regulators look for ways to rein in monopolistic practices.

Maria Smith is a 2L at Harvard Law School and a 2020 summer intern at the Digital Freedom Fund.

Tatum Millet is a 2L at Columbia Law School and a 2020 summer intern at the Digital Freedom Fund.

Image by Nordwood Themes on Unsplash