Why Privacy – Not Surveillance – is Security

By Maria Smith, 12th August 2020

As governments across the globe implement measures in response to coronavirus, many turn to technology.

Techno-solutionism is not new; its scale, however, is unprecedented. From wearable contact tracing devices in Singapore to mandatory selfies in Poland, surveillance-laden responses to the pandemic have ushered in the vast collection of personal information, including data on location, biometrics, credit card transactions, home addresses, and occupations.

The powerful rhetoric from governments and optimistic promises from companies assume—or want us to assume—that surveillance guarantees security. But, for many people, privacy means security.

Today we see that the same companies that have reportedly sold spyware used to target human rights activists are in the business of COVID-19 contact tracing, and even the most well-intentioned jurisdictions risk prioritising short-term power flexes over the long-term security of all their residents.

The reactionary turn to technology and big data—typically in tandem—during times of crisis rests on the premise that the more that governments know about people, the better off the people are. But that raises the question: which people?

When surveillance is carried out on a mass scale, the data collected are often used to profile and other particular groups and communities. Targeted surveillance, such as that of welfare recipients or protesters, likewise subjects particular communities to disproportionate scrutiny. Surveillance is never about knowing everything about everybody; it is about exerting control over certain groups more than over the general population. It thus often serves as a tool far beyond its alleged purpose.

Surveillance, at its core, is about power. The most low-tech surveillance practices, from map-making to fingerprinting, emerged from Europeans’ desire to control the populations they colonised. In India, for example, data on caste, religion, and age collected by the British were used to stoke religious tensions and solidify the caste system.

Germany’s census and citizen registration took on new missions when IBM Germany formed a technological alliance with Nazi Germany during WWII. When German occupation forces needed to deliver a quota of Dutch or Czech Jews to Nazi leaders, they located them through census data processed by an IBM machine specifically designed for the function.

The first national census in the United States, in 1790, inquired about the number of free white males, free white females, other free people, and slaves in a household. Early census enumeration, scholar Simone Browne explains, fixed individuals “within a certain time and a particular space, making the census a technology that renders a population legible in [racialising] as well as gendering ways.” Even something as seemingly innocuous as a population count can be used to legitimise discriminatory ends, such as dehumanising people of colour.

The Chinese government has made mass surveillance central to repression. Human Rights Watch reports that some 13 million Muslims live in a nightmarish reality in China’s Xinjiang region, facing a combination of in-person and online surveillance, video cameras with facial recognition technology, and electronic checkpoints. The Chinese government uses the information collected to determine who is detained for torturous “re-education” programmes.

Historically, marginalised populations have been no strangers to the outsized dangers of targeted surveillance.

Abusive or extractive surveillance doesn’t necessarily begin as such, however, and the harms to particular communities are not always readily apparent.   

From forced “digital strip searches” exposing rape victims’ personal information in inappropriate ways, to “digital welfare dystopias” in which technologies are used to target, surveil, and punish the poor, modern surveillance infrastructures endanger the social, political, economic, and physical security of the least powerful communities.

At the Mexico-US border, for example, US Customs and Border Protection agents check entrants’ Facebook and Twitter profiles, “open[ing] the door to discrimination and abuse, and threaten[ing] freedom of speech and association.” Facial recognition technology, a touchstone of punitive policing across the globe that has proven deeply biased, has faced recent backlash over its use to harm racialised and marginalised groups.

In her book, The Age of Surveillance Capitalism, Shoshana Zuboff charts the expansion of surveillance over US citizens and non-citizens alike in reaction to 9/11 and in the name of security. She argues that this atmosphere, by eroding concern for privacy, allowed tech companies to emerge as powerhouses while state actors, often working with those same companies, deployed unparalleled intrusive measures.

Today, governments around the world are operating with expanded emergency powers to fight coronavirus, and many scholars and journalists have expressed concern that these temporary measures could become permanent.

Already, instances of officially sanctioned targeting of marginalised populations are spreading. LGBTQ people face disproportionate danger when information-gathering tactics expand, a reality that predates the pandemic and has only worsened over the last few months. Women and children face especially high risk when health data is politicised, and, according to emerging studies, COVID-related discrimination disproportionately targets non-white people.

Too often, the assessment of the risks and benefits of surveillance occurs behind closed doors and without the input of the communities that have the most to lose. Or it occurs too late. A handful of European countries have recently reined in surveillance measures initially deployed in their rush to react to coronavirus.

Norway halted the use of its COVID-19 contact tracing app, “Smittestopp,” which tracked citizens’ locations by frequently uploading GPS coordinates to a central server. Its decision came just hours before Amnesty International released the results of its investigation into some of the most invasive contact tracing apps—among them “Smittestopp”—which were found to imperil the privacy and security of hundreds of thousands of people.

Courts in France, amid similarly sweeping privacy concerns, temporarily banned the government’s COVID-19 surveillance drones. In the UK, the government has admitted to breaking the law with regard to privacy in its reckless deployment of a test-and-trace programme.

In other instances, an informed assessment of risks and benefits may not take place at all. “Governments are legitimising tools of oppression as tools of public health,” explains Raman Jit Singh Chima, Asia Policy Director and Senior International Counsel at Access Now, a digital rights NGO. Russia’s latest app is reported to track migrant workers’ biometric data—and that’s just the starting point. A member of Russia’s parliament hinted at how the app might be used, explaining that “it will solve all the issues with migrants in Russia. We should have done it [a] long time ago, but because of political correctness and tolerance missed out on solutions to many problems.”   

For many communities, privacy is security. Losing sight of this during any circumstances would be dangerous. Now, at the height of state and corporate power, it could put those who are already vulnerable and face discrimination at even greater risk.

Photo by Igor Starkov from Pexels

The Facebook Ruling that Set a Competition Law Precedent

By Maria Smith, 30th July 2020

This article was co-authored by Maria Smith and Tatum Millet.

Late last month, Germany’s highest court ruled that Facebook had abused its market dominance to illegally harvest data about its users. The ruling upholds an earlier decision by the country’s antitrust watchdog, the Bundeskartellamt.

The case presents an example of the role competition law can play in holding corporate actors accountable for business practices that violate digital rights.

Facebook’s terms and conditions are written to allow the company to collect almost unlimited amounts of user data from Facebook-owned as well as third-party websites. German regulators successfully used a novel antitrust argument to show that Facebook had pressured users into an all-or-nothing choice: either submit to unlimited data collection or not use the site at all.

The court determined that Facebook occupies a dominant position within the social media market and, for many users, giving up Facebook means giving up their online connections.

By taking advantage of its market dominance to push users into consenting to invasive data collection and data-combining policies, Facebook violated competition laws meant to protect consumers from exploitative abuse. The court’s interpretation of personal data collection as a form of “payment” for using Facebook is an important development in reframing competition law concepts to reflect the realities of digital markets.

Andreas Mundt, Germany’s top antitrust enforcer, applauded the court’s decision. “Data are an essential factor for economic strength, and a decisive criterion in assessing online market power,” Mr Mundt said. “Whenever data are collected and used in an unlawful way, it must be possible to intervene under antitrust law to avoid an abuse of market power.” 

Facebook must now alter its practices in Germany by allowing users to block the company from combining their data on Facebook with data about their activities on other websites and apps. Facebook’s response to the ruling? The company said that it “will continue to defend [its] position that there is no antitrust abuse.”

The practice struck down by German authorities, combining users’ data from across millions of websites and apps, is the very practice that has allowed Facebook to balloon into the advertising giant it is today. This case demonstrates how Facebook wielded its dominance in the social media market to deprive users of the ability to meaningfully consent to personal data processing.

The company’s unique access to the personal data of billions of users has allowed it to secure a stronghold over the market for targeted online advertising. As the court’s finding shows, this position has allowed Facebook to exert great power over the digital economy and further stifle competition.

In June, DFF convened a Competition Law Workshop with digital rights organisations from across Europe and the US to explore how anti-competitive practices could be challenged to defend digital rights. Participants identified instances in which potentially anti-competitive practices are playing out in the digital context, charting possible legal challenges to, among other issues, intermediaries abusing their market dominance.

The group also identified ways to strengthen the regulatory capacity of European bodies. In early June, the European Commission launched two public consultations to seek views on the Digital Services Act package and on a New Competition Tool.

After the DFF workshop, a group of participants drafted a response to this consultation, urging the Commission to keep digital rights in focus when analysing the impact of the proposed regulatory tools. These participants note that “large online platforms not only act as economic gatekeepers, but also as ‘fundamental rights’ gatekeepers.”

At a time when personal, social, and political life increasingly plays out across online platforms, it is urgent that we find ways to ensure that regulators have the legal and political mechanisms needed to protect privacy, competition, and human rights. 

Germany has set a pro-competition, pro-consumer precedent. As big tech’s “bully” tactics come under scrutiny, momentum is building behind competition law as regulators look for ways to rein in monopolistic practices.

Maria Smith is a 2L at Harvard Law School and a 2020 summer intern at the Digital Freedom Fund.

Tatum Millet is a 2L at Columbia Law School and a 2020 summer intern at the Digital Freedom Fund.

Image by Nordwood Themes on Unsplash