As governments across the globe implement measures in response to coronavirus, many turn to technology.
Techno-solutionism is not new; its scale, however, is unprecedented. From wearable contact tracing devices in Singapore to mandatory selfies in Poland, surveillance-laden responses to the pandemic have ushered in vast collection of personal information, including data on location, biometrics, credit card transactions, home addresses, and occupations.
The powerful rhetoric from governments and optimistic promises from companies assume—or want us to assume—that surveillance guarantees security. But, for many people, privacy means security.
Today we see that the same companies that have reportedly sold spyware used to target human rights activists are in the business of COVID-19 contact tracing, and even the most well-intentioned jurisdictions risk prioritising short-term power flexes over the long-term security of all their residents.
The reactionary turn to technology and big data—typically in tandem—during times of crisis rests on the premise that the more that governments know about people, the better off the people are. But that raises the question: which people?
When surveillance is carried out on a mass scale, the use of the data collected often involves the profiling and othering of certain groups and communities. At the same time, targeted surveillance, such as that of welfare recipients or protesters, often leads to over-surveillance of certain groups and communities. So it is never about knowing everything about everybody but about exerting control over certain groups more than the general population. Thus, surveillance often serves as a tool far beyond its alleged purpose.
Surveillance, at its core, is about power. The most low-tech surveillance practices, from map-making to fingerprinting, emerged from Europeans’ desire to control the populations they colonised. In India, for example, data on caste, religion, and age collected by the British were used to stoke religious tensions and solidify the caste system.
Germany’s census and citizen registration took on new missions when IBM Germany formed a technological alliance with Nazi Germany during WWII. When German occupation forces needed to deliver a quota of Dutch or Czech Jews to Nazi leaders, they located them through census data processed by an IBM machine specifically designed for the function.
The first national census in the United States, in 1790, inquired about the number of free white males, free white females, other free people, and slaves in a household. Early census enumeration, scholar Simone Browne explains, fixed individuals “within a certain time and a particular space, making the census a technology that renders a population legible in [racialising] as well as gendering ways.” Even something as seemingly innocuous as a population count can be used to legitimise discriminatory ends, such as de-humanising people of colour.
The Chinese government has made mass surveillance central to repression. Human Rights Watch reports that some 13 million Muslims live in a nightmarish reality in China’s Xinjiang region, home to a combination of in-person and online surveillance, video cameras with facial recognition technology, and electronic checkpoints. The Chinese government uses information collected to determine who is detained for torturous “re-education” programs.
Historically, marginalised populations are no strangers to the outsized dangers of targeted surveillance.
Abusive or extractive surveillance doesn’t necessarily begin as such, however, and the harms to particular communities are not always readily apparent.
From forced “digital strip searches” exposing rape victims’ personal information in inappropriate ways, to “digital welfare dystopias” in which technologies are used to target, surveil, and punish the poor, modern surveillance infrastructures endanger the social, political, economic, and physical security of the least powerful communities.
At the Mexico-US border, for example, US Customs and Border Protection agents check entrants’ Facebook and Twitter profiles, “open[ing] the door to discrimination and abuse, and threaten[ing] freedom of speech and association.” The use of facial recognition technologies, a touchstone of punitive policing across the globe that has proven to be deeply biased, has faced recent backlash as a tool to harm racialised and marginalised groups.
In her book, The Age of Surveillance Capitalism, Shoshana Zuboff charts the course of the expansion of surveillance over US citizens and non-citizens alike in reaction to 9/11 and in the name of security. She argues that this atmosphere, by eroding concern for privacy, allowed tech companies to emerge as powerhouses while state actors, often working together with tech companies, deployed unparalleled intrusive measures in the name of security.
Today, governments around the world are operating with expanded emergency powers to fight coronavirus, and many scholars and journalists have expressed concern that these temporary measures could become permanent.
Already, instances of officially sanctioned targeting of marginalised populations are spreading. LGBTQ people face disproportionate danger when information-gathering tactics expand, a reality of non-pandemic circumstances that has only worsened over the last few months. Women and children face especially high risk when health data is politicised, and, according to emerging studies, COVID-related discrimination disproportionately targets non-white people.
Too often, the assessment of risks and benefits of surveillance occurs behind closed doors and without the input of the communities that have the most to lose. Or it occurs too late. A handful of European countries have recently reined in surveillance measures initially deployed in their rush to react to coronavirus.
Norway halted the use of its COVID-19 contact tracing app, “Smittestopp,” which was used to track citizens’ locations by frequently uploading GPS coordinates to a central server. Its decision came just hours before Amnesty International released the results of its investigation into some of the most invasive contact tracing apps—among them “Smittestopp”—which have been found to imperil the privacy, and security, of hundreds of thousands of people.
Courts in France, also amid sweeping privacy concerns, temporarily banned the government’s COVID-19 surveillance drones. In the UK, the government has admitted to breaking the law with regard to privacy in its reckless deployment of a test-and-trace programme.
In other instances, an informed assessment of risks and benefits may not take place at all. “Governments are legitimising tools of oppression as tools of public health,” explains Raman Jit Singh Chima, Asia Policy Director and Senior International Counsel at Access Now, a digital rights NGO. Russia’s latest app is reported to track migrant workers’ biometric data—and that’s just the starting point. A member of Russia’s parliament hinted at how the app might be used, explaining that “it will solve all the issues with migrants in Russia. We should have done it [a] long time ago, but because of political correctness and tolerance missed out on solutions to many problems.”
For many communities, privacy is security. Losing sight of this under any circumstances would be dangerous. Now, at the height of state and corporate power, it could put those who are already vulnerable and face discrimination at even greater risk.