A case for knowledge-sharing between technologists and digital rights litigators
By Aurum Linh, 6th December 2019
“Almost no technology has gone so entirely unregulated, for so long, as digital technology.”
— Microsoft President, Brad Smith
Big technology companies have amassed power of historic proportions. They are in an unprecedented position: able to surveil, prioritize, and interfere with the transmission of information to over two billion users across multiple nations. This architecture of surveillance has no basis for comparison in human history.
Domestic regulation has struggled to keep pace with the unprecedented, rapid growth of digital platforms, which operate across borders and have established power on a global scale. Regulatory efforts by data protection, competition, and tax authorities worldwide have largely failed to disrupt the underlying drivers of the surveillance-based business model.
It is, therefore, vital that litigators and technologists work together to strategize on how the law can be most effectively harnessed to dismantle these drivers and hold those applying harmful tech to account. As a Mozilla Fellow embedded with the Digital Freedom Fund, I am working on a project that I hope can help break down knowledge barriers between litigators and technologists. Read on to find out how you can get involved too.
Regulating Big Tech
The surveillance-based business models of digital platforms have embedded knowledge asymmetries into the structure of how their products operate. These gaps exist between technology companies and their users, as well as the governments that are supposed to be regulating them. Shoshana Zuboff illustrates this in The Age of Surveillance Capitalism, where she observes that “private surveillance capital has institutionalized asymmetries of knowledge unlike anything ever seen in human history. They know everything about us; we know almost nothing about them.”
Zuboff makes the case that the combined presence of state surveillance and its capitalist counterpart means digital technology is dividing the citizens of every society into two groups: the watchers (invisible, unknown and unaccountable) and the watched. Their technologies are opaque by design and foster user ignorance. This has debilitating consequences for democracy, because asymmetries of knowledge translate into asymmetries of power. Whereas most democratic societies have at least some degree of oversight of state surveillance, we currently have almost no regulatory oversight of its privatised counterpart.
Google developed its surveillance-based, rights-violating business model in what was essentially law-free territory. It set out to digitise and store every book ever printed, regardless of copyright, and photographed streets and houses across the planet without asking anyone’s permission. Amnesty International’s recent report, Surveillance Giants, highlights Google and Facebook’s track record of misleading consumers about their privacy, data collection, and advertising targeting practices. During the development of Google Street View in 2010, for example, Google’s photography cars secretly captured private email messages and passwords from unsecured wireless networks. Facebook has acknowledged performing behavioural experiments on groups of people, lifting or depressing users’ moods by showing them different posts in their feeds. Facebook has also acknowledged that it knew about the data abuses of the political micro-targeting firm Cambridge Analytica months before the scandal broke. More recently, in early 2019, journalists discovered that Google’s Nest ‘smart home’ devices contained a microphone the company had failed to disclose to the public.
We can see these asymmetries mirrored in the public sector too. ProPublica’s examination of the COMPAS algorithm is a clear example of a biased algorithm being used by the state to make life-changing decisions about people. The algorithm is increasingly being used across the United States in pre-trial and sentencing decisions, the so-called “front-end” of the criminal justice system, and has been found to be significantly biased against Black people.
A high-profile example is the case of Eric Loomis, whom a Wisconsin court sentenced to the maximum penalty on two counts after reviewing predictions derived from the COMPAS risk-assessment algorithm, despite his claim that using a proprietary predictive risk assessment in sentencing violated his due process rights. The Wisconsin Supreme Court dismissed the due process claims, effectively affirming the use of predictive assessments in sentencing decisions. Justice Shirley S. Abrahamson noted, “this court’s lack of understanding of COMPAS was a significant problem in the instant case. At oral argument, the court repeatedly questioned both the State’s and the defendant’s counsel about how COMPAS works. Few answers were available.”
In How to Argue with an Algorithm: Lessons from the COMPAS ProPublica Debate, Anne L. Washington notes, “[b]y ignoring the computational procedures that processed the input data, the court dismissed an essential aspect of how algorithms function and overlooked the possibility that accurate data could produce an inaccurate prediction. While concerns about data quality are necessary, they are not sufficient to challenge, defend, nor improve the results of predictive algorithms. How algorithms calculate data is equally worthy of scrutiny as the quality of the data themselves. The arguments in Loomis revealed a need for the legal scholars to be better connected to the cutting-edge reasoning used by data science practitioners.”
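To make that point concrete, here is a minimal, hypothetical sketch in Python of the arithmetic at the heart of the COMPAS debate. The figures are invented purely for illustration (they are not actual COMPAS or ProPublica data): a risk score can look equally “accurate” for two groups by one measure, such as how often a “high risk” label turns out to be correct, and still wrongly flag people in one group far more often than in the other.

```python
# Hypothetical illustration -- NOT real COMPAS figures. It shows that a score
# with identical precision and recall for two groups can still produce very
# different false-positive rates when the groups' base rates differ, so how
# the algorithm computes matters, not just the quality of the input data.

def false_positive_rate(population, base_rate, recall, precision):
    """Derive a group's false-positive rate from its size, its base rate of
    re-arrest, and the classifier's per-group recall and precision."""
    actual_positive = population * base_rate        # people who do reoffend
    actual_negative = population - actual_positive  # people who do not
    true_positive = recall * actual_positive        # correctly flagged
    flagged = true_positive / precision             # everyone labelled "high risk"
    false_positive = flagged - true_positive        # wrongly flagged
    return false_positive / actual_negative

# Two hypothetical groups, same recall and precision, different base rates.
fpr_a = false_positive_rate(population=1000, base_rate=0.5, recall=0.6, precision=0.6)
fpr_b = false_positive_rate(population=1000, base_rate=0.2, recall=0.6, precision=0.6)

print(f"Group A false-positive rate: {fpr_a:.0%}")  # 40%
print(f"Group B false-positive rate: {fpr_b:.0%}")  # 10%
```

Under these invented numbers, both groups see the same precision and recall, yet people in group A who would never reoffend are labelled “high risk” four times as often as their counterparts in group B. This trade-off between different definitions of fairness is exactly the kind of computational reasoning that courts, as the Loomis arguments showed, need help to interrogate.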
In order to meaningfully change the context that allows surveillance-based, human rights violating business models to thrive in the tech sector, lawmakers need to understand in depth which legal requirements would change its fundamental structure for the better. This is rendered near impossible because the tech ecosystem is designed with multiple layers of informational opacity. One key asymmetry sits between lawmakers and litigators on one side and the people building the technologies they are attempting to regulate on the other. The technology industry has become so specialized in its practice, yet so broad in its application, that the resulting knowledge gaps leave regulation surface-level and ineffective when measured by its impact on the underlying system that allowed the ethical violations in the first place. When governments, courts, or regulators do discipline these companies, the consequences do not actually hurt them: they do not address the circumstances that caused the violation, nor do they fundamentally change the companies’ structure of operations or influence. An example of this can be found in Amnesty’s recent report:
“In June 2019, the US Federal Trade Commission (FTC) levied a record $5bn penalty against Facebook and imposed a range of new privacy requirements on the company, following an investigation in the wake of the Cambridge Analytica scandal. Although the fine is the largest recorded privacy enforcement action in history, it is still relatively insignificant in comparison to the company’s annual turnover and profits – illustrated by the fact that after the fine was announced, Facebook’s share price went up. More importantly, the settlement did not challenge the underlying model of ubiquitous surveillance and behavioural profiling and targeting. As FTC Commissioner Rohit Chopra stated in a dissenting opinion ‘The settlement imposes no meaningful changes to the company’s structure or financial incentives, which led to these violations. Nor does it include any restrictions on the company’s mass surveillance or advertising tactics.’”
This architecture of surveillance has no basis for comparison in human history. Lawmakers struggle to grasp how the technology works and which problems need to be addressed. Inaction reflects not a lack of will so much as a lack of knowledge-sharing between bodies of expertise. The system spans entire continents and touches at least a third of the world’s population, yet has gone relatively unregulated for twenty years. There is an urgent need for technologists who can break down the barriers of knowledge that keep meaningful legal action from being taken.
A Project to Facilitate Knowledge-Sharing & Knowledge-Building
In partnership with Mozilla and the Digital Freedom Fund, we are building a network of technologists, data scientists, lawyers, litigators, and digital rights activists. Jonathan, a lawyer based in London, is collaborating with me on this project; you can read his perspective here. With the help of this network, we would like to create two guides that build litigators’ and technologists’ knowledge of each other’s disciplines, so they can collaborate and coordinate effectively on cases that seek to protect and promote our human rights while holding the “watchers” to account.
If you are a digital rights activist, technologist, or lawyer, you can contribute to this project by taking this survey and getting in touch with us. Otherwise, you can help by sharing these blog posts with your networks. We look forward to hearing from you!