Advancing Digital Rights through Competition Law

By Jonathan McCully, 9th December 2019

How can we leverage competition law to protect and promote our digital rights? This question was raised during DFF’s strategy meeting earlier this year, where we asked digital rights organisations from across Europe what their priorities were and what DFF could do to best support them. A number of participants talked about their interest in engaging with competition law claims, alongside data protection and human rights ones, to hold platforms to account for abusive and infringing conduct.

The topic had come up at previous DFF events, which is not surprising given that European institutions in recent years have demonstrated a willingness to enforce aspects of competition law against Big Tech companies. In June 2017, for example, the European Commission fined Google €2.42 billion for abusing its dominance in the search engine market. Since then, Google has been fined two further times by the Commission over anti-competitive practices. Discussing this at the strategy meeting, participants concluded that there may be opportunities to enforce this area of law in a way that also advances digital rights. Before pursuing such opportunities, however, they noted it would be useful to build more knowledge on how competition law might be applicable to the platform or big data economy, and how such issues can be litigated.

To help facilitate deeper thinking and discussion on the topic, DFF has published “A Short Guide to Competition Law for Digital Rights Litigators.” The guide, written by Aaron Khan of Brick Court Chambers, provides an overview of the key principles of EU competition law accompanied by specific examples of how these principles might apply to the digital or technology sector. The guide also sets out the steps digital rights litigators can take if they identify potential infringements of competition law in the digital space, including the relative advantages and drawbacks of lodging complaints with the European Commission and/or pursuing litigation. We hope that this guide will be a useful resource for digital rights litigators who wish to engage with competition law issues as part of their ongoing and future work.

This week, we are excited to be testing this guide at a two-day training workshop in Brussels. The workshop, supported by the NetGain partnership, will bring together over twenty digital and consumer rights organisations from the US, Europe and Latin America. Over the two days, participants will consider the EU competition law framework and discuss opportunities to enforce it as a means of protecting and promoting digital rights. Lawyers who have already identified and challenged competition law infringements by Big Tech companies, such as Google and Facebook, will share their experiences. Organisations working on competition law issues in Europe and the Americas will discuss the similarities and differences in approach across their jurisdictions, as well as where there might be opportunities for transatlantic co-operation on cases.

Next year, we would like to build on this work with a follow-up meeting in Spring 2020, where participants will plan in more depth how they can practically engage with competition law issues in their work, including through litigation. We look forward to seeing how we can support this work in the new year. In the meantime, keep an eye on our blog for updates and guest posts following the workshop to find out what was discussed at the event!

A project to demystify litigation and artificial intelligence

By Jonathan McCully, 6th December 2019

We are living in an age of artificial intelligence. We may not see anthropomorphic robots roaming our streets, but smart machines are increasingly making choices that can have a significant impact on our lives and our rights. Autonomous systems have been built to decide whether we should be hired for a job, whether we are entitled to social welfare benefits, whether our online speech should be censored, and whether we should be subject to police intervention. These systems are becoming more ubiquitous, touching upon many aspects of society.

As is the case with the introduction of any new technology, the law is trying to catch up with these emerging developments in machine capability. Since the law is an indispensable tool for ensuring that our rights are protected and vindicated, it is vitally important that it is not left behind – unenforced and ineffective. It will be before the courts that old and new laws will be applied, disputed and litigated for the purpose of safeguarding and guaranteeing our rights in the age of artificial intelligence.

I am working with Aurum Linh, a technologist and product developer, on an exciting new project that seeks to break down knowledge barriers between litigators and technologists so they can work more effectively together on impactful AI-related litigation. We would love for you to get involved too!

What is the project?

Aurum and I are part of an inspiring cohort of technologists, activists, lawyers, and scientists who are working on projects to promote trustworthy AI as part of a Mozilla Fellowship. Our particular project is aimed at producing a set of guides that can help build stronger litigation on AI and human rights.

The first guide will be aimed at individuals who have a technology background, such as technologists, engineers, developers, and computer scientists, and will seek to demystify litigation and how it can be used to protect our rights against harmful AI systems. It will also explain the important role that they, and their expertise, can play in strengthening litigation efforts.

The second guide will be aimed at lawyers and will seek to demystify the technology that may crop up in their cases. We hope that this guide will assist lawyers in effectively identifying and pursuing legal claims challenging human rights violations caused by AI. The guide will also provide further insights on how they can collaborate with technologists in their litigation.

Both guides will be developed through regular consultation with the intended audiences to ensure the resources meet their needs. So, please do read on to find out how you can get involved.

Why do I think the project is important?

As human rights cases increasingly involve an AI element, this project seeks to provide information and guidance to lawyers and technologists so that they can learn more about each other’s disciplines and expertise. We hope this will strengthen AI-related litigation efforts by fostering greater collaboration and knowledge-sharing between these stakeholders. By bringing stronger cases, they can help set precedents that ensure greater transparency, accountability and adherence to human rights standards in the design and use of AI.

I am approaching this project as someone with a legal/litigation background. Aurum, who is approaching our project from a technologist’s perspective, has written a fantastic blog on why they believe these guides are important.

I am passionate about using the courts and the law as a mechanism to improve the world in which we live. For centuries, litigation has been a valuable tool for securing changes in law, practice and public awareness on a variety of issues. There are many examples of ground-breaking court decisions in areas ranging from climate change, arbitrary detention, and the death row phenomenon, to gay marriage, abortion, and the right to food. With AI becoming ever more pervasive in our lives, I believe we will increasingly see AI-related rights issues being brought before our courts.

In fact, we can already see such cases before our courts. Last month, for instance, a court in Amsterdam overturned a disproportionate debt claim taken against an individual for €0.05. The Dutch court inferred that the claim had been processed by an automated system, and it warned the company responsible that it should set up its system in such a way that some human control takes place before a debt summons is issued. Other “Robodebt” systems are currently being challenged before the courts in other jurisdictions as well. In the UK last month, an appeal was granted in a judicial challenge to the use of facial recognition technology by a police force in Wales, while in the US a number of recent cases challenging the use of automated systems by public bodies can be found in AI Now’s reports on “Litigating Algorithms” from 2018 and 2019.

Even cases that, on their face, do not strictly deal with a technological issue will need to be litigated and argued within the digital reality in which we live. The deployment of new technologies can mean that harmful societal issues are replicated, embedded or even exacerbated, and the arguments we make before the courts need to be informed by these very real threats. To use the recent example of a case before the US Supreme Court on the justiciability of partisan gerrymandering, Justice Kagan, in her dissenting opinion, warned about the risks to democracy posed by AI-driven gerrymandering. She noted that “big data and modern technology… make today’s gerrymandering altogether different from the crude line drawing of the past.”

How can you get involved?

We want to make sure the guides are as useful and beneficial as possible for the communities that they seek to serve. This is where you come in. We want to hear from lawyers, technologists, software engineers, data scientists, computer scientists and digital rights activists about what they would like to see included in these guides. We would also be delighted to hear from individuals who have experience working on AI-related litigation, and who have lessons or ideas to share with us.

You can get involved by completing a short survey or, if you prefer, you can reach out to me directly by email if you would like to have a chat about the project. We look forward to hearing from you!

A case for knowledge-sharing between technologists and digital rights litigators

By Aurum Linh, 6th December 2019

“Almost no technology has gone so entirely unregulated, for so long, as digital technology.”

Microsoft President, Brad Smith

Big technology companies have amassed power of historic proportions: they are able to surveil, prioritize, and interfere with the transmission of information to over two billion users across multiple nations. This architecture of surveillance has no basis for comparison in human history.

Domestic regulation has struggled to keep pace with the unprecedented, rapid growth of digital platforms, which operate across borders and have established power on a global scale. Regulatory efforts by data protection, competition and tax authorities worldwide have largely failed to disrupt the underlying drivers of the surveillance-based business model.

It is, therefore, vital that litigators and technologists work together to strategize on how the law can be most effectively harnessed to dismantle these drivers and hold those applying harmful tech to account. As a Mozilla Fellow embedded with the Digital Freedom Fund, I am working on a project that I hope can help break down knowledge barriers between litigators and technologists. Read on to find out how you can get involved too.

Regulating Big Tech

The surveillance-based business models of digital platforms have embedded knowledge asymmetries into the structure of how their products operate. These gaps exist between technology companies and their users, as well as the governments that are supposed to be regulating them. Shoshana Zuboff illustrates this in The Age of Surveillance Capitalism, where she observes that “private surveillance capital has institutionalized asymmetries of knowledge unlike anything ever seen in human history. They know everything about us; we know almost nothing about them.”

Zuboff makes the case that the presence of state surveillance and its capitalist counterpart means that digital technology is dividing citizens in all societies into two groups: the watchers (invisible, unknown and unaccountable) and the watched. These technologies are opaque by design and foster user ignorance. This has debilitating consequences for democracy, as asymmetries of knowledge translate into asymmetries of power. Whereas most democratic societies have at least some degree of oversight of state surveillance, we currently have almost no regulatory oversight of its privatised counterpart.

In this essentially law-free territory, Google developed its surveillance-based, rights-violating business model. It set out to digitise and store every book ever printed, regardless of copyright, and it photographed streets and houses across the planet without asking anyone’s permission. Amnesty International’s recent report, Surveillance Giants, highlights Google and Facebook’s track record of misleading consumers about their privacy, data collection, and advertising targeting practices. In 2010, for example, it emerged that Google’s Street View cars had been capturing private email messages and passwords from unsecured wireless networks. Facebook has acknowledged performing behavioural experiments on groups of people, lifting (or depressing) users’ moods by showing them different posts in their feeds. Facebook has also acknowledged that it knew about the data abuses of political micro-targeting firm Cambridge Analytica months before the scandal broke. More recently, in early 2019, it was revealed that Google’s Nest “smart home” devices contained a microphone the company had never disclosed to the public.

We can see these asymmetries mirrored in the public sector too. ProPublica’s examination of the COMPAS algorithm is a clear example of biased algorithms being used by the state to make life-changing decisions about people. The algorithm is increasingly used across the United States in pre-trial and sentencing decisions, the so-called “front-end” of the criminal justice system, and ProPublica found that Black defendants who did not go on to reoffend were nearly twice as likely as white defendants to be wrongly flagged as high risk.

A high-profile example is the case of Eric Loomis, whom a court sentenced to the maximum penalty on two counts after reviewing predictions derived from the COMPAS risk-assessment algorithm, despite his claim that the use of a proprietary predictive risk assessment in sentencing violated his due process rights. The Wisconsin Supreme Court dismissed the due process claims, effectively affirming the use of predictive assessments in sentencing decisions. Justice Shirley S. Abrahamson noted, “this court’s lack of understanding of COMPAS was a significant problem in the instant case. At oral argument, the court repeatedly questioned both the State’s and the defendant’s counsel about how COMPAS works. Few answers were available.”

In How to Argue with an Algorithm: Lessons from the COMPAS ProPublica Debate, Anne L. Washington notes, “[b]y ignoring the computational procedures that processed the input data, the court dismissed an essential aspect of how algorithms function and overlooked the possibility that accurate data could produce an inaccurate prediction. While concerns about data quality are necessary, they are not sufficient to challenge, defend, nor improve the results of predictive algorithms. How algorithms calculate data is equally worthy of scrutiny as the quality of the data themselves. The arguments in Loomis revealed a need for the legal scholars to be better connected to the cutting-edge reasoning used by data science practitioners.”
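To make that point concrete for readers coming from the legal side, here is a minimal, purely illustrative Python sketch. The records, group labels and numbers are invented rather than drawn from the real COMPAS data; the sketch simply shows the kind of calculation at the heart of the ProPublica analysis: comparing false positive rates, meaning how often people who did not reoffend were nonetheless flagged as high risk, across two groups.

    # Purely illustrative: synthetic records, not the real COMPAS data.
    # Each record is (group, reoffended, predicted_high_risk).
    records = [
        ("group_a", False, True), ("group_a", False, False), ("group_a", True, True),
        ("group_a", False, True), ("group_b", False, False), ("group_b", True, True),
        ("group_b", False, False), ("group_b", False, True), ("group_b", True, False),
    ]

    def false_positive_rate(rows):
        """Share of people who did not reoffend but were still flagged as high risk."""
        non_reoffenders = [r for r in rows if not r[1]]
        flagged = [r for r in non_reoffenders if r[2]]
        return len(flagged) / len(non_reoffenders) if non_reoffenders else 0.0

    for group in ("group_a", "group_b"):
        rows = [r for r in records if r[0] == group]
        print(group, round(false_positive_rate(rows), 2))

Even in this toy example, the same model imposes a false positive rate of 0.67 on one group and 0.33 on the other, despite processing “accurate” input data throughout. That disparity, and whether it is acceptable, is exactly the kind of question that lawyers and technologists need a shared vocabulary to interrogate in court.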

In order to meaningfully change the context that allows surveillance-based, human rights violating business models to thrive in the tech sector, lawmakers need to deeply understand which legal requirements would change its fundamental structure for the better. This is rendered near impossible by a tech ecosystem designed with multiple layers of informational opacity. One key asymmetry lies between lawmakers and litigators, on the one hand, and the people building the technologies they are attempting to regulate, on the other. The technology industry has become so specialized in its practice, yet so broad in its application, that the resulting knowledge gaps leave regulation surface-level and ineffective when measured by its impact on the underlying system that allowed the violations in the first place. When governments, courts, or regulators do get involved in disciplining these companies, the consequences do not actually hurt them: they do not affect the circumstances that caused the violation, nor do they fundamentally change the companies’ structure of operations or influence. An example of this can be found in Amnesty’s recent report:

“In June 2019, the US Federal Trade Commission (FTC) levied a record $5bn penalty against Facebook and imposed a range of new privacy requirements on the company, following an investigation in the wake of the Cambridge Analytica scandal. Although the fine is the largest recorded privacy enforcement action in history, it is still relatively insignificant in comparison to the company’s annual turnover and profits – illustrated by the fact that after the fine was announced, Facebook’s share price went up. More importantly, the settlement did not challenge the underlying model of ubiquitous surveillance and behavioural profiling and targeting. As FTC Commissioner Rohit Chopra stated in a dissenting opinion ‘The settlement imposes no meaningful changes to the company’s structure or financial incentives, which led to these violations. Nor does it include any restrictions on the company’s mass surveillance or advertising tactics.’”

Lawmakers struggle to grasp how this architecture of surveillance works and which problems need to be addressed. Their inaction reflects not a lack of will so much as a lack of knowledge-sharing between bodies of expertise. A system that spans entire continents and touches at least a third of the world’s population has gone relatively unregulated for twenty years. There is an urgent need for technologists who can break down the knowledge barriers that keep meaningful legal action from being taken.

A Project to Facilitate Knowledge-Sharing & Knowledge-Building

In partnership with Mozilla and the Digital Freedom Fund, we are building a network of technologists, data scientists, lawyers, litigators, and digital rights activists. Jonathan is a lawyer based in London who is collaborating with me on this project. You can read his perspective on this project here. With the help of this network, we would like to create two guides that can build the knowledge and expertise of litigators and technologists when it comes to each other’s disciplines, so they can collaborate and coordinate effectively on cases that seek to protect and promote our human rights while holding the “watchers” to account.

If you are a digital rights activist, technologist, or lawyer, you can contribute to this project by taking this survey and getting in touch with us. Otherwise, you can help by sharing these blog posts with your networks. We look forward to hearing from you!