Freedom of Information Law in the Age of Opaque Machines

By Divij Joshi, 7th May 2021

In the last few decades, landmark freedom of information or right to information (FOI / RTI) and public records laws around the world have radically transformed the role of the citizen.

The citizen has gone from a passive receiver of “official” statements to an active agent able to interrogate the claims and decisions made by her government, by asking for files, records, information or inspection of the government archives. 

However, developments in the era of “digital government” may be unravelling the progress made by FOI laws and movements.

Evidence from across the world indicates that the increasing use of automated decision-making systems to determine important matters of public policy, ranging from policing to urban planning, is encroaching upon FOI law’s mandate of transparency.

At DFF’s annual strategy meeting in March 2021, a group of concerned activists and academics, including myself, gathered to understand how strategic litigation and advocacy might advance the cause of freedom of information laws and the public values of transparency and accountability of automated decision-making.

So what should civil society be aware of regarding the impact of automated decisions on FOI laws, and what can it do to ensure the integrity of this tool of trust and transparency?

Part of the reason for this inadequacy in FOI laws lies in the nature of the underlying technologies used in automated decision-making. Consider, for example, complex machine learning (ML) systems, which are widely used by government agencies today in functions ranging from policing (e.g. facial recognition systems) to the adjudication of welfare claims. 

In many cases, the material documentation (i.e. what is recorded) of ML systems may not adequately capture the functioning of an automated decision-making system. Moreover, many of these systems possess an inherent opacity – their precise logics are unknowable even to the people who have designed them. The opacity of contemporary algorithmic systems used by governments has been well documented, and poses a substantial barrier to FOI processes.

Another reason is the manner in which these systems are being integrated into government agencies. Often, many of these systems are procured from private contractors, who zealously protect various aspects crucial to understanding these ‘black box’ systems – including the data used, the logic inherent in the algorithms they rely upon, or other routine calculations that go into the making of automated-decision systems.

Most FOI laws around the world have exemptions for the protection of trade secrets or intellectual property, and governments and companies routinely rely upon these exemptions to deny information and records requests. In the UK, for example, recent statistics have shown that four of the most important government departments fully granted only one in five FOI requests. Overall, of the total number of “resolvable” requests received by all departments during the survey period, almost as many were rejected on the basis of an exemption or exception as were fully granted.

Moreover, the nature of digital governance infrastructures is often disaggregated – with different actors and agencies responsible for different elements of a technological system – which can pose an obstacle to information requests. Often, FOI requests are rejected because relevant documentation about an automated decision-making system was simply never produced by the system designers, or custody of the documents was not handed over to the agency responsible for using the technology.

Finally, FOI requests for information important to the audit and research of automated decision-making systems are, under various circumstances, denied on grounds relating to defence or national security, or on grounds of privacy (which may be implicated when interrogating the data on which algorithms have been trained or on which they operate). 

As automated decision-making encroaches upon the field of public policy and administrative decision-making, it is imperative to ensure that processes of public transparency like FOI are not debilitated. This requires policymakers, technologists and FOI activists to design mechanisms for ensuring that technologies integrated into public administration are able to generate and share appropriate public documentation which balances countervailing interests of confidentiality, security or privacy.

Public agencies should be compelled to substantiate their grounds for refusing FOI requests through the administrative and legal appellate mechanisms available. Careful and strategic litigation efforts, such as the one undertaken by the ePaństwo Foundation in Poland, could in particular result in interpretations of FOI law that cover automated decisions and algorithmic systems, and set important precedents for legal systems to follow. 

Finally, there is a need for deeper collaboration between the communities of activists working on FOI and those working towards digital rights and responsible technology, particularly in order to tactically utilise existing FOI laws and documentation mechanisms in specific contexts in ways that can generate public transparency.

Divij Joshi is an independent lawyer and researcher exploring intersections of law, technology and society.

How Copyright Bots are Governing Free Speech Online

By Julia Reda, 3rd May 2021

Police officers playing pop music in an effort to interrupt live streams of police operations. Autocratic regimes using social media’s upload filters to block critical reporting. “Reputation management” companies making fake copyright claims to erase unflattering news reports about their clients from search results. Unidentified fraudsters blocking reporting on a political party’s event by posing as a TV station.

These are all real examples of state or private censorship based on automated copyright enforcement online.

Digital rights organisations are already alarmed about the increasingly common voluntary use of upload filters by social media companies to help copyright holders remove alleged infringements from their services more efficiently.

Due to the inherent inability of such filters to detect legal uses such as quotations or fair use, as well as the lack of authoritative information about who is a legitimate rightholder for which copyrighted work, the automated removal of legal expression by online platforms is a common occurrence.

Digital rights NGO Electronic Frontier Foundation is cataloguing such (usually fully automated) wrongful removals in its Takedown Hall of Shame.

Now, a recent EU reform, the Directive on Copyright in the Digital Single Market, risks making the deployment of these technologies mandatory for a broad range of commercial platforms. This poses new challenges for fundamental rights activists and legal scholars, as traditional freedom of expression frameworks, which assume a direct state intervention causing the suppression of speech, are often too simplistic to deal with these multi-sided scenarios.

That’s why the Digital Freedom Fund devoted a workshop at its 2021 strategy meeting to the question: How can we use free speech frameworks to challenge censorship through copyright enforcement?

I had the pleasure of facilitating this workshop and sharing lessons learned from my own strategic litigation project “control ©”, hosted by German NGO Gesellschaft für Freiheitsrechte (GFF).

Conceptualising freedom of expression

Participants in the workshop examined the specific cases mentioned above to identify common threads and discuss potential avenues for litigation in order to protect the freedom of expression of Internet users.

What all these cases have in common is that they take place within what Jack Balkin has termed the “free speech triangle”, that is, in multi-sided platform environments where online platforms act as a third player in freedom of expression issues next to the state and the individual. But in the copyright context, even this three-sided model is too simplistic, given that (real or purported) copyright holders, or even individuals playing copyright-protected music in a live-stream, now use the automated filters established for copyright protection to restrict internet users’ freedom of expression.

While the large number of actors involved in copyright censorship cases increases their complexity, it also opens up new potential avenues for litigation. Aside from constitutional complaints against laws that mandate the use of upload filters, litigation against private actors engaging in or facilitating private censorship through copyright enforcement tools is also conceivable.

Copyright law itself can be a basis for challenging wrongful removals of legal expression by upload filters. If the removal is based on a blocking request from an unauthorised third party, the real author of the work in question could rely on their author’s right to attribution to take legal action against the impostor. However, this approach is only feasible if the impostor can be identified.

Copyright exceptions can also be invoked against the platform if the correct rightholder has requested the automated blocking of their work, but the copyright enforcement mechanism has failed to take into account legal uses such as quotation or parody. Unfortunately, even if the victim of unjustified blocking can show that their use is legal under copyright law, it is unclear whether a platform is required to protect it from being arbitrarily blocked by its upload filters.

For example, a platform may rely on its private terms and conditions, rather than the law, as a justification for removing content at the request of rightholders, and may ignore the existence of copyright exceptions in the process.

EU law also lacks a harmonised notice and action regime that could regulate platforms’ obligations to correct their blocking decisions, although the European Commission’s proposal for a Digital Services Act may soon change that.

Ironically, the otherwise rather problematic EU Directive on Copyright in the Digital Single Market (DSM Directive) will bring some much-needed clarity for users on this point, as it turns the copyright exceptions into enforceable users’ rights that platforms and rightholders must respect in their automated copyright enforcement systems.

Unfortunately, the directive fails to explain how this should work in practice, as upload filters are incapable of distinguishing between infringements and legal uses under exceptions. Simply stating that legal content must not be blocked and leaving the responsibility of actually designing those central fundamental rights safeguards to the EU Member States does not meet the requirements of the Charter of Fundamental Rights, our GFF study on the fundamental rights compliance of the DSM Directive concludes.

The role of the state

For litigation purposes, it is important to identify whether state intervention has played a role in the blocking of content and to analyse whether such intervention – directly or indirectly – led to the suppression of speech, the traditional scenario of restrictions on the fundamental right to freedom of expression.

As long as laws such as the DSM Directive are not yet applied and the use of upload filters by online platforms remains voluntary rather than a legal obligation, state intervention in these cases of suppression of speech is not self-evident.

Nevertheless, some of the specific cases examined in the workshop still involve state action. In the case of exiled Turkish journalists whose YouTube channel was shut down as a consequence of multiple copyright strikes, the false copyright claims originated from the Turkish state TV company TRT.

The journalists suspect that the Erdoğan administration has been using state-owned TRT’s status as a large rightsholder, which gives it access to YouTube’s ContentID filtering system, to silence critical reporting on the government. In the multiple examples of police officers trying to interrupt live streams by playing pop music on their phones, participants found that the actions of those individual officers still constitute state action, even if the copyright enforcement system they try to exploit is used by social media platforms on a voluntary basis.

Even when no state action is involved, social media platforms could still have indirect fundamental rights obligations toward their users.

Polish NGO Panoptykon is already engaged in strategic litigation against Facebook over the arbitrary blocking of an NGO’s Facebook page. Recognising the increasing importance of social media for the exercise of freedom of expression online, the European Commission’s proposal for a Digital Services Act includes an obligation on online platforms to pay due regard to the fundamental rights of users when enforcing their private terms and conditions (Art. 12(2)).

This new provision could strengthen strategic litigation against the blocking of lawful expression by automated copyright filters employed by platforms on a voluntary basis. In order for this provision to be effective, it must be coupled with strong transparency obligations and provisions on collective redress that allow users’ rights organisations to take legal action against the structural overblocking of legal content by overzealous copyright filters.

Why awareness is key

A central outcome of the workshop was the realisation that all participants were aware of examples of copyright censorship in their own countries, but that those cases rarely gained international prominence.

In order to raise the awareness of policy-makers and the leadership of platform companies about the danger of copyright filters for freedom of expression, it is crucial to build stronger narratives around the frequent mistakes and abuses of upload filters.

Building on the experience with other excellent mapping efforts in the digital rights space, such as Austrian privacy NGO noyb’s knowledge wiki GDPRHub and EFF’s Takedown Hall of Shame, a complementary mapping of copyright censorship cases in the EU was envisioned.

Such a database could form the basis for an awareness-raising campaign and also make it easier for the victims of copyright censorship to connect with fundamental rights NGOs that may help them seek justice.

Julia Reda is a copyright expert and project lead at Gesellschaft für Freiheitsrechte (GFF).

Photo by Andres Umana on Unsplash

Empowering Workers Through Digital Rights

By Jill Toh, 30th April 2021

The gig economy and “platformisation” of labour have become intertwined with core digital rights issues. How can the field best support workers on the ground, who are struggling against increased surveillance, data collection and algorithmic management?

At the Digital Freedom Fund’s online annual strategy meeting in February, DFF expanded its usual scope of attendees to also include communities outside of the digital rights field.

Activists working on civil and economic rights, as well as gig workers, were included in the various discussions highlighting the way in which technology now impacts the advocacy of a much broader range of organisations.

In a session on the gig economy and platform-worker relations which I had the opportunity and privilege to facilitate, our discussions focused on (a) algorithmic management and the increasing use of surveillance technologies on workers in the gig economy, (b) the need to engage with communities directly impacted by those technologies and to cultivate reflexivity within civil society, researchers and academics, and (c) building worker power through control over their own data.

The session took place just days before the Supreme Court ruling in the United Kingdom in which the court upheld an earlier employment tribunal finding that, under UK law, Uber drivers are “limb (b)” workers rather than self-employed contractors.

In practice, this means that drivers will now have basic protections such as the minimum wage, holiday pay, a pension, protection against discrimination, union rights, whistleblower safeguards and so forth, though there is still no protection against unfair dismissal. Additionally, enforcement of and compliance with the ruling have yet to be determined. 

In parallel, cases on gig workers exercising their General Data Protection Regulation rights against Uber and Ola were pending before the Amsterdam District Court.

The questions raised in these cases include access to data and transparency in algorithmic management related to performance and “robo-firing”. Drivers want to gain insight into this data in order to calculate their minimum wage, prove an employment relationship, determine whether there is discrimination, support collective bargaining and advocacy and, lastly, create a data trust that would ultimately give them more power over their working conditions.

These cases highlight that, as workplaces become increasingly augmented by surveillance technologies, it is pertinent to ensure that people are protected from the harms these technologies cause. Some of the issues raised in the gig economy can also be a lesson in how we can begin to address some of the complexities of workplace surveillance in general.

This is all the more important as, in the recent draft EU Regulation on Artificial Intelligence, workplace protections are at risk of being watered down. Indeed, across many tech regulations, protection for workers tends to be downplayed or ignored.

In the ride-hailing and delivery sectors, there has been a particular increase in the use of surveillance technologies, including facial recognition technology for real-time ID checks.

The deployment of these technologies is often driven by regulators attempting to tackle road safety and security.

For example, in the case of Uber in the UK, the company’s commercial licence was reinstated by the regulator, Transport for London (TfL), in 2020, after the company promised to roll out facial recognition technology – enabled by Microsoft’s face-matching software – to ensure safety checks of drivers.

The licence had been cancelled because of issues related to fraud, insurance and safety. However, many drivers have experienced the roll-out of these technologies as disproportionate, racist and discriminatory. When drivers – often people of colour – fail their facial recognition checks, they are reported to TfL and may lose their licence to drive. The App Drivers and Couriers Union (ADCU) has identified seven cases in which failed checks led to drivers losing their licences and their jobs.

In general, the algorithmic management and surveillance technologies deployed by gig companies on their workforce lack transparency and offer no form of redress, even though their dispatch systems decide who gets to work, how jobs are allocated, how wages are calculated, and how workers are deactivated or fired for assumed fraudulent activity.

In many cases, worker surveillance technologies have been proven to be unreliable. However, the lack of transparency often means that drivers have no idea why they have been deactivated. These technologies are often discriminatory and cause tangible harm to drivers, who usually do not understand the technology behind them and often lack any form of recourse. 

Structural injustice

While private tech companies and the use of surveillance technologies pose big risks for workers, state and government authorities have also been complicit in these problems.

In the UK, the majority of full-time Uber drivers belong to Black, Asian and Minority Ethnic (BAME) communities, who often bear the brunt of racist and discriminatory actions inflicted by regulators and exacerbated by private companies that utilise these seemingly “neutral” technologies.

There is a history of TfL adopting discriminatory rules and procedures that primarily affect marginalised workforces. For instance, when drivers submitted a freedom of information request during the period when private hire drivers were demonstrating against the congestion charge in London (a tax imposed mainly on private hire drivers who come from overwhelmingly BAME backgrounds), they found out that their social media accounts were being monitored.

This highlights the existence of institutional problems and the state’s inability to protect the most precarious workers. Regulators and other public bodies also often introduce new problems through technology-mediated systems. 

A broader coalition

If the state is unable to protect its most precarious workers and citizens, how can civil society, academics, and researchers help?

At DFF’s strategy meeting, participants highlighted that digital rights organisations, strategic litigators and researchers tend not to have access to the communities directly impacted by algorithmic management and surveillance technologies.

Much of the discussion during the session was therefore focused on the question of how we as digital rights organisations, academics, or researchers understand class struggle. In this context, Yaseen Aslam, one of the lead claimants in the UKSC case, shared his and other fellow drivers’ experiences of working for tech companies like Uber and his collaboration with James Farrar of the App Drivers and Couriers Union (ADCU) and Workers Info Exchange (WIE).

Yaseen particularly emphasised the need to have perspectives beyond the digital rights and strategic litigation field, thus ensuring that those who have been directly impacted by the actions and business models of tech companies, and the technologies they build, are involved in the action. 

Equally, participants stressed the need to cultivate better reflexivity in our own efforts to help. Yaseen highlighted that civil society should focus on empowering direct action within affected communities and take care not to hijack or exploit those communities and movements.

Change needs to come from within those worker communities, and the value that civil society can provide is to build trust. This inevitably takes time, and should encourage us to think of ways to establish the channels and capacity to help when and where needed. 

Empowerment through data

There is much potential for workers to harness their digital rights in order to address power asymmetries that arise from the increasing algorithmic power of tech companies over workers in the gig economy and beyond.

Workers Info Exchange, a non-profit that focuses on helping workers obtain and gain insight into data collected by private tech companies in the gig economy, is one key organisation fighting to claim back workers’ agency through building a data trust for workers.

With the recent judgments at the Amsterdam District Court on workers exercising their GDPR rights, there is now momentum to harness digital rights as avenues to explore legal challenges, as well as to use digital rights as a springboard for collective action and organising.

Activists, researchers and groups in the digital rights sphere are well placed to help sustain this momentum.

As digital rights organisations, we can take several steps to broaden our opportunities for collaboration. In particular, we should:

  • gain a better understanding of how workers identify with algorithmic control and related discrimination issues and how to develop legal (and non-legal) practices to help with these issues;
  • identify common ground related to the use of technologies between different sectors within the gig economy;
  • centre the struggles and issues of affected communities by taking time to invest and build trust, in order to instil capacity to empower direct action from the worker communities themselves;
  • expand digital rights into other areas and connect with communities on the ground; and
  • contribute to the efforts of grassroots organisations and unions working on data rights, as well as doing a better job of explaining digital rights to workers in ways and language that are accessible.

Ultimately, the protection of workers’ rights should not solely fall upon members of already marginalised communities, who, at any rate, are often the proverbial canary in the coal mine.

While laws exist to protect (some) individuals, the enforcement of many of these laws is failing. Hence, it is important to think about the limits of the law, and also how the law can harness or support existing efforts in building collective action.

Individuals and communities in the digital rights space have the knowledge, network and skills to be part of building worker power in the gig economy. We should use this to collaborate with impacted communities to protect them and everyone from unequal power dynamics.

Jill Toh is a PhD candidate at the Institute for Information Law, University of Amsterdam.

Photo by Lexi Anderson on Unsplash