Why COVID-19 is a Crisis for Digital Rights

By Nani Jansen Reventlow, 16th April 2020

Street art of a black face mask with "COVID-19" writing

The COVID-19 pandemic has triggered an equally urgent digital rights crisis.

New measures being hurried in to curb the spread of the virus, from “biosurveillance” and online tracking to censorship, are potentially as world-changing as the disease itself. These changes aren’t necessarily temporary, either: once in place, many of them can’t be undone.

That’s why activists, civil society and the courts must carefully scrutinise questionable new measures, and make sure that – even amid a global panic – states are complying with international human rights law.

Human rights watchdog Amnesty International recently commented that human rights restrictions are spreading almost as quickly as coronavirus itself. Indeed, the fast-paced nature of the pandemic response has empowered governments to rush through new policies with little to no legal oversight.

There has already been a widespread absence of transparency and regulation when it comes to the rollout of these emergency measures, with many falling far short of international human rights standards.

Tensions between protecting public health and upholding people’s basic rights and liberties are rising. While it is of course necessary to put in place safeguards to slow the spread of the virus, it’s absolutely vital that these measures are balanced and proportionate.

Unfortunately, this isn’t always proving to be the case.

The Rise of Biosurveillance

A panopticon world on a scale never seen before is quickly materialising.

“Biosurveillance”, which involves the tracking of people’s movements, communications and health data, has already become a buzzword used to describe certain worrying measures being deployed to contain the virus.

The means by which states, often aided by private companies, are monitoring their citizens are increasingly extensive: phone data, CCTV footage, temperature checkpoints, airline and railway bookings, credit card information, online shopping records, social media data, facial recognition, and sometimes even drones.

Private companies are exploiting the situation and offering rights-abusing products to states, purportedly to help them manage the impact of the pandemic. One Israeli spyware firm has developed a product it claims can track the spread of coronavirus by analysing two weeks’ worth of data from people’s personal phones, and subsequently matching it up with data about citizens’ movements obtained from national phone companies.

In some instances, citizens can also track each other’s movements, leading to not only vertical but also horizontal sharing of sensitive medical data.

Not only are many of these measures unnecessary and disproportionately intrusive, they also give rise to secondary questions, such as: how secure is our data? How long will it be kept for? Is there transparency around how it is obtained and processed? Is it being shared or repurposed, and if so, with whom?

Censorship and Misinformation

Censorship is becoming rife, with many arguing that a “censorship pandemic” is surging in step with COVID-19.

Oppressive regimes are rapidly adopting “fake news” laws. This is ostensibly to curb the spread of misinformation about the virus, but in practice, this legislation is often used to crack down on dissenting voices or otherwise suppress free speech. In Cambodia, for example, there have already been at least 17 arrests of people for sharing information about coronavirus.

At the same time, many states have themselves been accused of feeding disinformation to their citizens to create confusion, or are arresting those who express criticism of the government’s response.

Some states have also restricted free access to information on the virus, either by blocking access to health apps or by cutting off access to the internet altogether.

An all-seeing, prisonlike panopticon

AI, Inequality and Control

The deployment of AI can have consequences for human rights at the best of times, but now, it’s regularly being adopted with minimal oversight and regulation.

AI and other automated learning technologies form the foundation of many surveillance and social control tools. Because of the pandemic, they are being increasingly relied upon to fight misinformation online and to process the huge increase in applications for emergency social protection – applications which are, naturally, more urgent than ever.

Prior to the COVID-19 outbreak, the digital rights field had consistently warned about the human rights implications of these inscrutable “black boxes”, including their biased and discriminatory effects. The adoption of such technologies without proper oversight or consultation should be resisted and challenged through the courts, not least because of their potential to exacerbate the inequalities already experienced by those hardest hit by the pandemic.

Eroding Human Rights

Many of the human rights-violating measures adopted to date have been taken outside the framework of proper derogations from applicable human rights instruments, which would ensure that emergency measures are temporary, limited and supervised.

Legislation is being adopted by decree, without clear time limitations, and technology is being deployed in a context where clear rules and regulations are absent.

This is of great concern for two main reasons.

First, this kind of “legislating through the back door”, with measures that are not necessarily temporary, bypasses the proper democratic process of oversight and checks and balances, resulting in de facto authoritarian rule.

Second, if left unchecked and unchallenged, this could set a highly dangerous precedent for the future. This is the first pandemic we are experiencing at this scale – we are currently writing the playbook for global crises to come.

If it becomes clear that governments can use a global health emergency to introduce rights-infringing measures without being challenged, and without having to reverse those measures once the crisis has passed – making them permanent instead of temporary – we will essentially be handing authoritarian regimes a blank cheque: they need only wait for the next pandemic to impose whatever measures they want.

Therefore, any and all measures that are not strictly necessary, sufficiently narrow in scope, and of a clearly defined temporary nature need to be challenged as a matter of urgency. If they are not, we will be unable to push back against an otherwise certain path towards a dystopian surveillance state.

Litigation: New Ways to Engage

In tandem with advocacy and policy efforts, we will need strategic litigation to challenge the most egregious measures through the court system. Going through the legislature alone will be too slow and, with public gatherings banned, public demonstrations will not be possible at scale.

The courts will need to adapt to the current situation – and are in the process of doing so – by offering new ways for litigants to engage. Courts are still hearing urgent matters, and questions concerning fundamental rights and our democratic system will fall within that remit. This has already been demonstrated by the first cases requesting oversight of government surveillance in response to the pandemic.

These issues have never been more pressing, and it’s abundantly clear that action must be taken.

At DFF, we’re here to help. If you have an idea for a case or for litigation, please apply for a grant now.

Images by Adam Niescioruk on Unsplash and I. Friman on Wikipedia

Accessing Justice in the Age of AI

By Alexander Ottosson, 9th April 2020

Facade of a court building

There is no doubt that the continuous development of artificial intelligence (AI) has a lot of upsides. It has the potential to revolutionise our industries, streamline our healthcare systems, mitigate climate change, reduce the cost of public services, and much more.

But there is simultaneously a great deal of risk involved in its development and use – including the risk of unlawful discrimination, invasion of privacy, interferences with freedom of expression, and failure to uphold due process and the right to a fair trial.

I recently had the opportunity to discuss several of these AI-related challenges at Digital Freedom Fund’s annual strategy meeting in Berlin. One of the sessions, which I had the privilege to facilitate, concerned the relationship between the use of AI and access to justice. The aim of the session was to ascertain whether, and to what extent, best practices developed in the context of cases concerning secret surveillance could be transposed to cases concerning government use of AI.

Cases concerning both secret surveillance and AI are inherently complex, not only on account of substantive law, but also because they are rife with technical and evidentiary difficulties. The transfer of rights from paper to practice is thus costly and, in many cases, unrealistic for most people. As the Irish judge Sir James Mathew aptly put it back in the Victorian era: “[j]ustice is open to all, like the Ritz hotel.”

Like the Ritz – or the Radisson or Best Western or indeed any hotel – justice is open to all merely in theory. In reality, access to justice is often limited by, among other things, structural injustices, inadequate financial resources, lack of knowledge, time constraints or ineffective remedies.

What we set out to explore during our session was how access to justice, in the context of AI, could be improved, so as to move us closer to the ideal that justice should be open to all not only in theory, but also in practice. To that end, the following lessons from the secret surveillance context are of particular note.

Transparency to Tackle Abuse

In cases concerning secret surveillance, the European Court of Human Rights (ECtHR) has held that “notification of surveillance measures is inextricably linked to the effectiveness of remedies before the courts and hence to the existence of effective safeguards against [abuse]” (see Roman Zakharov v. Russia [GC], § 234). This certainly applies in the AI context as well.

It is essential for access to justice that individuals can easily obtain information on when and how their government uses AI. When a decision in an individual case is taken by an AI system without human intervention or by a human relying on output data from such a system, the individual should be notified of this in the decision.

Notification gives the individual a valuable opportunity to seek further information on the AI being used and to obtain appropriate legal and technical advice on appealing the decision and challenging errors, flaws or biases.

In the case of automated decisions, notification is also a prerequisite for the individual’s ability to exercise the right to obtain human intervention.

As for the type of information needed, technical details about the AI system are not always required. Sometimes, however, details of the algorithms and data structures may be essential to making one’s case – information which is not always easy to come by.

A regular objection seems to be that the software is protected as a proprietary trade secret and thus shielded from review. To counter this objection, one might draw from the US experience, where procedural due process arguments have proven adept at dismantling such claims.

Another obstacle to acquiring more detailed information arises when the AI system in question operates as a so-called “black box”. These systems are designed in such a way that it is not discernible how inputs become outputs. That is, of course, highly problematic since it limits the possibilities of verifying, among other things, the legality and proportionality of a particular interference caused by such systems.
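
To make the problem concrete, here is a minimal sketch in Python – using a hypothetical model and synthetic data, not any real government system – of why a “black box” decision is so hard to contest: the output arrives without any human-readable rationale attached.

    # Minimal sketch of an opaque automated decision (hypothetical data and model).
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 20))                # hypothetical applicant features
    y = (X @ rng.normal(size=20) > 0).astype(int)  # hypothetical past outcomes

    # A complex ensemble model: it produces outcomes, but no human-readable rules.
    model = RandomForestClassifier(n_estimators=200).fit(X, y)

    applicant = rng.normal(size=(1, 20))
    print("decision:", model.predict(applicant)[0])  # e.g. 0 = denied, 1 = granted
    # The path from these inputs to this output runs through hundreds of
    # decision trees; neither the affected person nor a reviewing court can
    # see which factors drove the result.

An individual notified of such a decision can see the outcome, but not the reasoning behind it – which is precisely the information deficit at issue.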

To circumvent the issue, the information deficit should be viewed not as an obstacle to challenging the fundamental rights implications of a “black box” system, but rather as the basis on which to challenge it.

An Ombudsperson for AI?

Another approach to increasing access to justice is to put in place alternatives to expensive and complex court proceedings. To that end, states should consider establishing a national ombudsperson for AI with which individuals could lodge complaints without cost – or, in the alternative, embedding such a specialised function within an existing body.

In either case, such a remedy would serve as an accessible alternative to court proceedings, and its specialisation would also be conducive to ensuring that complaints are reviewed by someone with the necessary technical expertise.

As for the powers that should be afforded to the ombudsperson, one can again find guidance from cases on secret surveillance. In this context, the ECtHR has laid down a requirement that the body in charge of overseeing state surveillance activities should have the power to “stop or remedy the detected breaches of law and to bring those responsible to account.” This should similarly apply to the ombudsperson for AI.

Moreover, if the ombudsperson finds that an AI system has breached an individual’s fundamental rights, it should ideally be able to order the suspension of further use of the impugned system until appropriate safeguards against further breaches have been put in place.

Further Action: Snowballing Efforts

The takeaways from the session outlined above can and should be further explored, developed and complemented.

In the coming years, additional legal and ethical lines will be drawn with respect to AI; policies will be adopted and legislation will be passed. It is essential that measures to increase access to justice are an integral part of these efforts.

In the end, our session concluded with a sense that while a lot can be done to increase access to justice in the context of AI, more focus on the issue is required. It is time to start snowballing our efforts.

For my part, I certainly hope to see and take part in more discussions, research and other initiatives focusing on this issue in the future. Only through careful consideration and decisive action can we make sure that justice eventually becomes open to all.

Alexander Ottosson is an associate at the Stockholm-based public interest law firm Centrum för rättvisa (Centre for Justice).

Image by Brett Sayles on Pexels

What Decolonising Digital Rights Looks Like

By Aurum Linh, 6th April 2020

Coloured bricks of lego

Decolonisation is core to all of our work as NGOs and non-profits. We are striving to create a future that is equitable and just. To do that, we need to dismantle the systems of racism, anti-blackness, and colonisation embedded in every aspect of our society.

This is a particularly urgent conversation to have in the digital rights field, given the belief that technology will liberate us from these biases. In reality, we can see that technology is deepening existing divides and automating these systems of oppression.

We can’t decolonise something if we don’t know what colonisation is. In her TEDx talk, “Pedagogy of the Decolonizing”, Quetzala Carson explains what colonisation is and how deeply it is embedded in nearly every aspect of our lives: “Colonisation is when a small group of people impose their own practices, norms and values. They take away resources and capacity from indigenous people, often through extreme violence and trauma.”

Quetzala goes on to explain that colonisers also bring their axiology – how things are quantified and how value and worth are ascribed to them. They impose their assessment of the value of people, resources and land, and that assessment becomes embedded in the institutions that then create the nation-state in settler colonialism. All of the established laws, policies, institutions and governance structures are based on the beliefs that were brought upon contact.

How we conceive and transfer knowledge, as well as what knowledge we see as credible and valid (known as epistemology), are also based on these colonial beliefs. How we exist within these structures and how we interpret reality (known as ontology) is deeply influenced by colonisation as well. Axiology, epistemology and ontology all come together for the state to create a narrative – to define what counts as “normal”.

This is why it’s so uncomfortable and painful to have these conversations – because these structures and beliefs have been embedded in our own hearts and minds. To have these beliefs challenged feels like an attack on our own being, but we have to remember that these beliefs were taught to us and are deeply embedded within us by design. 

Nikki Sanchez, an indigenous scholar, explains that decolonisation means giving up social and economic power that disempowers, appropriates and invisibilises others; dismantling racist and anti-black structures; dismantling the patriarchy; finding out how you benefit from the history of colonisation and activating strategies that allow you to use your privilege to dismantle it; and building and joining communities that work together to build more equitable and sustainable futures.

Decolonising must first happen within ourselves – decolonising our own hearts and minds. It is necessary to actively combat and resist systems of oppression not only in the world around us, but also within ourselves.

DFF held a strategy meeting in February of this year, and there were two sessions on decolonisation (one of which I facilitated) that resulted in the following strategies being shared:

  • Unlearn and re-educate yourself.

  • Acknowledge your privilege and use it to dismantle the system from that position of power within the system.

  • Actively start conversations with people about privilege, decolonisation and anti-racist work.

At the organisational level, how can you give people tools to reflect and engage with this concept in a meaningful and critical way? How might we make it part of the culture of the organisation itself, embedded within every aspect of the organisation, as opposed to something that is considered an add-on or nice-to-have? How can decolonisation be the flour (vital to the recipe), and not the icing (an add-on)?

Culture is cultivated. Participants at the strategy meeting brainstormed a number of practical steps that could be taken at the organisational level to cultivate a decolonised culture. Examples of the suggested organisational measures include:

  • Create a common language around decolonisation, and make publicly questioning the influence of biases and privilege part of your organisational culture.

  • Remember that this work is about more than just hiring people of colour (PoC), and that significant work is required of a mainly white organisation before bringing in someone of colour. Otherwise, it could put that person in a position of having to educate others and endure traumatic conversations.

  • Learn what white fragility is, and be aware and conscious of when white fragility is arising in conversations.

  • Ask yourselves “are we the right people to be doing this work?” and “are we taking resources from other people or organisations that have been doing this work?”

  • Only put the necessary qualifications on job descriptions – women and PoC are less likely to apply to jobs if they don’t meet every qualification listed. Be conscious of this.

  • If no one on your team is part of the marginalised community you are working to protect, acknowledge that your organisation is coming from a place of allyship. Do not act like stakeholders when you are not part of the community that you are trying to protect, and ask yourselves (again) “are we the right people to be doing this work?” and “how can we provide resources to the community members who are doing this work?”

  • Recognise your blind spots on issues of power asymmetries both within and between private and state actors.

  • Pay a liveable salary – people are often financially responsible for others (their parents, siblings, etc.) and can’t afford to live on a low income.

  • Avoid tokenism. Does everyone truly have a seat at the table or are some people there as (or made to feel like) figureheads for “diversity” purposes?

  • Consider what problems get solved first at your organisation. Who decides what? There is space here to rethink and/or dissolve structural hierarchies.

  • Set clear standards to cultivate inclusive meetings by design – for example, rules that prohibit interrupting others and create space for pointing out problematic behaviour.

  • Restructure how you measure impact and work, and recognise “invisible work” like mentorship.

Colonisation is a collective history that connects us all. The effects of colonisation are deeply internalised in nearly every aspect of our waking lives. What is your personal role in this healing? What role can your organisation play in actively decolonising the digital rights space and beyond? Ultimately, all of these actions create a collective movement towards healing, justice, and dismantling systems of oppression. 

Aurum Linh is a technologist and product developer embedded as a Mozilla Fellow within the Digital Freedom Fund.

Image by Omar Flores on Unsplash