Envisioning a Decolonised Digital Rights Field – and Charting Next Steps

By Nani Jansen Reventlow, 5th December 2020

How do we create change? Numerous books, essays and TED talks have been dedicated to this question, and courses have been designed to equip us to change policy, workplace environments, and numerous other contexts.

A crucial ingredient seems to be having a vision of what that change should look like: what is the point on the horizon to set your compass towards?

This “ability to dream” and collectively envision a different future has also been a key theme since DFF and EDRi started an initiative to set in motion a decolonising process for the digital rights field.

If together we had built a digital rights field in which all groups in society have their voices heard and which works to protect the digital rights of all, what would that look like? And what could such a decolonised field achieve?

This week, a group of 30 participants working on racial, social and economic justice, digital rights, and philanthropy came together not only to collectively imagine just that, but also to identify the building blocks for a process that might help us get there.

The gathering followed a series of conversations that commenced in March this year, when we started by speaking to organisations, collectives, activists, and others currently outside the digital rights field to understand how they engage with digital rights issues.

From the summer onwards, we initiated similar conversations with digital rights organisations and funders, to learn more about how they engage with the digital rights of marginalised groups, such as people of colour, LGBTQI people, disabled people, or refugees.

“Blue sky visions” are not easy: we are used to seeing obstacles and challenges in the work we do. Stepping away from that to imagine something positive, without practical constraints, is hard. It is especially hard in an online setting, even more so towards the end of a challenging year in which we’ve all had a few video calls too many.

However, with the positive energy present in the virtual room (nurtured by the excellent facilitation from The Hum), the group managed to step away from current constraints and imagine the headlines that would dominate the news in 2040.

…the group managed to step away from current constraints and imagine the headlines that would dominate the news in 2040

The future looked bright: it was one in which visitors to the Silicon Valley Mausoleum could hardly imagine a past in which tech oligarchy had been the norm; digital rights, anti-racist, social justice, and climate movements were intertwined and working together; and we had shifted power structures from a system of capitalism to community. Social problems were addressed at the root and central to the field’s efforts, which were supported with ample resources. Other common themes were learning and commemoration, acknowledging that oppression has its roots in a history of domination and colonisation.

So how can we get to this collectively imagined future? In the course of the following sessions, we took a closer look at parts of this question.

What are the building blocks for developing a decolonising programme for the digital rights field? What would the shared principles of such a collective effort be – to ensure participation, ownership and engagement in safety and collaboration? What do we need to do before starting this work, what do we need to map, research, know? What are the potential obstacles we might encounter and how can we address them? And: which other fields or actors should be inspired to undertake decolonising processes of their own?

What would the shared principles of such a collective effort be – to ensure participation, ownership and engagement in safety and collaboration?

Conversations on what should be part of the “design phase” –– the stage in which we collectively design a decolonising programme for the digital rights field (see illustration below) –– were incredibly rich, yielding many practical suggestions as well as deeper questions for further reflection. 

Our next task is now to bring together the picture of the design phase that has emerged from this week’s gathering, from the over 50 individual conversations we have had over the past months, and from learnings from other decolonising processes. This includes taking into account some of the deeper questions to reflect on and other preparation needed before starting this next phase of the work.

We operate in a difficult and adverse context where power imbalances and inequalities are growing. We will not get there alone

We operate in a difficult and adverse context where power imbalances and inequalities are growing. We will not get there alone. Social change is hard work. But this gathering has left us energised and even more motivated to work towards our collective vision.

We are deeply grateful to everyone who made the time at the end of a challenging year to engage with us and each other on these difficult questions in such a kind and open manner. We are especially grateful to those whose personal identity is at the centre of this conversation, considering the energy it requires.

Claire Fernandez is the Executive Director of EDRi, an association of civil and human rights organisations from across Europe that defends rights and freedoms in the digital environment.

Supporting Long-Term Impact: Announcing Changes to DFF’s Grantmaking

By Nani Jansen Reventlow, 29th November 2020

DFF is coming to the end of its first 3-year strategy cycle, which we have informally referred to as the “pilot phase”. The end of our pilot phase also means the closure of our current grantmaking process.

DFF will not accept any further grant applications in 2020 as we are working to finalise a new, revised process, which we are looking forward to launching in early 2021.

DFF has changed significantly since we were first introduced to the world in October 2017. Based on input from the digital rights field, we established a strategy and priorities for our funding, launched a grantmaking process, coordinated annual strategy meetings, facilitated strategic litigation retreats, and held field-building workshops on topics like the GDPR, algorithm use, and competition law.

We were very happy that a recent external evaluation concluded these activities are seen as adding important value to the digital rights field.

While we were able to take important first steps in supporting the digital rights field during the pilot phase, there are areas where we can do more to make sure we are best serving its needs. One area of particular importance is our grantmaking.

The current grantmaking process was launched in July 2018, after being developed in dialogue with the digital rights field. By the end of 2020, DFF will have approved more than 40 grants, worth a total of over €1.5 million, supporting the litigation and pre-litigation research projects of 30 different organisations and individuals across Europe. Many of these projects are detailed on our case study page and in our annual report, with more to be published over the coming months.

By the end of 2020, DFF will have approved more than 40 grants, worth a total of over €1.5 million

Building on the participatory approach taken in developing the initial grantmaking process, DFF has continued to revise its grantmaking throughout the pilot phase. We actively sought feedback through regular surveys, outreach, and conversations with applicants and grantees. In response to questions about the scope of our grants or what we expected to see in applications, we published guides and a frequently asked questions page to help organisations prepare their applications.

When we received feedback that the application process took too long, we developed a “fast-track” application process for grantees moving from one grant to another. We have also sought to add value in other ways: for example, in 2019, we developed a new framework to better capture the outcomes and impacts of strategic litigation.

We are proud of the many great projects we have been able to support over the last three years, and hope that the efforts we made to improve our processes during that time managed to address at least a good part of the needs of our grantees. But: there are some major limitations we can only overcome by changing the scope of the grantmaking process itself.

There are some major limitations we can only overcome by changing the scope of the grantmaking process itself

For example, many organisations have requested support for adverse costs – costs to cover possible court orders to pay the fees of the opposing party in the case of a loss – noting that these costs are a major barrier to pursuing public interest litigation. While recognising that this would be a welcome type of support, we also wanted to do justice to the complexity of the issue. How do you help litigators absorb the negative impact an adverse cost order might have on their operations, without disincentivising courts from taking the public interest into account and without, through funding support, incentivising frivolous litigation – the very thing cost orders are supposed to discourage?

We also wanted to make sure we adopted an approach that would be equitable across the entire geographical region DFF serves, and not just certain jurisdictions. Following detailed research and consultations, from 2021, applicants will be able to include adverse costs in their grant applications under certain conditions. Our new policy seeks to strike a balance: helping to mitigate the impact of cost orders on digital rights litigators, while making sure DFF doesn’t unintentionally encourage the practice itself.

…from 2021, applicants will be able to include adverse costs in their grant applications under certain conditions

The biggest limitation of DFF grantmaking during the pilot phase was our inability to support long-term projects over multiple litigation instances. DFF was established to provide grants supporting strategic litigation, i.e. litigation that has an impact beyond the parties involved in the case and that leads to legal, policy or social change. However, this kind of impact often takes more than one instance of litigation to achieve. While providing this type of support was envisaged from the very early stages of DFF’s development, as a young organisation DFF was not able to implement it from the start.

We are working to change this in 2021, when we hope to launch a new grantmaking process allowing organisations to apply for grants that support multiple instances of litigation. We are in final discussions to ensure the process carefully balances the demands of the field with operational constraints, and gets support from our funders and the DFF Board.

…we hope to launch a new grantmaking process allowing organisations to apply for grants that support multiple instances of litigation

The new process will also build on the lessons learned from the COVID-19 Litigation Fund, which was our first time providing grants of this nature. We hope the new process will allow applicants to more effectively plan their cases over a long period, with the confidence that they will be able to see a case through to the highest level necessary, and also provide an incentive to invest in building long-term strategies, coalitions and campaigns with other partners.

The end of the current grantmaking process will not affect current grantees, whose projects will carry on until they are completed. DFF will continue to move the small number of remaining active applications through our current process, with final decisions to be made before the end of 2020.

Look out for more details from us later in the year and please get in touch if you have questions or comments in the meantime –– as always, we welcome your feedback and input. We look forward to supporting more strategic litigation efforts to advance digital rights across Europe in 2021 and beyond.  

Decolonising Digital Rights: Why It Matters and Where Do We Start?

By Nani Jansen Reventlow, 23rd October 2020

This speech was given by DFF director, Nani Jansen Reventlow, on 9 October as the keynote for the 2020 Anthropology + Technology Conference.

The power structures underlying centuries of exploitation of one group by another are still here.

Besides the fact that we, in reality, still have over 60 colonised territories around the world today, maintained by 8 countries (though the UN General Assembly would disagree with that number), colonisation has taken on many different forms, including in and through technology.

What does this mean for our societies? What would things look like if they were different? How do we get there –– or: how do you decolonise society? How do you decolonise technology? And how do you decolonise digital rights?

I will start this talk with a spoiler: I will not be able to provide you with an answer to these fundamental questions. What I will try to do in the next half hour is tell you something about the problems we at the Digital Freedom Fund are seeing in Europe when it comes to digital rights and what is often euphemistically referred to as “diversity and inclusion”.

I will also tell you about what we are doing to try to set in motion a process to fundamentally change the power structures in the field that works on protecting our human rights in the digital context.

But before we get there, we first need to take a look at what the problem is and why we have it.

Sketchnote by Gunjan Singh

What’s the problem?

Today’s conference is centred around “championing socially responsible AI,” but: what does this mean? From a human rights perspective, many AI-related digital rights conversations tend to focus on the right to privacy and data protection. In doing so, these conversations often miss the full extent of the social impact new technologies can have on human rights. This is one of the reasons why it is so important that we decolonise the digital rights space and encourage an intersectional approach to AI and human rights issues.

Let me illustrate some of the issues in the context of the themes of this conference, starting with:

Health tech

The implications of health tech for individuals are a prominent topic of conversation at the moment, as we face the COVID-19 pandemic.

Big Data solutionism is pervading coronavirus responses across the globe, with contact tracing, symptom checking, and immunity passport apps being rolled out at rapid speed.

These technologies are often not properly tried and tested, and it is clear that privacy and data protection have often not been front and centre for those developing the technology, let alone other human rights considerations. They also illustrate the degree to which our technology and the way we deploy it is colonised.

The UK, for example, launched a “Test and Trace” system in England and Wales in May this year, without, it later admitted, having properly conducted a data protection impact assessment. This admission came after the NGO Open Rights Group had threatened legal action. The heavily criticised app was abandoned and, since late September, a new one is available, which addresses some of the privacy concerns previously raised.

But: is privacy the only issue we should be looking at when considering the viability of using tracing apps to combat a public health emergency? Limiting our analysis to privacy and data protection alone results in blind spots on many of the broader issues at play, such as discrimination and access to healthcare. A few examples. 

To be able to download and use the app, you need to have a relatively new phone with the right operating system installed. This means that those who cannot afford such technology, or do not have direct access to it, are excluded.

There is also an assumption that each user is uniquely linked to a phone. And of course, in order to download the app and receive warnings and notifications, you will have to be online.

To put it simply: the effectiveness of the app is based on an assumption that the “average person” in society is the exclusive owner of a new smartphone with reliable access to the internet. Researchers at Oxford University have estimated that more than half the population of a country would need to make use of a tracing app in order for it to be effective.

This raises the question: what happens to the “less than half” – the rest of the population that does not or cannot make use of the app – and how does the automation of disease control affect their vulnerability? Should we want to use an app at all if it is not effective for the protection of everyone in our society?

This raises the question: what happens to the “less than half” – the rest of the population that does not or cannot make use of the app – and how does the automation of disease control affect their vulnerability?
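To get a rough sense of why uptake matters so much, consider a deliberately simplified back-of-the-envelope illustration (this is my own simplification, not part of the Oxford research, and it assumes that any two people who come into contact are equally likely to have installed the app). A contact can only be traced if both people involved are using the app, so with an uptake of 60% the chance that a given contact is covered is roughly 0.6 × 0.6 = 0.36, or about a third. Even at ambitious levels of adoption, the majority of contacts go unrecorded, and those left out are disproportionately the people already excluded from the technology in the first place.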

Concerns around health data, however, are not new to the COVID-19 context. The UN Special Rapporteur on privacy has recognised that medical data is of “high value” for purposes such as social security, labour, and business. This means that stakeholders such as insurance companies and employers have a considerable interest in health-related data.

Many health services are built on the values of trust and confidentiality. But as more and more actors move into the health data space, these values are increasingly fading out of focus.

A failure to protect health data can engage the rights to life, social protection, healthcare, work, and non-discrimination. Very concretely, it may deter individuals from seeking diagnosis or treatment, which in turn undermines efforts to prevent the spread of, say, a pandemic.

The access Home Office immigration officials are given to check entitlement to health services as part of the UK government’s “hostile environment” policy is a clear example of this, but even in “lower risk” settings, a person might think twice about getting medical help if the potential repercussions of sensitive health information ending up in unwanted hands are sufficiently grave.

FinTech

It is said that health cannot be bought, but with that wisdom in mind, let us turn to the second conference theme: FinTech.

Here too, with the ever-increasing automation of financial services, it is not only the right to privacy that is under threat.

Access to financial services, such as banking and lending, can be a decisive factor in an individual’s ability to pursue their economic and social well-being. Access to credit helps marginalised communities exercise their economic, social, and cultural rights.

Access to financial services, such as banking and lending, can be a decisive factor in an individual’s ability to pursue their economic and social well-being

Muhammad Yunus, social entrepreneur and Nobel Peace Prize winner, has gone so far as to say that access to credit should be a human right in and of itself: “A homeless person should have the same right as a rich person to go to a bank and ask for a loan depending on what case he presents.”

In reality, automation in the financial sector often polices, discriminates, and excludes, thereby threatening the rights to non-discrimination, association, assembly, and expression; individuals may not want to associate with certain groups or express themselves in certain ways for fear of how it will impact their creditworthiness.

This policing, discrimination, and exclusion can also have an impact on the right to work, to an adequate standard of living, and the right to education.

Cathy O’Neil has noted that creditworthiness has become an “all-too-easy” stand-in for other virtues. It is not just used as a proxy for responsibility and “smart decisions”; it is also a proxy for wealth. Wealth, in turn, is highly correlated with race.

While the FinTech narrative is that it works with “unbiased” scoring algorithms that are blind to characteristics such as gender, class, and ethnicity, research shows a different picture.

While the FinTech narrative is that it works with “unbiased” scoring algorithms that are blind to characteristics such as gender, class, and ethnicity, research shows a different picture

Many of these “modern” algorithms make their decisions based on historic data and decision patterns. This has led to the coining of the term “weblining”, which describes how the discriminatory practices operationalised in the US in the 1930s through “redlining” – the practice of keeping African American families from moving into white neighbourhoods – are now being replicated in new technology.

Those who can afford to can hire consultants or move to certain neighbourhoods to boost their credit scores. In the meantime, those living in poverty are refused loans, often on a discriminatory basis, and are even targeted, because of their credit scores, with payday loans and other online advertisements that can plunge them further into poverty.

This illustrates that this is more than a question of privacy: it is a question of livelihood.

It doesn’t stop there. Credit scores are even used to make decisions about a person outside of the financial services sector, such as whether to hire or promote them at work. Conversely, other proxies are increasingly being used as stand-ins for creditworthiness. O’Neil has explained how this has contributed to a “dangerous poverty cycle”.

With datasets recording nearly every aspect of our lives, and these data points being relied on to make decisions about us as employees, consumers, and clients, we are being labelled as targets or dispensables. Our occupations, preoccupations, salaries, property values, and purchase histories result in us being labelled as “lazy”, “worthless”, “unreliable”, or a risk, and that label can carry across many different aspects of our lives.

Smart cities

I will be brief about smart cities, because I actually don’t think the conversation about making cities “smarter” through technology is worth having unless we are talking about the need to completely reinvent the urban design that has historically and systemically harmed people of colour, people living in poverty, people with disabilities, and other marginalised groups.

Smart city design runs the risk of automating and embedding assumptions on how we run and manage cities that are less about people, and more about profit and exclusion, as well as vague, unsubstantiated notions of “efficiency”.

Smart city design runs the risk of automating and embedding assumptions on how we run and manage cities that are less about people, and more about profit and exclusion

Smart city initiatives in India have led to mass forced evictions, and in the United States we should ask whether smart city design to “reduce neighbourhood crime” isn’t a euphemism for increased surveillance, with all the racialised policing that comes with it.

The really interesting question here is how we can use AI to help fix these inequalities and harms, but that is not usually the focus of many of the current smart city debates.

Why do we have this problem?

We just looked at some of the manifestations of the problem of colonised technology. So: what causes us to have this problem in the first place?

First, there is a common and incorrect assumption that technology is neutral. However, the apps, algorithms, and services we design ingrain choices made by their creators. They replicate their creators’ preferences, their perceptions of what the “average user” is like, and what this average user would want or should want to do with the technology. These design choices are based on the designers’ world view and therefore also mirror it.

As someone said to me the other day: “an algorithm is just an opinion in code.” When those designers are predominantly male, privileged, able-bodied, cis-gender, and white, and their views and opinions are being encoded, this poses serious problems for the rest of us.

This relates to the second cause, which is that Silicon Valley has a notorious brogrammer problem. This is easily visible when you look at any graph reflecting the makeup of Silicon Valley, where most of our technology here in Europe comes from –– which is a problem in and of itself, as technology developed from a white, Western perspective is deployed around the world.

For professions such as analysts, designers and engineers the numbers for Asian, Latina, and Black women decrease as role seniority increases

For professions such as analysts, designers and engineers the numbers for Asian, Latina, and Black women decrease as role seniority increases. Often this is to the point where they literally become invisible on the graph because their numbers are so small.

Analysis of 177 Silicon Valley companies by investigative journalism website Reveal showed that ten large technology companies in Silicon Valley did not employ a single black woman in 2018, three had no black employees at all, and six did not have a single female executive.

This should make it less surprising that, for example, facial recognition software built by these companies is predominantly good at recognising white, male faces.

However, a predominantly male, white, and able-bodied workforce is not the only thing that factors into technological discrimination.

A third cause, which we already touched upon in the context of FinTech, is that technology is built and trained on data that can already reflect systemic bias or discrimination. If you then use those data to develop and train new software, it is not surprising that this software will be geared towards replicating those historical patterns. Technology based on data from a racist, sexist, classist, and ableist system will produce outcomes that reflect that racism, sexism, classism, and ableism. Unless a conscious effort is made to get the system to make different choices, systems built on such data will replicate the historical preferences they have been fed.
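To make this mechanism concrete, here is a deliberately simplified sketch in Python (the dataset, features, and numbers are invented purely for illustration and are not drawn from any real credit system): a scoring model fitted to skewed historical lending decisions learns to reproduce that skew through a proxy feature, in this case postcode, even though it is never shown a protected attribute.

# A hypothetical, simplified example: a "neutral" scoring model trained on
# biased historical lending decisions reproduces the bias through a proxy
# feature (postcode group), even though no protected attribute is used.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Proxy feature: postcode group (0 = historically "redlined", 1 = not).
postcode = rng.integers(0, 2, n)
# Income in thousands, drawn from the same distribution for both groups.
income = rng.normal(50, 10, n)

# Invented historical decisions: applicants from group 0 were routinely
# rejected even when their income matched approved applicants in group 1.
historical_approval = (
    (income > 45) & ((postcode == 1) | (rng.random(n) > 0.7))
).astype(int)

features = np.column_stack([postcode, income])
model = LogisticRegression().fit(features, historical_approval)

# Two applicants with identical incomes but different postcode groups:
applicants = np.array([[0, 55.0], [1, 55.0]])
print(model.predict_proba(applicants)[:, 1])
# The model assigns a markedly lower approval probability to the applicant
# from the historically disadvantaged postcode group.

Nothing in this sketch mentions race or any other protected characteristic, yet the trained model treats two otherwise identical applicants differently. That is the “weblining” dynamic described above, reproduced in a handful of lines of code.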

Finally, we don’t work consistently with interdisciplinary design teams. Engineers will build technology to certain specifications, and those will have systemic biases baked into them; you can have all the non-male, non-white, non-ableist engineers you can think of, but it will not be up to them (alone) to solve the systemic problems.

If society is sexist, racist, and ableist, so too will be the AI it develops. It is unhelpful to focus on the technology alone, as doing so negates the political and societal systems in which it is developed and operates.

There is a role for social scientists and others at crucial stages of the design and decision-making process; developing suitable AI is not just a task for engineers and programmers

An interdisciplinary approach in developing AI systems is therefore crucial. There is a role for social scientists and others at crucial stages of the design and decision-making process; developing suitable AI is not just a task for engineers and programmers.

What can we do about it?

We obviously have a large-scale problem on our hands and the monster we created won’t easily be put back in its cage. There are, however, a number of things we can do, both in the shorter and longer term.

  • Push for moratoriums on new technologies until we understand their social impact, particularly on human rights. The call for a ban on the use of facial recognition technology has gained more traction following the Black Lives Matter protests, with several tech giants putting a hold on (part of) their products in this area. This should be a guiding principle: unless we know and understand in full what the human rights impact of new technology is, it should not be developed or used.

  • We also need to have the debate on where to draw the so-called red lines on where AI technology should not be used at all. This conversation needs to be people-centred; the individuals and communities whose rights are most likely to be violated by AI are those whose perspectives are most needed to make sure the red lines around AI are drawn in the right places.

  • When we analyse the impact of new technologies, this needs to be done from an intersectional perspective, not by looking only at how they affect privacy and data protection. As we saw earlier, breaches of privacy are often at the root of other human rights violations. Companies and governments need to be held accountable for these violations, and they need to work with multidisciplinary teams that include social scientists, academics, activists, campaigners, technologists, and others to prevent violations from occurring in the first place. This work also needs to be done closely and consistently with affected groups and individuals to understand the full extent of the impact of AI-driven technologies.

  • We need to push for enforcement of and compliance with existing legislation protecting human rights. There is often a call for new regulation, but this seems to conveniently forget that we actually have existing international and national frameworks that set clear standards on how our human rights should be respected, protected, and fulfilled. This is also a healthy antidote to the fuzzy “ethics” debate companies would like us to have instead of focusing on how their practices can be made to adhere to human rights standards.

  • Finally, we need to not only decolonise the tech industry: we also need to decolonise the digital rights field. The individuals and institutions working to protect our human rights in the digital context clearly do not reflect the composition of our societies. This leaves us with a watchdog that has too many blind spots to properly serve its function for all the communities it is supposed to look out for.

At the outset, I referred to “diversity and inclusion” as a euphemism. I did so because diversity and inclusion alone are not enough to make the change that needs to happen.

Instead of focusing on token representation, we need to change the field on a structural level

Instead of focusing on token representation, which essentially treats the current status of the field as a pipeline problem, we need to change the field on a structural level; we need to change its systems and its power structures.

This is something that is fundamentally different from “including” people with disabilities, racialised groups, the LGBTQI+ community, and other marginalised groups in the existing, flawed ecosystem.

Here we return to the big question raised at the beginning of this talk: how do you decolonise a field? The task of re-imagining and rebuilding the digital rights field is clearly enormous. Especially since digital rights cover the full scope of human rights and therefore permeate all aspects of society, the field does not exist in isolation.

Here we return to the big question raised at the beginning of this talk: how do you decolonise a field?

We therefore cannot solve any of these issues in isolation either –– there are many moving parts, many of which will be beyond our reach to tackle, even though we at the Digital Freedom Fund are working with a wonderful partner in this, European Digital Rights (also known as EDRi).

But: we need to start somewhere, and we need to get the process started with urgency as technological developments will continue at a rapid pace and we need a proper watchdog to fight for our rights in the process.

So: what have we done so far?

We started earlier this year with a process of listening and learning. Over the past months, we have had conversations with over 30 individuals and organisations that we are currently not seeing in the room for conversations about digital rights in Europe.

Over the past months, we have had conversations with over 30 individuals and organisations that we are currently not seeing in the room for conversations about digital rights in Europe

We asked about their experience of working on digital rights, their experience of working with digital rights organisations, and what a decolonised digital rights field might look like and what it might achieve.

We also started collecting and reading literature about decolonising technology and other fields, to start developing a possible joint vision we can work towards.

As these conversations continue, we are starting similar conversations with the digital rights field to learn what their experience has been working on racial and social justice issues, working with partners in this field, and also what their vision of a decolonised digital rights field might look like.

The next step is an online meeting in December of this year to (a) connect different stakeholders and (b) receive input on what the next step in the process, which we are referring to as a design phase, should look like. This design phase, which we hope to start next year, should be a multi-stakeholder effort to come to a proposal for a multi-year, robust programme to initiate a structural decolonising process for the field.

Much has changed since we first started talking about the need to decolonise the digital rights sector two years ago.

On the one hand, this is encouraging; on the other, the threat looms large of “decoloniality” becoming yet another buzzword that people like to use but not practise

The recent international Black Lives Matter protests have done a lot to boost awareness of systemic racism. On the one hand, this is encouraging; on the other, the threat looms large of “decoloniality” becoming yet another buzzword that people like to use but not practise.

The irony that our work is now of interest to many –– the media, policymakers, funders… –– validated because the “white gaze” became captivated by racial justice protests amidst the boredom of a global lockdown, is also not lost on me.

That being said, the current mood does illustrate how necessary this work is, not only in the digital rights space, but everywhere in our society. And the more of these processes we can set in motion, the better a world we will be creating for all of us.