Talking Digital? Reflections on Losing and Loving Language

By Laurence Meyer, 24th January 2022

Two cards spelling AI with letters hovering around them and illustrations of hands in the background

DFF’s own Laurence Meyer reflects on losing and loving language following the first Digital Rights for All workshop.

“The language, only the language. The language must be careful and must appear effortless. It must not sweat. It must suggest and be provocative at the same time. (…)  It’s a love, a passion. Its function is like a preacher’s: to make you stand up out of your seat, make you lose yourself and hear yourself. The worst of all possible things that could happen would be to lose that language.”

As always, Toni Morrison knows best. Language is not only what makes us able to communicate but also what gives us the power to create movement: it allows us to organise, to sustain solidarity, to fight back and to build better.

But when it comes to digital rights issues, the language used often creates exclusion rather than building bridges. It feels opaque, made for a small group of experts who have been studying and working on these issues for decades. We have the feeling of losing language, of not being able to participate in the conversation; we actually have the frightening sense that we don’t know what the conversation is about, although it seems to concern us – much like when we face an online form from a public body without understanding what is being asked of us.

It’s about us. We have to be part of the conversation, but we are not part of the conversation.

We actually have the frightening sense that we don’t know what the conversation is about, although it seems to concern us

Certain ways of formulating things sometimes transform everyday life issues – being oppressed, discriminated against, categorised, or surveilled while looking for a job or a home, going from one country to another, going to school, or walking in the street – into merely academic problems.

It also often leads us to concentrate more on how harmful technologies work than on what they do to people. Mobilising a technocentric language invites a focus on technocentric solutions – for example, talking about “algorithmic bias” in facial recognition technologies or biometric surveillance instead of racist policing, wrongful arrests, forced gender re-assignment, or the reckless endangerment of people crossing borders.

Often, because of the words used, it can seem that digital rights issues are too complicated and completely separated from what we know, or too distant from more pressing issues. It can all seem too big to even begin working on. It’s akin to learning a new language, with words describing a completely new reality, after the age of 11: without very strong reasons or a lot of time at our disposal, why would we do that? The extensive use of acronyms such as CSO, GDPR or FRT can reinforce this feeling of alienation through language.

…it can seem that digital rights issues are too complicated and completely separated from what we know

The terminology used also carries its own ideological baggage. The notion of “artificial intelligence”, for example, tends to reduce intelligence to the ability to make decisions based on a binary way of thinking: valid/invalid, correct/incorrect, right/wrong, a face/not a face. This understanding of intelligence is in part the fruit of a colonial history of disqualifying other forms of understanding and creating meaning that aren’t using Eurocentric norms of rationality. Forms of accessing knowledge through myths, songs, dance and so on – languages of love and emotions – are, by contrast, deemed purely cultural products or dismissed as uncivilised.

This understanding of intelligence is in part the fruit of a colonial history of disqualifying other forms of understanding and creating meaning that aren’t using Eurocentric norms of rationality.

The other element of “artificial intelligence” – the “artificial” – is also not neutral: it supports the invisibilisation of forms of exploitation. Systems qualified as AI, for example, do not rely solely on artificial power (nothing is purely artificial) but depend largely on the work of precarious workers in both the Global North and the Global South.

Julian Posada, a PhD candidate at the University of Toronto, currently works on how labour and artificial intelligence intersect, both in the development of AI and in its deployment. In a Medium article called “A New AI Lexicon: Labor – Why AI Needs Ethics from Below”, he shares some of his findings from field research in Latin America on the experiences of “workers who annotate data for AI through digital labor platforms”. The article exposes the exploitative dynamics at play, highlighting the precarious employment conditions and the opacity regarding how workers’ contributions are used. He writes:

“To companies that run these platforms — often based in countries like the United States, Germany, and Australia — contractors are invisible, cheap, and receive little recognition. Yet these ‘ghost workers’ have fueled many of the recent advances in artificial intelligence. Machine learning developers often consider them a necessary burden because of the need for human labor to annotate data combined with the fear of “worker subjectivity” becoming embedded as biases in the datasets and, ultimately, the algorithms. This vision from above fails to see that human labor is not a cost but an asset. Data are never neutral, and therefore, data are never unbiased.”

There is a need to go beyond the modern myth pushed by big tech, which views technology as neutral and AI development as separate from social issues.

During our first Digital Rights for All workshop, participants enjoyed both a rich presentation by Aparna Surendra on how algorithms work and an overview by Gracie Mae Bradley of how she works on “Integrating digital issues into social justice campaigning”.

One of the key points made during the conversations was the necessity of never detaching technology from pre-existing practices. This means several things: placing digital questions within their broader context; centring the historical and social roots and consequences of the problem; and decentring discussions of the type of technology used (automated decision-making systems or not, big data, etc.).

Doing so, many participants agreed, will allow us to build more sustainable coalitions and enable enriching exchanges when determining strategic goals.

This question of how we talk about tech is not disconnected from how big tech itself uses language, in ways that harm us. We must ask ourselves: whose language, and for whose profit?

In an article Sarah Chander sent me, “A Body of Work That Cannot Be Ignored”, J. Khadijah Abdurahman addresses the reasons behind the firing of Timnit Gebru. She writes:

“Her firing was the final step in management’s systematic silencing of her scholarship, which came to a head over a paper she co-authored called ‘On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?’ The paper offers a damning critique of the energy demands and ecological costs of the sorts of large language models that are core to Google’s business, as well as how those models reproduce white supremacy—and codify dominant languages as default while expediting the erasure of marginalized languages.”

Displacement is happening in a very literal manner because of the way language models used by big tech companies such as Google amplify language hegemony and dynamics of linguistic legitimacy. We know that speaking one language is often perceived as more valuable than speaking six languages, if that one language is English and the six languages are Pulaar, Wolof, Lingala, Venda, Shimaore and Bambara. This is becoming truer as translation software still concentrates mostly on hegemonic European languages, thereby contributing to the structural undermining and erasure of entire ways of thinking, discussing and sharing – about digital technologies or anything else – via digital means. Losing language.

We know that speaking one language is often perceived as more valuable than speaking six languages, if that one language is English and the six languages are Pulaar, Wolof, Lingala, Venda, Shimaore and Bambara.

In one of the break-out sessions of the Talking Digital workshop, called “If I were an Algorithm”, we also used creative writing methods to tackle the question of subjective perception when creating an algorithm. The participants in each break-out room had to decide on a task they all disliked doing but found useful, and then determine separately what steps were necessary to complete the task and how to judge when it had been completed in a satisfactory manner.

The exercise allowed us to explore questions such as: who decides what type of algorithm would be useful? Who decides when the algorithm is functioning and when it is malfunctioning? And so on. Every algorithm is the result of a desire, expressed by a person or a group of people, to no longer do a certain task – a task they nonetheless think should still be carried out. The workshop participants, for example, wished for an algorithm that would make them coffee in the morning, grate cheese for them, or take care of their taxes.
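
To make that concrete, here is a minimal, purely illustrative sketch – not anything produced at the workshop – of what one such desire, “make me coffee in the morning”, might look like once written down as an algorithm: a sequence of steps someone decided were necessary, plus a rule someone decided counts as done. All the names, steps and thresholds below are invented for illustration.

```python
# Illustrative sketch only: one hypothetical "make me coffee" desire,
# written as explicit steps plus a subjective completion check.
# Every value below is a choice made by whoever writes the algorithm.

from dataclasses import dataclass


@dataclass
class Coffee:
    strength: float  # 0.0 (plain water) to 1.0 (espresso-like)
    hot: bool


def make_morning_coffee() -> Coffee:
    """The steps someone decided were 'necessary' to complete the task."""
    grind = 0.7             # how much coffee to grind: a human choice
    water_temperature = 94  # degrees Celsius: another human choice
    return Coffee(strength=grind, hot=water_temperature > 80)


def is_satisfactory(coffee: Coffee, preferred_strength: float = 0.6) -> bool:
    """Who decides when the task is done 'well'? Whoever sets this rule."""
    return coffee.hot and coffee.strength >= preferred_strength


if __name__ == "__main__":
    cup = make_morning_coffee()
    print("Satisfactory?", is_satisfactory(cup))
```

Even in this toy example, every step and every threshold encodes somebody’s preference – which is exactly where the next question comes in.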

If we think of algorithms as a world of desires, we might want to ask, whose desires are considered worthy and whose desires are being ignored?

This arguably goes beyond the question of biases. By understanding algorithms through the prism of affordances and affects, we move away from the idea that machines are objective, neutral and purely rational because they are free from emotions. Similarly, we must use language that allows us to recognise our shared humanity, our love and passions, even when “talking digital”. This is not an easy task, but it is essential to organising successfully for digital rights and justice.

Image by Alina Constantin / Better Images of AI / Handmade A.I / CC-BY 4.0