A digital future for everyone

By Nani Jansen Reventlow, 26th October 2019

On 26 October 2019, DFF Director Nani Jansen Reventlow delivered the lecture “An inclusive digital age” at the Brainwash Festival. This is a transcript of that talk.


Our digitised lives and the reproduction of power structures through technology

Our lives are increasingly digitised. We use technology to handle our finances: we pay for groceries using an electronic wallet, split the bill in a café or bar by beaming money to our friends, and manage our bills via apps and other online solutions. We find a romantic partner by swiping left or right on an app, and increasingly access essential services such as healthcare, public transport and insurance with the help of technology. The line between what is “offline” and what is “online” –– to the extent the distinction ever really existed –– is blurring to the point of invisibility.

While technology holds much promise to help us change things for the better by offering increased efficiency, precision, and good old-fashioned convenience, it also has the potential to exacerbate what is wrong in our societies. Technology can reproduce and even amplify the power structures present in our society, most notably when it comes to issues of ethnicity, gender, and ability. It also has the potential to amplify social and economic inequality, as we’ll consider in a moment. As technological developments succeed one another ever more rapidly and the use of technology in our lives keeps accelerating, this is something we need to examine carefully and respond to before we reach a point of no return.

What kind of power structures should we think about? And how could they be reproduced by technology?

One example is the way Google search engine results reinforce racism. Professor Safiya Noble has written extensively about the way Google image searches for terms like “woman” or “girl” produce images that are for the most part thin, able-bodied and white. Here, as a preview of what we’ll talk about in a moment when we look at the cause of these problems, it is good to keep in mind that in 2018, the year Noble’s book “Algorithms of Oppression” was published, black women made up only 1.2 percent of Google’s workforce.

Another example is how CAPTCHAs – short for “completely automated public Turing test to tell computers and humans apart”, the tests you get on websites after ticking the “I am not a robot” box – are making the internet increasingly inaccessible for disabled users. Artificial intelligence learns from the way internet users solve these tests, which often have you select all images with traffic lights and storefronts, or make you type out a nonsensical set of warped letters. As the AI gets more sophisticated, the CAPTCHAs need to stay ahead of clever bots that can read text as well as humans can. This puts a huge disadvantage on those without perfect sight or hearing, who rely on exactly that type of technology to make the internet accessible to them, for example by using software that converts text into audio.

The list of examples is long –– and there is a talk by Caroline Criado Perez at this festival highlighting how design, including technological design, harms women specifically –– and at this stage the list covers just about every area of our lives where technology is employed without proper consideration.

At the organisation I founded, the Digital Freedom Fund, we say that “digital rights are human rights.” With this framing, we try to underline that the full spectrum of our human rights can be engaged in the digital sphere and that they need to be protected and safeguarded in that context. This means that when we talk about technology and human rights, we are talking about more than privacy and free speech alone. Other civil and political rights are just as important and relevant when looking at human rights and tech, such as the right to a fair trial and the right to free association. Economic, social and cultural rights, such as the right of access to healthcare and the right to work, are also impacted by technology.

The use of technology, human rights and discrimination: facial recognition as an example

So how does the use of technology impact our human rights and how can it enable discrimination? Let’s take a closer look at these questions, using the issue of facial recognition as an example.

Facial recognition is a technology that scans our faces, produces a biometric “map” of our features, and matches those maps against a database of people. It is used by both private and public bodies to verify people’s identities, identify people who need to be put under stricter surveillance by security services, and to find wanted criminals, vulnerable individuals, or missing persons.
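
To make this a bit more concrete, here is a minimal sketch in Python of that matching step. It is purely illustrative: the embeddings are random stand-ins, the names and the similarity threshold are invented, and real systems derive their biometric “maps” from trained neural networks rather than random numbers.

```python
# Illustrative sketch of facial recognition matching: a face is reduced to a
# numeric "map" (an embedding vector) and compared against a database of
# stored embeddings. Random vectors stand in for real embeddings here.
import numpy as np

rng = np.random.default_rng(seed=0)

# Hypothetical database: name -> stored biometric "map" (embedding vector)
database = {name: rng.normal(size=128) for name in ["alice", "bob", "carol"]}

def match_face(probe_embedding, database, threshold=0.6):
    """Return the closest identity if it is similar enough, else None."""
    best_name, best_score = None, -1.0
    for name, stored in database.items():
        # Cosine similarity between the probe and the stored embedding
        score = np.dot(probe_embedding, stored) / (
            np.linalg.norm(probe_embedding) * np.linalg.norm(stored)
        )
        if score > best_score:
            best_name, best_score = name, score
    # Only report a match above the similarity threshold
    return best_name if best_score >= threshold else None

# A probe close to "alice"'s stored map should match; random noise should not.
probe = database["alice"] + rng.normal(scale=0.1, size=128)
print(match_face(probe, database))                  # likely "alice"
print(match_face(rng.normal(size=128), database))   # likely None
```

The threshold is the crucial design choice: set it too loosely and the system will report matches for people who are not in the database at all –– the “false positives” we will return to below.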

The collecting and processing of biometric data obviously raises many privacy concerns. But the right to privacy is not the only human right engaged when facial recognition technology is used. I can give you the following three examples:

The first example of a human right that is implicated when facial recognition technology is used is the presumption of innocence. Everyone has the right to be presumed innocent until it has been proven that they were involved in a crime. When facial recognition technology is used, everyone gets monitored. Every single person who passes the camera’s eye will have their features recorded, analysed, and sometimes even stored. No distinction is made between people out for their weekend shopping and a potential criminal who is up to no good.

A second example of a human right that can be violated by the use of facial recognition is the right to be free from discrimination. Facial recognition software has proven not to work as precisely for non-white people and women, resulting in a much higher rate of false positives when running biometric data against a database than is the case for white, male subjects. A false positive means the software finds a match where there is none. So if the system is trying to find criminals, it will wrongly identify innocent people as criminals far more often when it concerns women and/or people of colour. Anyone who is not male and white runs a significantly higher risk of being mistaken for someone else, including someone wanted by law enforcement for a crime.
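
To make that term concrete, here is a minimal sketch of how a false positive rate is calculated per group. The numbers are invented for illustration only; they are not measurements of any real facial recognition system.

```python
# Illustrative calculation of per-group false positive rates. A false positive
# occurs when the system claims a match for someone who is not actually the
# person in the database. All figures below are made up for demonstration.
from collections import defaultdict

# (group, system_claimed_match, actually_the_same_person)
observations = [
    ("white men", True, True), ("white men", False, False),
    ("white men", False, False), ("white men", False, False),
    ("women of colour", True, False), ("women of colour", True, True),
    ("women of colour", True, False), ("women of colour", False, False),
]

false_positives = defaultdict(int)
negatives = defaultdict(int)

for group, claimed_match, is_same_person in observations:
    if not is_same_person:          # ground truth: not actually a match
        negatives[group] += 1
        if claimed_match:           # ...but the system reports a match anyway
            false_positives[group] += 1

for group in negatives:
    rate = false_positives[group] / negatives[group]
    print(f"{group}: false positive rate = {rate:.0%}")
```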

The third and last example of a right that facial recognition technology can harm is the right to freedom of assembly and association. This includes our right to gather in the streets and express our views on issues we find important, such as climate change, women’s rights, or the way our government is dealing with migrants. Facial recognition technology is often deployed at public protests and demonstrations, meaning that everyone participating runs the risk of being captured and ending up in a database. This can be a deterrent for attending protests in the first place, which in turn has a negative impact on our democracy.

How does technology enable unequal treatment?

An important question to ask is: how did we get here? Technology is often presented as the perfect solution to so many of our problems, but the examples we looked at so far illustrate that much of the tech that is supposed to assist us has significant problems and can actually have a negative impact on our lives by trampling on our hard-won human rights. How is this possible?

First, technology is mostly built by white men. In her book “Weapons of Math Destruction”, mathematician and author Cathy O’Neil writes that “A model’s blind spots reflect the judgments and priorities of its creators.” In other words: the way technology is coded reflects the assumptions of the people designing it. Most of the technology we encounter in our daily lives comes from Silicon Valley in the United States, where the majority of the big tech companies are based. Consider that an analysis of 177 Silicon Valley companies by the investigative journalism website Reveal showed that in 2018 ten large technology companies did not employ a single black woman, three had no black employees at all, and six did not have a single female executive. Against that backdrop, it is perhaps not so difficult to imagine that facial recognition software has difficulty working as precisely for non-white, non-male people as it does for people who resemble the majority of the world’s tech industry. This idea of designer bias is illustrated by facial recognition software created outside the Silicon Valley bubble: facial recognition software built in Asia is much better at recognising Asian faces.

Second, technology is built and trained on data that can already contain bias or discrimination. For example, the racial disparities in the United States justice system are well documented. A survey of data from the U.S. Sentencing Commission in 2017 found that when black men and white men commit the same crime, black men on average receive a sentence that is almost 20 percent longer, even when the research controlled for variables such as age and criminal history. A 2008 analysis found that black defendants with multiple prior convictions are 28 percent more likely to be charged as “habitual offenders” than white defendants with similar criminal records. Innocent black people are also 3.5 times more likely than white people to be wrongly convicted of sexual assault and 12 times more likely to be wrongly convicted of drug crimes. These are just a few examples –– the list of similar research findings goes on and on.

If you then use those data to develop and train, say, software that helps judges determine sentences for defendants, it is not surprising that this software will be geared towards replicating those historical patterns. Software based on data from an intrinsically racist criminal justice system will provide sentencing recommendations that reflect that racism.
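
A small, deliberately simplified sketch shows how directly this happens. The data below are synthetic and assume, purely for illustration, the roughly 20 percent sentencing gap mentioned above; any real sentencing tool is far more complex, but the mechanism is the same: a model fitted to biased outcomes learns the bias back.

```python
# Minimal sketch: a model trained on historically biased sentences reproduces
# the bias in its recommendations. The data are synthetic, not real records.
import numpy as np

rng = np.random.default_rng(seed=1)
n = 1000
severity = rng.integers(1, 6, size=n)      # offence severity on a 1-5 scale
is_black = rng.integers(0, 2, size=n)      # 1 = black defendant (synthetic)

# Historical sentences in months: ~20% longer for black defendants
sentence = severity * 12 * (1 + 0.2 * is_black) + rng.normal(0, 2, size=n)

# Fit a plain least-squares model on the biased historical data
X = np.column_stack([np.ones(n), severity, is_black])
coef, *_ = np.linalg.lstsq(X, sentence, rcond=None)

# The "recommendation tool" now adds extra months purely for the group flag
white_case = np.array([1, 3, 0]) @ coef
black_case = np.array([1, 3, 1]) @ coef
print(f"recommended sentence, white defendant: {white_case:.1f} months")
print(f"recommended sentence, black defendant: {black_case:.1f} months")
```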

This brings us to a third factor: feedback loops. Systems are rewarded for reflecting and reinforcing existing datasets, which in turn reflect societal bias and a practice of discrimination. An example of how this manifests itself is performance reviews. What if an algorithm decided whether you were good at your job? Unfortunately, this is not a hypothetical question: a school district in Houston, in the United States, used software to determine how well its teachers were performing. Teachers who receive a “bad” score get dismissed from their jobs. The dismissal makes the system assume that it got its assessment right, which entrenches the values and objectives that led to the original “bad” classification and pushes the system towards making similar decisions in the future.
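
The following toy simulation, which is not modelled on any real performance-review product, illustrates the loop: a small, unjustified scoring penalty against one group produces more dismissals from that group, the dismissals are treated as confirmation that the scoring was right, and the penalty grows round after round.

```python
# Toy feedback loop: the system's own decisions become the "evidence" it is
# adjusted on, so an initial skew hardens over time. Purely illustrative.
import random

random.seed(0)

# Group "b" starts with a small, unjustified penalty in the system's scoring.
penalty = {"a": 0.0, "b": 0.05}

for round_number in range(1, 6):
    # Workers have a true quality score drawn uniformly between 0 and 1
    workers = [(random.choice("ab"), random.random()) for _ in range(5000)]
    scored = [(group, quality - penalty[group]) for group, quality in workers]

    # Dismiss the bottom 10% by system score
    scored.sort(key=lambda pair: pair[1])
    dismissed = scored[:500]
    share_b = sum(1 for group, _ in dismissed if group == "b") / len(dismissed)

    # Feedback: dismissals are read as proof the scoring was right, so the
    # penalty against the over-dismissed group grows for the next round
    penalty["b"] += 0.1 * (share_b - 0.5)

    print(f"round {round_number}: share of group b among dismissals = {share_b:.0%}")
```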

Fourth, in spite of the issues we just looked at, there is an assumption that technology is “neutral”: the computer is considered smart, so it must be right. This faith in technology and its supposed flawlessness means that there is little to no second-guessing when the “computer says no”, or when the sentencing software recommends a stiff prison sentence for a black first-time offender who committed a minor misdemeanour. A lack of awareness of the fallibility of technology leads us to over-rely on it, rather than seeing it as one of the many tools at our disposal that we can use to reach a balanced outcome.

Another factor is the wish of engineers to see technology as something neutral, something that is stand-alone and not part of the political and social systems and structures it operates in. Bill Gates famously said that “technology is just a tool”, emphasising that it is mostly what we do with it that matters. This is too narrow a vision. Rather than a lack of awareness, I would call this a responsibility deficit: the technology we design, and how we design it, has real-life implications, and insisting otherwise is no less than an act of wilful ignorance and defiance. We’ll look at some of the options for addressing this responsibility vacuum in a moment.

Finally, how and where we deploy technology is a choice. And this choice is often made to the detriment of the groups in our society that are the most vulnerable. In her book “Automating Inequality”, which examines how the use of automated decision-making in social service programs creates a “digital poorhouse”, Virginia Eubanks makes a compelling argument that technology is usually rolled out against the most vulnerable groups in society before being applied to the public at large. The United Nations Special Rapporteur on extreme poverty and human rights, Philip Alston, has similarly observed that the poor are often the “testing ground” for the government’s introduction of new technologies. What we have here is a double bind: the decision about whom to deploy the technology against is in and of itself discriminatory, and this discrimination then gets exacerbated by the use of technology with the types of flaws we’ve just looked at.

Fighting for our digital rights and challenging discrimination: what can be done?

Are we lost? Should we give up as a society and leave it to the robots to take over? No, not just yet. There are a number of things that can be done at the industry, state and individual level to push back and challenge the negative impacts that technology can have on the goal of an equal and equitable society.

To start with industry: fixing a number of the problems flagged above lies squarely in the hands of those designing our technology: designer bias, the uncritical use of flawed data sets, and a failure to engage with the connection between tech and our societies. To create a better design climate, industry needs to reflect not just on its hiring practices –– i.e. not hiring mostly white men –– but also change its organisational culture and team composition at a more structural level. For example, design teams could include experts from the social sciences and human rights experts instead of just engineers.

This requires a fundamental shift in how the industry operates and a principled departure from the adage “move fast and break things”, the classic startup motto that has remained alive and well in big tech over the past decades. Stringent and actually implemented ethics frameworks should prevent companies from building evil products: an audit for potential human rights risks should take place throughout the development process, with correction mandated for negative human rights impacts.

Unfortunately, the current trend is that negative human rights effects are only addressed after a company comes under fire for a piece of bad technology.

One example is that of HireVue, which offers software solutions for recruitment and allows big companies such as Nike, Unilever, and Goldman Sachs to conduct a first round of job interviews without any human involved: applicants answer questions on camera and HireVue’s software analyses and rates candidates based on what people say, how they say it, and their body language. A reporter who analysed the software, however, discovered that the system’s rating reflected previous hiring decisions of managers, which have been proven to generally favour white, male candidates. Following a critical piece in Business Insider about this issue, HireVue published a statement explaining how it “carefully tests for potential bias against specific groups before, during, and after the development of a model”.

Companies should do this right from the beginning, not only when responding to a scandal; at the moment, it is not regular practice. There are some positive examples, however, such as Amazon scrapping the use of a recruitment algorithm that turned out to be down-ranking women applicants while the programme was still in its pilot phase.

Of course there is an important role to play for governments as well, which brings us to a second course of action. No country should allow anyone to build or use evil systems, nor use bad technology itself. Most countries have a decent framework for deciding what type of work and equipment they will buy from industry and what they will not: a set of procurement rules and similar laws is generally present in a country that values the rule of law. A starting point is making sure these existing laws and standards are applied in the context of technology as well. Transparency and procurement standards exist, and they should be adhered to whether a government is looking to build a new parking lot or buy software to assist the administration of social welfare.

In a perfect world, Human Rights Impact Assessments –– assessments to determine if and how the use of a certain piece of technology would affect human rights –– would be conducted for anything the State buys or uses. This, in turn, would also help compel companies to implement the practice. We are not quite there yet, but some positive signs can be seen across the globe. For example, a number of cities in the United States – San Francisco, Oakland and Somerville – have banned the use of facial recognition software, citing concerns about having their cities turn into a “police state”. The San Francisco ordinance imposing the ban also stipulates that all new surveillance equipment should be approved by city leaders, “to verify that mandated civil rights and civil liberties safeguards have been strictly adhered to.”

Third, where we know that our rights are being violated, we need to seek justice, through the courts or political action. The courts are important guarantors of our rights and freedoms, and that does not change with new advances in technology. In fact, one of the most important things, as technology advances, is to bring cases that help courts understand the implications for human rights. This means involving courts and judges in the debate and not leaving them behind.

We have already seen the courts take steps to safeguard people against the potential harms that can flow from biased and discriminatory technologies. In the case of State v. Loomis, the Supreme Court of Wisconsin said that a system called COMPAS, which produced assessment scores for recidivism – the likelihood that an offender will commit another crime – should be accompanied by disclaimers. Whether or not a defendant is likely to commit another crime can be an important factor for judges in determining what type of sentence to impose and for what duration. The algorithmic COMPAS system seemed like a wonderful aid in determining this likelihood, making such assessments more accurate and the judges’ lives easier. However, as an analysis by the investigative journalism website ProPublica demonstrated, COMPAS was almost twice as likely to falsely flag black defendants as being at higher risk of committing another crime when they would not go on to do so. White defendants, on the other hand, were much more likely than black defendants to be labelled as lower risk yet go on to commit other crimes.

Mr Loomis, who had been charged with driving a stolen vehicle and fleeing from police, challenged the use of the COMPAS score at his sentencing, arguing this was a violation of his due process rights. While his case did not lead to an all-out dismissal of the use of the system, it did send a clear signal that the technology should not be taken at face value and that it had limitations.

Last, but definitely not least, you and I, all of us, can fight back when our human rights are being violated. We need to demand transparency on when and how technology is being used in contexts where it can have an impact on our human rights. Only if we know when, how, and what type of technology is used in the many processes that deeply impact our lives –– healthcare, finances, employment, law enforcement: you name it –– can we challenge the unfairness that results from it. A lot of this has to do with what we just discussed on transparency and procurement standards, but we should never forget that we, as citizens, as individuals, have leverage here too, and we need to make our voices heard and clearly state our demands. We need to spur our elected representatives into action and, if need be, take our case to court and fight it to the highest level.

We are only at the beginning of what by all means should and must be the next frontier of our fight for equal rights, and for equal and equitable treatment before the law in all aspects of our lives.

Articulating litigation goals for challenging the collection of biometric data

By Alan Dahi, 24th October 2019

There were many productive sessions at the two-day meeting in Berlin on unlocking the strategic litigation opportunities of the GDPR, hosted by DFF in September 2019. One of these was on articulating litigation goals to challenge the collection of biometric data.

The GDPR defines biometric data as “personal data resulting from specific technical processing relating to the physical, physiological or behavioural characteristics of a natural person, which allow or confirm the unique identification of that natural person, such as facial images or dactyloscopic data” [emphasis added]. The GDPR sees biometric data as a special category of personal data, which enjoys particularly strict safeguards when it comes to its use.

At first glance, it is perhaps surprising that only personal data “resulting from” certain technical processing is covered by the definition, and not the underlying personal data itself. However, this approach can be understood against the background that photographs would otherwise qualify as biometric data. The special safeguards put in place by the GDPR against the unlawful use of “special categories of personal data”, such as biometric data, would then undermine the societally established – and typically accepted – widespread use of photographs.

While acknowledging that the GDPR permits the processing of biometric data, the session in September considered which uses of biometric data should be acceptable from a societal or moral perspective, independently of the current legislative reality. The opinions ranged from its use never being acceptable (at least until the societal ramifications are better understood and placed on a solid legal basis) to a more differentiated approach depending on the type of biometric data, the intended purpose, and the parties involved.

Ultimately, and in light of the current legislative framework that permits the limited use of biometric data, the group focused on evaluating the potential privacy harms the use of different types of biometric data may have across various scenarios.

The session came to the conclusion that biometric data based on facial recognition deserves a particular focus. This is because faces are, typically, readily visible; their collection for biometric purposes is very easy. Indeed, the collection of facial features can typically be done without the affected individual’s knowledge or awareness. Moreover, there are few practical ways for an individual to prevent such collection. This would be in contrast to the more physically intrusive collection of fingerprints, which generally requires the co-operation of the individual, and against which an individual would be in a better position to protect themselves or challenge the data processing.

Considering different scenarios surrounding the use of biometric data, the overall consensus was that the forced provision/collection of biometric data is generally unacceptable, particularly with regard to access to government and private services, as well as when it comes to the biometric data of children.

The session, which consisted of participants from five different organisations, ended with a clearer understanding of how to articulate litigation goals to challenge the collection of biometric data and with a practical road map on how to put these goals into action.

About the author: Alan Dahi is a data protection lawyer at noyb.

DFF and OXIL to dive into the future of digital rights at MozFest

By Stacie Hoffmann, 21st October 2019

Events over the past few years have highlighted the extent to which technology is becoming seamlessly integrated into our daily lives. The emergence of self-driving cars, automated systems, and everyday items supported by the Internet of Things (IoT) illustrates this, and examples of the impact this can have when things go wrong range from the Cambridge Analytica fallout to a number of attacks on our internet infrastructure.

The data, networks, and systems that underpin these technological advances are already impacting our digital rights. But, in the future, what might these rights look like if, for example, algorithmic “black boxes” are the tools that govern the world around us? How do we protect our rights and those of the most vulnerable in our society?

While many of us are engaged in fighting today’s battles for digital rights – and preparing for tomorrow’s – the Digital Freedom Fund (DFF) and Oxford Information Labs (OXIL) are mapping issues that digital rights defenders might come up against in five to ten years’ time and the potential strategies that could be adopted now to protect our digital rights in the future.

In September 2018, DFF hosted a “Future-proofing our Digital Rights” workshop in Berlin. The workshop resulted in a number of insights that were summarised in a post-event blog series, alongside which DFF published a set of essays that examined specific future risks to our digital rights and what we might do now to prepare ourselves for them.

Ingrida Milkaite and Eva Lievens (Ghent University, Belgium) looked at the risks posed to children with the increasing collection of their personal data through smart devices that are specifically created for them – the “Internet of Toys.” They noted that we could be pushing authorities to issue clearer guidance on how data protection law could be used to safeguard children’s rights. Sheetal Kumar (Global Partners Digital) explored the expected rise in IoT devices – with a forecast of 30 billion connected devices by 2023 – and the specific vulnerabilities this will expose individuals to when it comes to government hacking and cyberattacks. She suggested that civil society could document government agency hacking and the legal frameworks used to justify these actions, and observed that global norms could be relied on to limit the risks posed by the rise of IoT.

Steve Song (Mozilla Fellow) discussed the potential of a “splinternet” resulting not only from government initiatives (e.g. China’s Great Firewall), but also from companies’ ownership of the physical infrastructure underpinning the internet. Song noted that existing laws around competition, consumer rights, and data protection could be leveraged to secure a robust marketplace instead of a “splinternet” monopolised by large platforms. Stacie Hoffmann (OXIL) highlighted a number of evolving digital divides, shaped not only by access to technologies but also by their policy environments, and noted that what gets measured gets done. This was a call for meaningful data collection that can be used to support data-driven policy to prevent growing digital divides, alongside the need to support digital skills capacity across society. Iris Lapinski, now the Chairwoman of Apps for Good, discussed a future scenario in which artificial intelligence decides our rights. She considered three stages to managing the changes and challenges presented by this dystopian future.

A year on from these conversations, a number of these future threats have become a reality. In order to help prepare ourselves for the challenges that are still on the horizon, we would like to continue the conversation about what digital rights threats we may be facing in five to ten years’ time and what steps we could be taking now to get ready to fight or protect ourselves against them.

DFF and OXIL are very excited to bring this conversation to MozFest in London this week. The Future-Proofing our Digital Rights session will be held on Sunday 27 October 2019, between 11:00am and 12:00pm, at Ravensbourne University (Room 804 – Level 8). Anyone interested in looking ahead at the opportunities, threats and challenges related to digital rights and discussing how we can prepare ourselves for the future is welcome to join. The workshop will be interactive, and all areas of expertise and interests are welcome. We hope to see you there.

For those who cannot make it to MozFest, we plan to share some of the issues and discussions that emerge during the session in a blog series over the coming months. If you would like to share your own views in a blog post in this series, please do get in touch!

About the author: Stacie Hoffmann is a cyber security and policy expert at Oxford Information Labs who works at the intersection of technology and policy.