Lowering the threshold for strategic litigation on AI and human rights

By Nani Jansen Reventlow, 8th November 2019

DFF commenced its operations in 2017 with a strategy process. This process, which continues to this day, consisted of consulting all key stakeholders working on digital rights in Europe, asking them what their priorities were and how DFF could best support them. Following DFF’s first strategy meeting in February 2018, this process led to the formulation of three thematic focus areas for DFF’s work: first, advancing individuals’ ability to exercise their right to privacy; second, protecting and promoting the free flow of information online; and third, ensuring accountability, transparency and adherence to human rights standards in the design and use of technology.

The debate about “AI” – also framed as a debate about machine learning or automated decision making – has intensified in recent years. As foreshadowed by developments in 2018, AI and human rights was a much-debated topic at our 2019 strategy meeting, which brought together 48 organisations from across Europe working on digital rights.

As had been the case during the Virtual Strategy Design Jam we hosted on the use of AI in law enforcement in the run-up to the strategy meeting, the topic was actively debated, but we did not see a corresponding uptake of the issue when it came to litigation. In other words: there was a clear sense of urgency to address the potential negative impact the use of AI could have on human rights, and an interest in pursuing litigation to address this, but not many cases were brought. Following the discussions at our strategy meeting closely and listening to other input from the field, we saw that many litigators had difficulty identifying the issues on which to litigate and suitable entry points for doing so.

Meanwhile, the development and use of technology in all aspects of our lives keeps increasing, making the need to confront and challenge any negative human rights impacts that result from it ever more urgent. Strategic litigation can be an important instrument in this fight. In light of this, DFF is seeking to lower the threshold for litigators to step into this space and help safeguard our human rights when AI is at play.

This November, DFF is hosting a litigators’ meeting together with the AI Now Institute, building on their “Litigating Algorithms” series (see here and here for the meeting reports), which brought together litigators, academics and technologists to share experiences of litigating the impact of AI across a variety of contexts. The meeting, which will be hosted at Mozilla’s Berlin office, will bring together US and European litigators with experience in challenging algorithmic decision making through the courts, as well as those with an interest in doing so. Besides sharing best practices, participants will brainstorm new case ideas and identify concrete plans for next steps.

In October, DFF’s Legal Adviser joined forces with technologist Aurum Linh, a Mozilla Fellow, to work on a set of guides for building stronger litigation on AI and human rights – litigation that can set precedents ensuring greater transparency, accountability and adherence to human rights standards in the design and use of AI. The first guide will be aimed at demystifying litigation for digital rights activists, technologists and data scientists, who will often be at the forefront of identifying the situations and fact patterns that are ripe for AI-related human rights challenges through litigation. The second guide will be aimed at lawyers working across different practice areas – such as criminal, employment, immigration or election law – who could have clients whose rights have been violated by the development and use of AI. It will provide these legal practitioners with the minimum viable information they need to effectively identify and pursue legal claims challenging human rights violations caused by AI. The guides will be developed in regular consultation with the intended audiences and with organisations already looking at litigating on AI, to ensure the resources meet their needs. Watch this space for updates and learn how you can join the conversation.

Both strands of activity will build on each other and weave into DFF’s ongoing dialogue with the field. Following the November “European litigating algorithms” meeting, a report will be published in early spring to share lessons learned with the field. In February 2020, a dedicated consultation will be held to test the concepts of the litigation guides. All of this will feed into the publication of the guides in the second half of 2020 and an international meeting for litigators to share experiences of litigating on this topic across different regions.

… and, we hope, many exciting cases! We look forward to supporting some of the promising work that will be developed over the coming months and are always happy to hear from you and discuss your ideas.

Future-proofing our digital rights at MozFest

By Jonathan McCully, 29th October 2019

Over the weekend, the Mozilla Foundation held its tenth annual MozFest. The festival brought together educators, activists, technologists, researchers, artists, and young people to explore, discuss and debate how we can help secure “healthier Artificial Intelligence.”

On Sunday, the Digital Freedom Fund co-facilitated an interactive workshop with the Oxford Information Labs on “Future-Proofing our Digital Rights.” This hour-long workshop brought together lawyers, academics, technologists and digital rights activists to consider the digital rights battles that may lie ahead of us and what steps we can take in the short and medium term to help steer us towards a future we would like to see.

The session centred on three questions: what digital rights threats or challenges might we see in five years’ time? What do we want to see in our ideal future five years from now? And what actions should we take now to help mitigate the threats and challenges, and steer us towards that ideal future?

Participants identified a range of horizon threats and challenges, from the increased uptake of Artificial Intelligence and greater deployment of biometric detection systems, to a growing digital divide and political participation being conditioned on the giving away of personal data. A number of groups converged on the threat of tech companies becoming the lawmakers, increasingly writing the rules by which they themselves are to be regulated. Such a disempowerment of users, by moving digital rights issues out of public law and into the private law context, would make it harder to obtain redress for digital rights violations.

When talking about ideal futures, participants identified a number of scenarios, including the banning of facial recognition technologies, having a diversity of online platforms with no dominant players, securing proactive and well-resourced regulators that are not captured by industry, and promoting “white boxes” instead of “black boxes” – systems designed with accountability, transparency and contestability in mind.

So, what can we do now to help us prepare for these future threats and put us on track towards a future that better protects our digital rights? One group discussed the role that the education sector could play in improving AI literacy and awareness. Another group talked about how we could push for law reform that ensures that publicly funded technology cannot be privatised and benefit from trade secret protections. Participants also noted that law moves slower than technological developments, and that we should push for more forward-looking laws that safeguard against new technologies being applied in legal or regulatory gaps.

These conversations build upon the work we did last year on future-proofing our digital rights, which included an essay series that you can read here and a workshop we held in Berlin last September. It was exciting to bring this topic to a new audience, and it brought a number of interesting perspectives to the fore. If you would like to join in the conversation or write a guest blog about how we can future-proof our digital rights, get in touch! We would love to hear from you.

A digital future for everyone

By Nani Jansen Reventlow, 26th October 2019

On 26 October 2019, DFF Director Nani Jansen Reventlow delivered the lecture “An inclusive digital age” at the Brainwash Festival. This is a transcript of that talk.

Our digitised lives and the reproduction of power structures through technology

Our lives are increasingly digitised. We use technology to handle our finances: we pay for groceries using an electronic wallet, split the bill in a café or bar by beaming money to our friends, and manage our bills via apps and other online solutions. We find a romantic partner by swiping left or right on an app, and increasingly access essential services such as healthcare, public transport and insurance with the help of technology. The line between offline and online –– to the extent the distinction ever really existed –– is blurring to the point of invisibility.

While technology holds much promise to help us change things for the better by offering increased efficiency, precision, and good old-fashioned convenience, it also has the potential to exacerbate what is wrong in our societies. Technology can reproduce and even amplify the power structures present in our society, most notably when it comes to ethnicity, gender, and ability. It also has the potential to amplify social and economic inequality, as we’ll consider in a moment. As technological developments succeed one another rapidly and the use of technology in our lives keeps expanding, this is something we need to examine carefully and respond to before we reach a point of no return.

What kind of power structures should we think about? And how could they be reproduced by technology?

One example is the way Google search engine results reinforce racism. Professor Safiya Noble has written extensively about the way Google image searches for terms like “woman” or “girl” produce images that are for the most part thin, able-bodied and white. As a preview of what we’ll discuss in a moment when we look at the cause of these problems, it is good to keep in mind that in 2018, the year Noble’s book “Algorithms of Oppression” was published, Google’s workforce included only 1.2 percent black women.

Another example is how CAPTCHAs – short for “completely automated public Turing test to tell computers and humans apart”, the tests you get on websites after ticking the “I am not a robot” box – are making the internet increasingly inaccessible for disabled users. Artificial Intelligence learns from the way internet users solve these tests, which often have you select all images with traffic lights and storefronts, or make you type out a nonsensical set of warped letters. As the AI gets more sophisticated, the CAPTCHAs need to stay ahead of clever bots that can read text as well as humans can. This puts people without perfect sight or hearing at a huge disadvantage: they rely on exactly the kind of technology the tests are designed to defeat, for example software that converts text into audio, to make the internet accessible to them.

The list of examples is long –– and there is a talk by Caroline Criado Perez at this festival highlighting how design, including technological design, harms women specifically –– and at this stage the list covers just about every area of our lives where technology is employed without proper consideration.

At the organisation I founded, the Digital Freedom Fund, we say that “digital rights are human rights.” With this framing, we try to underline that the full spectrum of our human rights can be engaged in the digital sphere and needs to be protected and safeguarded in that context. This means that when we talk about technology and human rights, we are talking about more than privacy and free speech alone. Other civil and political rights, such as the right to a fair trial and the right to free association, are just as important and relevant. Economic, social and cultural rights, such as the right of access to healthcare and the right to work, are also impacted by technology.

The use of technology, human rights and discrimination: facial recognition as an example

So how does the use of technology impact our human rights and how can it enable discrimination? Let’s take a closer look at these questions, using the issue of facial recognition as an example.

Facial recognition is a technology that scans our faces, produces a biometric “map” of our features, and matches those maps against a database of people. It is used by both private and public bodies to verify people’s identities, to identify people who are to be put under stricter surveillance by security services, and to find wanted criminals, vulnerable individuals, or missing persons.
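
To make the matching step more concrete, here is a minimal sketch of how such a system compares a captured face against a watchlist. The feature vectors, the watchlist entries and the similarity threshold are all invented for illustration; a real deployment would compute the biometric “map” with a trained face-embedding model.

```python
# Minimal sketch of the matching step in a facial recognition system.
# The "biometric maps" here are invented toy vectors; a real system would
# derive them from face images with a trained embedding model.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Hypothetical watchlist: name -> stored biometric "map"
watchlist = {
    "person_A": [0.91, 0.10, 0.33, 0.05],
    "person_B": [0.12, 0.88, 0.45, 0.20],
}

def identify(probe, threshold=0.95):
    """Return the best watchlist match above the threshold, or None."""
    best_name, best_score = None, 0.0
    for name, stored in watchlist.items():
        score = cosine_similarity(probe, stored)
        if score > best_score:
            best_name, best_score = name, score
    return (best_name, best_score) if best_score >= threshold else (None, best_score)

# A passer-by whose face map merely resembles person_A can still clear the
# threshold: that is a false positive.
print(identify([0.89, 0.12, 0.35, 0.06]))
```

The threshold is where the trouble starts: set it too loosely and strangers are “matched”, set it too strictly and genuine matches are missed. That trade-off becomes a human rights problem when the errors fall unevenly across groups, as we will see below.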

The collecting and processing of biometric data obviously raises many privacy concerns. But privacy is not the only human right engaged when facial recognition technology is used. I can give you the following three examples:

The first example of a human right that is implicated when facial recognition technology is used is the presumption of innocence. Everyone has the right to be presumed innocent until it has been proven that they were involved in a crime. When facial recognition technology is used, everyone gets monitored. Every single person who passes the camera’s eye will have their features recorded, analysed, and sometimes even stored. No distinction is made between people out for their weekend shopping and a potential criminal who is up to no good.

A second example of a human right that can be violated by the use of facial recognition is the right to be free from discrimination. Facial recognition software has been shown to work less precisely on non-white people and women, resulting in a much higher rate of false positives when running their biometric data against a database than is the case for white, male subjects. A false positive means that the software finds a match where there is none. So if the system is trying to find criminals, it will wrongly identify innocent people as criminals much more often when it concerns women and/or people of colour. This lack of precision means that anyone who is not male and white runs a significantly higher risk of being mistaken for someone else, including someone wanted by law enforcement for a crime.
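
To see what “a much higher rate of false positives” means in practice, the short sketch below computes the false match rate per demographic group from a handful of invented match results; the numbers are purely illustrative and not drawn from any real benchmark.

```python
# Illustrative computation of group-wise false match rates.
# Each record: (group, system_said_match, is_actually_same_person).
# The data below is invented solely to show the calculation.
results = [
    ("white_male", True, True), ("white_male", False, False),
    ("white_male", False, False), ("white_male", False, False),
    ("woman_of_colour", True, True), ("woman_of_colour", True, False),
    ("woman_of_colour", True, False), ("woman_of_colour", False, False),
]

def false_match_rate(records, group):
    """Share of genuinely non-matching faces the system wrongly 'matched'."""
    non_matches = [r for r in records if r[0] == group and not r[2]]
    false_matches = [r for r in non_matches if r[1]]
    return len(false_matches) / len(non_matches)

for group in ("white_male", "woman_of_colour"):
    print(group, false_match_rate(results, group))
# With this toy data: 0.0 for white_male vs. roughly 0.67 for
# woman_of_colour, i.e. the same software misidentifies one group far
# more often than the other.
```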

The third and last example of a right that facial recognition technology can harm is the right to freedom of assembly and association. This includes our right to gather in the streets and express our views on issues we find important, such as climate change, women’s rights, or the way our government is dealing with migrants. Facial recognition technology is often deployed at public protests and demonstrations, meaning that everyone participating runs the risk of being captured and ending up in a database. This can deter people from attending protests in the first place, which in turn has a negative impact on our democracy.

How does technology enable unequal treatment?

An important question to ask is: how did we get here? Technology is often presented as the perfect solution to so many of our problems, but the examples we have looked at so far illustrate that much of the tech that is supposed to assist us has significant problems and can actually have a negative impact on our lives by trampling on our hard-won human rights. How is this possible?

First, technology is mostly built by white men. In her book “Weapons of Math Destruction”, mathematician and author Cathy O’Neil says that “A model’s blind spots reflect the judgments and priorities of its creators.” In other words: the way technology is coded reflects the assumptions of the people designing it. Most of the technology we encounter in our daily lives comes from Silicon Valley in the United States, where the majority of the big tech companies are based. Consider that an analysis of 177 Silicon Valley companies by the investigative journalism website Reveal showed that ten large technology companies did not employ a single black woman in 2018, that three had no black employees at all, and that six did not have a single female executive. Against that background, it is perhaps not so difficult to imagine why facial recognition software has difficulty working as precisely with non-white, non-male faces as it does with people who resemble the majority of the world’s tech industry. This idea of designer bias is further illustrated by facial recognition software created outside the Silicon Valley bubble: software built in Asia is much better at recognising Asian faces.

Second, technology is built and trained on data that can already contain bias or discrimination. For example, the racial asymmetry in the United States justice system is well documented. A survey of data from the U.S. Sentencing Commission in 2017 found that when black men and white men commit the same crime, black men on average receive a sentence that is almost 20 percent longer, even when the research controlled for variables such as age and criminal history. A 2008 analysis found that black defendants with multiple prior convictions are 28 percent more likely to be charged as “habitual offenders” than white defendants with similar criminal records. Innocent black people are also 3.5 times more likely than white people to be wrongly convicted of sexual assault and 12 times more likely to be wrongly convicted of drug crimes. These are just a few examples –– the list of similar research findings goes on and on.

If you then use those data to develop and train, say, software that helps judges determine sentences for defendants, it is not surprising that this software will be geared towards replicating those historical patterns. Software based on data from an intrinsically racist criminal justice system will provide sentencing recommendations that reflect that racism.
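
As a stylised illustration of how that replication happens, the sketch below “trains” a deliberately naive sentencing recommender on invented historical records in which one group received roughly 20 percent longer sentences for the same offence. Because the model only learns the historical averages, it reproduces the disparity; no real dataset or production system is implied.

```python
# Stylised illustration: a model fitted to biased historical data
# reproduces that bias in its recommendations. All data is invented.
from collections import defaultdict

# (offence, defendant_group, sentence_in_months) - historical records
history = [
    ("drug_offence", "group_a", 24), ("drug_offence", "group_a", 26),
    ("drug_offence", "group_b", 20), ("drug_offence", "group_b", 21),
]

# "Training": memorise the average historical sentence per (offence, group).
totals, counts = defaultdict(float), defaultdict(int)
for offence, group, months in history:
    totals[(offence, group)] += months
    counts[(offence, group)] += 1
model = {key: totals[key] / counts[key] for key in totals}

def recommend(offence, group):
    return model[(offence, group)]

# Identical offence, different group, different recommendation:
print(recommend("drug_offence", "group_a"))  # 25.0 months
print(recommend("drug_offence", "group_b"))  # 20.5 months
# Real systems rarely use group labels directly, but proxy variables
# (postcode, prior arrests) can carry the same historical signal.
```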

This brings us to a third factor: feedback loops. Systems are rewarded for reflecting and reinforcing existing datasets, which in turn reflect societal bias and a practice of discrimination. One example of how this manifests itself is performance reviews. What if an algorithm decided whether you were good at your job? Unfortunately, this is not a hypothetical question: a school district in Houston, in the United States, used software to determine how well its teachers were performing. Teachers who receive a “bad” score get dismissed from their jobs. This makes the system assume that it got its assessment right, which entrenches the values and objectives that led to the original “bad” classification and further strengthens the system in making similar decisions in the future.
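
A toy simulation can show how such a loop feeds on itself. In the sketch below, a hypothetical scoring system flags the lowest-scoring workers, treats every dismissal as confirmation that the flag was right, and recalibrates its cut-off against the higher-scoring survivors; the scores, threshold and update rule are all invented for illustration.

```python
# Toy simulation of a feedback loop in automated performance scoring.
# Scores and the update rule are invented purely for illustration.
import random

random.seed(1)
workers = {f"worker_{i}": random.uniform(0.0, 1.0) for i in range(10)}
threshold = 0.3  # scores below this are flagged as "bad"

for round_no in range(1, 4):
    flagged = [name for name, score in workers.items() if score < threshold]
    for name in flagged:
        del workers[name]  # flagged workers are dismissed
    # The dismissals are never questioned; the system treats the now
    # higher-scoring survivor pool as confirmation it was right, and
    # recalibrates its "bad" cut-off against that pool.
    mean_survivor = sum(workers.values()) / len(workers)
    threshold = 0.9 * mean_survivor
    print(f"round {round_no}: dismissed {len(flagged)}, "
          f"new threshold {threshold:.2f}, {len(workers)} workers left")
# Each purge raises the bar for those who remain: with this toy data the
# system dismisses workers in every round, even though nobody's underlying
# score ever changed.
```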

Fourth, in spite of the issues we just looked at, there is an assumption that technology is “neutral”: the computer is considered smart, so it must be right. This faith in technology and its supposed flawlessness means that there is little to no second-guessing when the “computer says no”. Or when the sentencing software recommends a stiff prison sentence for a black first-time offender who committed a minor misdemeanour. A lack of awareness of the fallibility of technology leads us to over-rely on it, rather than seeing it as one of the many tools at our disposal that we can use to reach a balanced outcome.

Another factor is the wish of engineers to see technology as something neutral, something stand-alone that is not part of the political and social systems and structures it operates in. Bill Gates famously said that “technology is just a tool”, emphasising that it is mostly what we do with it that matters. This is too narrow a vision. Rather than a lack of awareness, I would call this a responsibility deficit: the technology we design, and how we design it, has real-life implications, so insisting otherwise is no less than an act of wilful ignorance and defiance. We’ll look at some of the options for addressing this responsibility vacuum in a moment.

Finally, how and where we deploy technology is a choice. And this choice is often made to the detriment of the most vulnerable groups in our society. In her book “Automating Inequality”, which examines how the use of automated decision-making in social service programs creates a “digital poorhouse”, Virginia Eubanks makes a compelling argument that technology is usually rolled out against the most vulnerable groups in society before being applied to the public at large. The United Nations Special Rapporteur on extreme poverty and human rights, Philip Alston, has similarly observed that the poor are often the “testing ground” for the government’s introduction of new technologies. What we have here is a double bind: the decision about whom to deploy the technology against is in and of itself discriminatory, and this discrimination is then exacerbated by the kinds of flaws in the technology we have just looked at.

Fighting for our digital rights and challenging discrimination: what can be done?

Are we lost? Should we give up as a society and leave it to the robots to take over? No, not just yet. There are a number of things that can be done at the industry, state and individual level to push back against and challenge the negative impacts technology can have on the goal of an equal, equitable society.

To start with industry: a number of the problems flagged above lie squarely in the hands of those designing our technology to fix: designer bias, the uncritical use of flawed data sets, and a failure to engage with the connection between tech and our societies. To create a better design climate, industry needs not just to reflect on its hiring practices –– i.e. not hiring mostly white men –– but also to change its organisational culture and team composition at a more structural level. For example, design teams could include experts from the social sciences and human rights experts instead of just engineers.

This requires a fundamental shift in how the industry operates and a principled departure from the adage “move fast and break things”, the classic startup motto that has remained alive and well in big tech over the past decades. Stringent and actually implemented ethics frameworks should prevent companies from building evil products: an audit for potential human rights risks should take place throughout the development process and mandate correction for negative human rights impacts.

Unfortunately, the current trend is that negative human rights effects are only addressed after a company comes under fire for a piece of bad technology.

One example is that of HireVue, which offers software solutions for recruitment and allows big companies such as Nike, Unilever, and Goldman Sachs to conduct a first round of job interviews without any human involvement: applicants answer questions on camera and HireVue’s software analyses and rates candidates based on what people say, how they say it, and their body language. A reporter who analysed the software, however, discovered that the system’s ratings reflected managers’ previous hiring decisions, which have been shown to generally favour white, male candidates. Following a critical piece in Business Insider about this issue, HireVue published a statement explaining how it “carefully tests for potential bias against specific groups before, during, and after the development of a model”.

Companies should do this right from the beginning and not only when responding to a scandal; this is not regular practice at the moment, though there are some positive examples, such as Amazon scrapping a recruitment algorithm that turned out to be down-ranking women applicants while the programme was still in its pilot phase.

Of course there is an important role to play for governments as well, which brings us to a second course of action. No state should allow anyone to build or use evil systems, nor should it use bad technology itself. Most countries have a decent framework for deciding what type of work and equipment they will buy from industry and what they will not: a set of procurement rules and similar laws is generally present in a country that values the rule of law. A starting point is making sure these existing laws and standards are applied in the context of technology as well. Transparency and procurement standards exist, and they should be adhered to whether government is looking to build a new parking lot or buy software to assist the administration of social welfare.

In a perfect world, Human Rights Impact Assessments –– assessments to determine if and how the use of a certain piece of technology would affect human rights –– would be conducted for anything the State buys or uses. This, in turn, would also help compel companies to implement the practice. We are not quite there yet, but some positive signs can be seen across the globe. For example, a number of cities in the United States – San Francisco, Oakland and Somerville – have banned the use of facial recognition software, citing concerns about their cities turning into a “police state”. The San Francisco ordinance imposing the ban also stipulates that all new surveillance equipment must be approved by city leaders, “to verify that mandated civil rights and civil liberties safeguards have been strictly adhered to.”

Third, where we know that our rights are being violated, we need to seek justice, through the courts or political action. The courts are important guarantors of our rights and freedoms, and that does not change with new advances in technology. In fact, one of the most important things, as technology advances, is to bring cases that help courts understand the implications for human rights. This means involving courts and judges in the debate and not leaving them behind.

We have already seen the courts take steps to safeguard people against the potential harms that can flow from biased and discriminatory technologies. In the case of State v. Loomis, the Supreme Court of Wisconsin said that a system called COMPAS, which produced risk assessment scores for recidivism – the likelihood that an offender will commit another crime – should be accompanied by disclaimers. Whether or not a defendant is likely to commit another crime can be an important factor for judges in determining what type of sentence to impose and for what duration. The algorithmic COMPAS system seemed like a wonderful aid in determining this likelihood, making such assessments more accurate and the judges’ lives easier. However, as an analysis by the investigative journalism website ProPublica demonstrated, COMPAS wrongly flagged black defendants who would not go on to reoffend as higher risk almost twice as often as it did white defendants. Conversely, white defendants were much more likely than black defendants to be labelled as lower risk yet go on to commit other crimes.
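
ProPublica’s finding is, at its core, a comparison of error rates across groups. The sketch below runs that kind of comparison on a small invented dataset (not the actual COMPAS data): the false positive rate is the share of people who did not reoffend but were labelled high risk, and the false negative rate is the share who did reoffend but were labelled low risk.

```python
# Error-rate comparison of a risk score across two groups, in the spirit
# of the ProPublica analysis. The records below are invented, not COMPAS data.
records = [
    # (group, labelled_high_risk, actually_reoffended)
    ("black", True, False), ("black", True, False), ("black", True, True),
    ("black", False, False), ("black", False, True),
    ("white", True, True), ("white", False, False), ("white", False, False),
    ("white", False, True), ("white", False, True),
]

def error_rates(group):
    rows = [r for r in records if r[0] == group]
    did_not_reoffend = [r for r in rows if not r[2]]
    reoffended = [r for r in rows if r[2]]
    fpr = sum(1 for r in did_not_reoffend if r[1]) / len(did_not_reoffend)
    fnr = sum(1 for r in reoffended if not r[1]) / len(reoffended)
    return fpr, fnr

for group in ("black", "white"):
    fpr, fnr = error_rates(group)
    print(f"{group}: false positive rate {fpr:.2f}, false negative rate {fnr:.2f}")
# With this toy data the errors fall unevenly: one group bears most of the
# wrongful "high risk" labels, the other most of the missed reoffenders -
# the pattern ProPublica reported for COMPAS.
```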

Mr Loomis, who had been charged with driving a stolen vehicle and fleeing from police, challenged the use of the COMPAS score at his sentencing, arguing this was a violation of his due process rights. While his case did not lead to an all-out dismissal of the use of the system, it did send a clear signal that the technology should not be taken at face value and that it had limitations.

Last, but definitely not least, you and I, all of us, can fight back when our human rights are being violated. We need to demand transparency about when and how technology is being used in contexts where it can have an impact on our human rights. Only if we know which types of technology are used, when, and how, in the many processes that deeply impact our lives –– healthcare, finances, employment, law enforcement: you name it –– can we challenge the unfairness that results from them. A lot of this has to do with what we just discussed on transparency and procurement standards, but we should never forget that we, as citizens, as individuals, have leverage here too, and we need to make our voices heard and clearly state our demands. We need to spur our elected representatives into action and, if need be, take our case to court and fight it to the highest level.

We are only at the beginning of what by all means should and must be the next frontier of our fight for equal rights, and for equal and equitable treatment before the law in all aspects of our lives.