Making Digital Technologies Accessible to All

By Alejandro Moledo, 3rd December 2020

Colourful laptop opening against black background

On 3 December, we celebrate the International Day of Persons with Disabilities, a day on which the disability movement comes together to call for the realisation of the rights of over 15% of the population.

This call is particularly necessary this year, as persons with disabilities have been among the groups most affected by the consequences of the COVID-19 pandemic. Just as for non-disabled people, information and communication technologies (ICT) help us cope with this situation. The difference is that, for persons with disabilities, many tech products and services are still not accessible to us.

The UN Convention on the Rights of Persons with Disabilities, ratified by 182 parties, including the EU and all its Member States, is the first international human rights treaty to recognise access to ICT as a fundamental right to be ensured to persons with disabilities. Why is that necessary? We are early adopters of technologies because, when these are available, affordable and accessible to us, they are a gateway to social participation and independent living, employment, education, culture and leisure.

We are early adopters of technologies because, when these are available, affordable and accessible to us, they are a gateway

With accessible technologies we can indeed overcome some of the barriers we encounter in our everyday lives. This is why we, the disability movement, ask for accessibility to be treated as one of the core concerns of the digital domain, alongside privacy, data protection and security.

Accessibility refers to the extent to which ICT products and services are designed to be used by the widest range of users, regardless of their needs, characteristics or capabilities. In short, incorporating accessibility embraces human diversity, and rejects concepts such as designing for the “average user” or “one-size-fits-all”.

Accessibility is essential for persons with disabilities, but it’s beneficial to all. It improves usability, personalisation, adaptability to different contexts of use and compatibility with a wider range of devices, and, in the case of the web, it brings better search engine optimisation and faster loading times, among other benefits.

For example, subtitles help you follow a video in noisy places; being able to enlarge the font size or increase the contrast is useful in very bright environments (or when you have forgotten your glasses); if your mouse is broken, keyboard navigation is the only way; and voice commands or text-to-speech make multitasking easier. You can watch a series of very short videos about all these benefits in the Web Accessibility Perspectives series of the W3C Web Accessibility Initiative (W3C-WAI).
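To give one concrete sense of how accessibility is specified in practice: the W3C’s WCAG guidelines define a measurable contrast ratio between text and background colours, with a minimum of 4.5:1 for normal text at level AA. The short Python sketch below is an illustration added here, not part of the original article:

```python
# WCAG 2.x contrast-ratio calculation (illustrative; level AA thresholds:
# 4.5:1 for normal text, 3:1 for large text).

def linear_channel(c8: int) -> float:
    """Convert an 8-bit sRGB channel to its linearised value."""
    c = c8 / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb) -> float:
    r, g, b = (linear_channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(colour_a, colour_b) -> float:
    """Ratio between 1:1 (identical colours) and 21:1 (black on white)."""
    lighter, darker = sorted(
        (relative_luminance(colour_a), relative_luminance(colour_b)), reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)

# Grey text (#767676) on a white background just passes the 4.5:1 threshold.
ratio = contrast_ratio((118, 118, 118), (255, 255, 255))
print(f"{ratio:.2f}:1 ->", "passes AA" if ratio >= 4.5 else "fails AA")
```

Automated accessibility checkers run exactly this kind of computation when they flag low-contrast text.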

Accessibility is essential for persons with disabilities, but it’s beneficial to all

For over a decade, the European Disability Forum has been advocating for the EU to set up a legal framework, similar to that of the US (which, in this specific case, represents best practice), to guarantee that accessibility is required in the main technologies people use in their everyday lives. As a result, in 2016 the EU adopted the Web Accessibility Directive, covering the websites and mobile apps of the public sector.

In 2018, progress on accessibility became mandatory for TV channels and video-on-demand platforms under the Audiovisual Media Services Directive. In the same year, equal access and choice were strengthened in the Electronic Communications Code, and, finally, in 2019, after years of campaigning, the EU adopted the European Accessibility Act. To support all these laws, the EU has the most advanced and comprehensive technical standard on accessibility for ICT products and services.

Moreover, the European Accessibility Act is a Directive with a very strong ICT component. It covers computers and operating systems, tablets, smartphones and smart TVs, self-service terminals (such as ATMs, ticketing machines or payment terminals), e-books and e-readers, telephony and emergency communication (sadly and surprisingly, there are still countries that consider fax an “accessible way” for persons with disabilities to call the 112 emergency number), and e-banking and e-commerce services. In just a few years, therefore, the digital sector will consider accessibility as a legal prerequisite in order to place its products and services on the EU single market.

In just a few years, therefore, the digital sector will consider accessibility as a legal prerequisite

Not so many years ago, persons with disabilities had to buy separate screen reader software and set it up on their devices. Nowadays, this feature comes built into the device. And even though some persons with disabilities will still need to use certain assistive technologies (those specifically designed for us), accessible mainstream technologies must continue expanding their usability for a broader range of users.

We see many potential opportunities in emerging technologies such as artificial intelligence, reality technologies, robotics and smart environments; however, we also see the risk of further discrimination and exclusion. Our publication Plug and Pray? explains it all.

Therefore, in order to succeed in ensuring that technology does serve our diverse societies, disability advocates and digital rights advocates must join forces in defending and promoting human rights, including accessibility, in the digital field.

Alejandro Moledo is Policy Coordinator at the European Disability Forum.

Photo by Lenin Estrada from Pexels

Supporting Long-Term Impact: Announcing Changes to DFF’s Grantmaking

By Nani Jansen Reventlow, 29th November 2020

DFF is coming to the end of its first three-year strategy cycle, which we have informally referred to as the “pilot phase”. The end of our pilot phase also means the closure of our current grantmaking process.

DFF will not accept any further grant applications in 2020 as we are working to finalise a new, revised process, which we are looking forward to launching in early 2021.

DFF has changed significantly since we were first introduced to the world in October 2017. Based on input from the digital rights field, we established a strategy and priorities for our funding, launched a grantmaking process, coordinated annual strategy meetings, facilitated strategic litigation retreats, and held field-building workshops on topics like the GDPR, algorithm use, and competition law.

We were very happy that a recent external evaluation concluded these activities are seen as adding important value to the digital rights field.

While we were able to take important first steps in supporting the digital rights field during the pilot phase, there are areas where we can do more to make sure we are best serving its needs. One area of particular importance is our grantmaking.

The current grantmaking process was launched in July 2018, after being developed in dialogue with the digital rights field. By the end of 2020, DFF will have approved more than 40 grants, worth a total of over €1.5 million, supporting the litigation and pre-litigation research projects of 30 different organisations and individuals across Europe. Many of these projects are detailed on our case study page and in our annual report, with more to be published over the coming months.

By the end of 2020, DFF will have approved more than 40 grants, worth a total of over €1.5 million

Building on the participatory approach taken in developing the initial grantmaking process, DFF has continued to revise its grantmaking throughout the pilot phase. We actively sought feedback through regular surveys, outreach, and conversations with applicants and grantees. In response to questions about the scope of our grants or what we expected to see in applications, we published guides and a frequently asked questions page to help organisations prepare their applications.

When we received feedback that the application process took too long, we developed a “fast-track” application process for grantees moving from one grant to another. We have also sought to add value in other ways: for example, in 2019, we developed a new framework to better capture the outcomes and impacts of strategic litigation.

We are proud of the many great projects we have been able to support over the last three years, and hope that the efforts we made to improve our processes during that time managed to address at least a good part of the needs of our grantees. But: there are some major limitations we can only overcome by changing the scope of the grantmaking process itself.

There are some major limitations we can only overcome by changing the scope of the grantmaking process itself

For example, many organisations have requested support for adverse costs – costs to cover possible court orders to pay the fees of the opposing party in the case of a loss – noting that these costs are a major barrier to taking on public interest litigation. While recognising that this would be a welcome type of support, we also wanted to do justice to the complexity of the issue. How do you help litigators absorb the negative impact an adverse cost order might have on their operations, without disincentivising courts from taking the public interest into account and without, through funding support, incentivising frivolous litigation – the very thing cost orders are supposed to discourage?

We also wanted to make sure we adopted an approach that would be equitable across the whole geographical region DFF serves, and not just certain jurisdictions. Following detailed research and consultations, from 2021 applicants will be able to include adverse costs in their grant applications under certain conditions. Our new policy seeks to balance mitigating the impact of cost orders on digital rights litigators with making sure DFF doesn’t unintentionally encourage the practice itself.

…from 2021, applicants will be able to include adverse costs in their grant applications under certain conditions

The biggest limitation of DFF’s grantmaking during the pilot phase was our inability to support long-term projects across multiple litigation instances. DFF was established to provide grants supporting strategic litigation, i.e. litigation that has an impact beyond the parties involved in the case and that leads to legal, policy or social change. However, this kind of impact often takes more than one instance of litigation to achieve. While providing this type of support was envisaged from the very early stages of DFF’s development, the fact that DFF was a young organisation made it impossible to implement from the start.

We are working to change this in 2021, when we hope to launch a new grantmaking process allowing organisations to apply for grants that support multiple instances of litigation. We are in final discussions to ensure the process carefully balances the demands of the field with operational constraints, and secures the support of our funders and the DFF Board.

…we hope to launch a new grantmaking process allowing organisations to apply for grants that support multiple instances of litigation

The new process will also build on the lessons learned from the COVID-19 Litigation Fund, which was our first time providing grants of this nature. We hope the new process will allow applicants to more effectively plan their cases over a long period, with the confidence that they will be able to see a case through to the highest level necessary, and also provide an incentive to invest in building long-term strategies, coalitions and campaigns with other partners.

The end of the current grantmaking process will not affect current grantees, whose projects will carry on until they are completed. DFF will continue to move the small number of remaining active applications through our current process, with final decisions to be made before the end of 2020.

Look out for more details from us later in the year and please get in touch if you have questions or comments in the meantime –– as always, we welcome your feedback and input. We look forward to supporting more strategic litigation efforts to advance digital rights across Europe in 2021 and beyond.  

Decolonising Digital Rights: Why It Matters and Where Do We Start?

By Nani Jansen Reventlow, 23rd October 2020

This speech was given by DFF director, Nani Jansen Reventlow, on 9 October as the keynote for the 2020 Anthropology + Technology Conference.

The power structures underlying centuries of exploitation of one group by another are still here.

Besides the fact that we, in reality, still have over 60 colonised territories around the world today, maintained by 8 countries (though the UN General Assembly would disagree with that number), colonisation has taken on many different forms, including in and through technology.

What does this mean for our societies? What would things look like if they were different? How do we get there –– or: how do you decolonise society? How do you decolonise technology? And how do you decolonise digital rights?

I will start this talk with a spoiler: I will not be able to provide you with an answer to these fundamental questions. What I will try to do in the next half hour is tell you something about the problems we at the Digital Freedom Fund are seeing in Europe when it comes to digital rights and what is often euphemistically referred to as “diversity and inclusion”.

I will also tell you about what we are doing to try to set in motion a process to fundamentally change the power structures in the field that works on protecting our human rights in the digital context.

But before we get there, we first need to take a look at what the problem is and why we have it.

Sketchnote by Gunjan Singh

What’s the problem?

Today’s conference is centred around “championing socially responsible AI,” but: what does this mean? From a human rights perspective, many AI-related digital rights conversations tend to focus on the right to privacy and data protection. In doing so, these conversations often miss the full extent of the social impact new technologies can have on human rights. This is one of the reasons why it is so important that we decolonise the digital rights space and encourage an intersectional approach to AI and human rights issues.

Let me illustrate some of the issues in the context of the themes of this conference, starting with:

Health tech

The implications of health tech for individuals are a prominent topic of conversation at the moment, as we face the COVID-19 pandemic.

Big Data solutionism is pervading coronavirus responses across the globe, with contact tracing, symptom checking, and immunity passport apps being rolled out at rapid speed.

These technologies are often not properly tried and tested, and it is clear that privacy and data protection have often not been front and centre for those developing the technology, let alone other human rights considerations. They also illustrate the degree to which our technology, and the way we deploy it, is colonised.

The UK, for example, launched a “Test and Trace” system in England and Wales in May this year, without, it later admitted, having properly conducted a data protection impact assessment. This admission came after the NGO Open Rights Group had threatened legal action. The heavily criticised app was abandoned and, since late September, a new one is available, which addresses some of the privacy concerns previously raised.

But: is privacy the only issue we should be looking at when considering the viability of using tracing apps to combat a public health emergency? Limiting our analysis to privacy and data protection alone results in blind spots on many of the broader issues at play, such as discrimination and access to healthcare. A few examples. 

To be able to download and use the app, you need to have a relatively new phone, with the right operating system installed. This means that those who cannot afford such a device, or who don’t have direct access to technology, are excluded.

There is also an assumption that each user is uniquely linked to a phone. And of course, in order to download the app and receive warnings and notifications, you have to be online.

To put it simply: the effectiveness of the app is based on an assumption that the “average person” in society is the exclusive owner of a new smartphone with reliable access to the internet. Researchers at Oxford University have estimated that more than half the population of a country would need to make use of a tracing app in order for it to be effective.
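To see why that threshold is so demanding, consider a back-of-the-envelope calculation (an illustration added here, not from the talk): a contact can only be traced if both people involved are running the app, so under the simplifying assumption of uniform, independent adoption, the share of detectable contacts scales roughly with the square of the adoption rate.

```python
# Rough illustration only: assumes app adoption is uniform and independent
# across the population, which it is not -- exclusion is concentrated among
# those without new smartphones or reliable internet access.
for adoption in (0.2, 0.4, 0.6, 0.8):
    detectable = adoption ** 2  # both sides of a contact need the app
    print(f"adoption {adoption:.0%} -> roughly {detectable:.0%} of contacts detectable")
```

Even at high overall adoption, a large share of contacts involves at least one person without the app, and, as noted above, those people are not randomly distributed across society.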

This raises the question: what happens to the remaining “less than half” of the population that does not or cannot make use of the app, and how does the automation of disease control affect their vulnerability? Should we want to use an app at all if it is not effective for the protection of everyone in our society?

This raises the question: what happens to the remaining “less than half” of the population that does not or cannot make use of the app, and how does the automation of disease control affect their vulnerability?

Concerns around health data, however, are not new to the COVID-19 context. The UN Special Rapporteur on privacy has recognised that medical data is of “high value” for purposes such as social security, labour, and business. This means that stakeholders such as insurance companies and employers have a considerable interest in health-related data.

Many health services are built on the values of trust and confidentiality. But as more and more actors move into the health data space, these values are fading more and more out of focus.

A failure to protect health data can engage the rights to life, social protection, healthcare, work, and non-discrimination. Very concretely, it may deter individuals from seeking diagnosis or treatment, which in turn undermines efforts to prevent the spread of, say, a pandemic.

The access Home Office immigration officials are given to check entitlement to health services as part of the UK government’s “hostile environment” policy is a clear example of this, but even in “lower risk” settings, a person might think twice about getting medical help if the potential repercussions of sensitive health information ending up in unwanted hands are sufficiently grave.

FinTech

It is said that health cannot be bought; with that wisdom in mind, let us turn to the second conference theme, which is FinTech.

Here too, with the ever-increasing automation of financial services, it is not only the right to privacy that is under threat.

Access to financial services, such as banking and lending, can be a decisive factor in an individual’s ability to pursue their economic and social well-being. Access to credit helps marginalised communities exercise their economic, social, and cultural rights.

Access to financial services, such as banking and lending, can be a decisive factor in an individual’s ability to pursue their economic and social well-being

Muhammad Yunus, social entrepreneur and Nobel Peace Prize winner, has gone so far as saying that access to credit should be a human right in and of itself. “A homeless person should have the same right as a rich person to go to a bank and ask for a loan depending on what case he presents.”

In reality, automation in the financial sector often polices, discriminates, and excludes, thereby threatening the rights to non-discrimination, association, assembly, and expression; individuals may not want to associate with certain groups or express themselves in certain ways for fear of how it will impact their creditworthiness.

This policing, discrimination, and exclusion can also have an impact on the right to work, to an adequate standard of living, and the right to education.

Cathy O’Neil has noted that creditworthiness has become an “all-too-easy” stand-in for other virtues. It is not just used as a proxy for responsibility and “smart decisions”; it is also a proxy for wealth. Wealth, in turn, is highly correlated with race.

While the FinTech narrative is that it works with “unbiased” scoring algorithms that are blind to characteristics such as gender, class, and ethnicity, research shows a different picture.

While the FinTech narrative is that it works with “unbiased” scoring algorithms that are blind to characteristics such as gender, class, and ethnicity, research shows a different picture

Many of these “modern” algorithms make their decisions based on historical data and decision patterns. This has led to the coining of the term “weblining”, which shows how existing discriminatory practices, operationalised in the US in the 1930s through the practice of “redlining” to keep African American families from moving into white neighbourhoods, are now replicated in new technology.

Those who can afford to can hire consultants or move to certain neighbourhoods to boost their credit scores. In the meantime, those living in poverty are refused loans, often on a discriminatory basis, and are even targeted, because of their credit scores, with payday loans and other online advertisements that can plunge them further into poverty.

This illustrates that this is more than a question of privacy: it is a question of livelihood.

It doesn’t stop there. Credit scores are even used to make decisions about a person outside the financial services sector, such as whether to hire or promote individuals at work. Conversely, other proxies are increasingly being used as stand-ins for creditworthiness. O’Neil has explained how this has contributed to a “dangerous poverty cycle”.

With datasets recording nearly every aspect of our lives, and these data points being relied on to make decisions about us as employees, consumers, and clients, we are being labelled as targets or as dispensable. Our occupations, preoccupations, salaries, property values, and purchase histories result in us being labelled as “lazy”, “worthless”, “unreliable”, or a risk, and that label can carry across many different aspects of our lives.

Smart cities

I will be brief about smart cities because I actually don’t think the conversation about making cities “smarter” through technology is worth having unless we are also talking about the need to completely reinvent the urban design that has historically and systemically harmed people of colour, people living in poverty, people with disabilities, and marginalised groups.

Smart city design runs the risk of automating and embedding assumptions on how we run and manage cities that are less about people, and more about profit and exclusion, as well as vague, unsubstantiated notions of “efficiency”.

Smart city design runs the risk of automating and embedding assumptions on how we run and manage cities that are less about people, and more about profit and exclusion

Smart city initiatives in India have led to mass forced evictions, and in the United States the question should be asked whether smart city design to “reduce neighbourhood crime” isn’t a euphemism for increased surveillance, with all the racialised policing that comes with it.

The really interesting question here is how we can use AI to help fix these inequalities and harms, but that is not usually the focus of many of the current smart city debates.

Why do we have this problem?

We just looked at some of the manifestations of the problem of colonised technology. So: what causes us to have this problem in the first place?

First, there is a common and incorrect assumption that technology is neutral. However, the apps, algorithms, and services we design ingrain the choices made by their creators. They replicate their creators’ preferences, their perceptions of what the “average user” is like, and what this average user would or should want to do with the technology. Design choices are based on the designer’s world view and therefore also mirror it.

As someone said to me the other day: “an algorithm is just an opinion in code.” When those designers are predominantly male, privileged, able-bodied, cis-gender, and white, and their views and opinions are being encoded, this poses serious problems for the rest of us.

This relates to the second cause, which is that Silicon Valley has a notorious brogrammer problem. This is easily visible when you look at any graph reflecting the makeup of Silicon Valley, where most of our technology here in Europe comes from –– a problem in and of itself, as technology developed from a white, Western perspective is deployed around the world.

For professions such as analysts, designers and engineers the numbers for Asian, Latina, and Black women decrease as role seniority increases

For professions such as analysts, designers and engineers, the numbers for Asian, Latina, and Black women decrease as role seniority increases, often to the point that they literally become invisible on the graph because their numbers are so small.

An analysis of 177 Silicon Valley companies by the investigative journalism website Reveal showed that ten large technology companies did not employ a single Black woman in 2018, three had no Black employees at all, and six did not have a single female executive.

This should make it less surprising that, for example, facial recognition software built by these companies is predominantly good at recognising white, male faces.

However, a predominantly male, white, and able-bodied workforce is not the only thing that factors into technological discrimination.

A third cause, which we already touched upon in the context of FinTech, is that technology is built and trained on data that can already reflect systemic bias or discrimination. If you then use those data to develop and train new software, it is not surprising that the software will be geared towards replicating those historical patterns. Technology based on data from a racist, sexist, classist, and ableist system will produce outcomes that reflect that racism, sexism, classism, and ableism. Unless a conscious effort is made to get the system to make different choices, systems built on such data will replicate the historical preferences they have been fed.
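To make that mechanism concrete, here is a minimal, purely hypothetical Python sketch (constructed for this text, not taken from the talk): a model trained on historically biased lending decisions reproduces the bias even though it never sees the protected attribute, because a correlated proxy carries the same signal.

```python
# Hypothetical illustration of bias replication through a proxy feature.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

group = rng.integers(0, 2, n)                        # protected attribute (never shown to the model)
neighbourhood = (group + (rng.random(n) < 0.1)) % 2  # proxy: ~90% aligned with group
income = rng.normal(50, 10, n)                       # identical distribution for both groups

# Historical decisions: based on income, but group 1 was systematically penalised.
past_approved = (income - 15 * group + rng.normal(0, 5, n)) > 40

# Train only on the "neutral" features: income and neighbourhood.
X = np.column_stack([income, neighbourhood])
model = LogisticRegression(max_iter=1000).fit(X, past_approved)
pred = model.predict(X)

for g in (0, 1):
    print(f"group {g}: historical approval {past_approved[group == g].mean():.0%}, "
          f"model approval {pred[group == g].mean():.0%}")
# Despite identical incomes, the model approves group 1 far less often:
# the neighbourhood proxy lets it reconstruct the historical discrimination.
```

Removing the protected attribute is clearly not enough; the historical pattern itself has to be confronted.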

Finally, we don’t work consistently with interdisciplinary design teams. Engineers will build technology to certain specifications, and those will have systemic biases baked into them; you can have all the non-male, non-white, non-ableist engineers you can think of, but it will not be up to them (alone) to solve the systemic problems.

If society is sexist, racist, and ableist, so too will be the AI it develops. It is unhelpful to focus on the technology alone, as doing so negates the political and societal systems in which it is developed and operates.

There is a role for social scientists and others at crucial stages of the design and decision-making process; developing suitable AI is not just a task for engineers and programmers

An interdisciplinary approach in developing AI systems is therefore crucial. There is a role for social scientists and others at crucial stages of the design and decision-making process; developing suitable AI is not just a task for engineers and programmers.

What can we do about it?

We obviously have a large-scale problem on our hands and the monster we created won’t easily be put back in its cage. There are, however, a number of things we can do, both in the shorter and longer term.

  • Push for moratoriums on new technologies until we understand their social impact, particularly on human rights. The call for a ban on the use of facial recognition technology has gained more traction following the Black Lives Matter protests, with several tech giants putting a hold on –– part of –– their products in this area. This should be a guiding principle: unless we know and understand in full what the human rights impact of new technology is, it should not be developed or used.

  • We also need to have the debate on where to draw the so-called red lines on where AI technology should not be used at all. This conversation needs to be people-centred; the individuals and communities whose rights are most likely to be violated by AI are those whose perspectives are most needed to make sure the red lines around AI are drawn in the right places.

  • When we analyse the impact of new technologies, this needs to be done from an intersectional perspective, not only on how it affects privacy and data protection. As we saw earlier, breaches of privacy are often at the root of other human rights violations. Companies and governments need to be held accountable for these violations and they need to work with multidisciplinary teams that include social scientists, academics, activists, campaigners, technologists, and others to prevent violations from occurring in the first place. This work also needs to be done closely and consistently with affected groups and individuals to understand the full extent of the impact of AI-driven technologies.

  • We need to push for enforcement and compliance with existing legislation protecting human rights. There often is a call for new regulation, but this seems to conveniently forget that we actually have existing international and national frameworks that set clear standards on how our human rights should be respected, protected, and fulfilled. This is also a healthy antidote to the fuzzy “ethics” debate companies would like us to have instead of focusing on how their practices can be made to adhere to human rights standards.

  • Finally, we need to not only decolonise the tech industry: we also need to decolonise the digital rights field. The individuals and institutions working to protect our human rights in the digital context clearly do not reflect the composition of our societies. This leaves us with a watchdog that has too many blind spots to properly serve its function for all the communities it is supposed to look out for.

At the outset, I referred to “diversity and inclusion” as a euphemism. I did that because it is not enough to make the change that needs to happen.

Instead of focusing on token representation, we need to change the field on a structural level

Instead of focusing on token representation, which essentially treats the current status of the field as a pipeline problem, we need to change the field on a structural level; we need to change its systems and its power structures.

This is something that is fundamentally different from “including” those with disabilities, people from racialised groups, the LGBTQI+ community, and other marginalised groups in the existing, flawed ecosystem.

Here we return to the big question raised at the beginning of this talk: how do you decolonise a field? The task of re-imagining and rebuilding the digital rights field is clearly enormous. Especially since digital rights cover the scope of all human rights and therefore permeate all aspects of society, the field does not exist in isolation.

Here we return to the big question raised at the beginning of this talk: how do you decolonise a field?

We therefore cannot solve any of these issues in isolation either –– there are many moving parts, many of which will be beyond our reach to tackle, even though we at the Digital Freedom Fund are working with a wonderful partner in this, European Digital Rights (also known as EDRi).

But: we need to start somewhere, and we need to get the process started with urgency as technological developments will continue at a rapid pace and we need a proper watchdog to fight for our rights in the process.

So: what have we done so far?

We started earlier this year with a process of listening and learning. Over the past months, we have had conversations with over 30 individuals and organisations that we are currently not seeing in the room for conversations about digital rights in Europe.

Over the past months, we have had conversations with over 30 individuals and organisations that we are currently not seeing in the room for conversations about digital rights in Europe

We asked about their experience working on digital rights, what their experience has been working with digital rights organisations, and what a decolonised digital rights field might look like and what it might achieve.

We also started collecting and reading literature about decolonising technology and other fields, to start developing a possible joint vision we can work towards.

As these conversations continue, we are starting similar conversations with the digital rights field to learn what their experience has been working on racial and social justice issues, working with partners in this field, and also what their vision of a decolonised digital rights field might look like.

The next step is an online meeting in December of this year to (a) connect different stakeholders and (b) receive input on what the next step in the process, which we are referring to as a design phase, should look like. This design phase, which we hope to start next year, should be a multi-stakeholder effort to come to a proposal for a multi-year, robust programme to initiate a structural decolonising process for the field.

Much has changed since we first started talking about the need to decolonise the digital rights sector two years ago.

On the one hand this is encouraging; on the other, the threat looms large of “decoloniality” becoming yet another buzzword that people like to use but not practise

The recent international Black Lives Matter protests have done a lot to boost awareness of systemic racism. On the one hand this is encouraging; on the other, the threat looms large of “decoloniality” becoming yet another buzzword that people like to use but not practise.

The irony that our work is now of interest to many –– the media, policymakers, funders… –– because it has been validated by a “white gaze” captivated by racial justice protests amidst the boredom of a global lockdown, is also not lost on me.

That being said, the current mood does illustrate how necessary this work is, not only in the digital rights space, but everywhere in our society. And the more of these processes we can set in motion, the better the world we will be creating for all of us.