As We Fight Climate Change, Let’s Not Forget Digital Rights

By Nani Jansen Reventlow, 4th June 2020

It is a sad reality that World Environment Day, which takes place this Friday, June 5th, can hardly be considered a joyous celebration of the world around us. Rather, the day serves as yet another much-needed call to action to tackle pressing environmental problems.

As the climate catastrophe looms, environmental issues will inevitably overlap with digital rights on multiple planes. Climate change has grave implications for human rights, and many of those rights relate to privacy, surveillance, AI, data, and the free flow of information – all of which fall under the umbrella of digital rights.

Nobody can deny that we are in dire need of innovative solutions, including technological ones, to tackle this unprecedented crisis. But at the same time, we must ensure that new technologies respect our human rights.

The COVID-19 pandemic has thrown into sharp relief the tendency to let respect for human rights slide in emergency circumstances. But such a situation is not sustainable and can have grave consequences. That’s why it is absolutely crucial that we don’t sacrifice people’s fundamental rights and liberties as the fight against climate change intensifies.

Surveillance

It’s typical in this day and age for governments to respond to crises by rolling out measures that increase surveillance and monitoring of populations. Data can certainly help in assessing situations more accurately, but it also increases government control and threatens people’s privacy and freedoms.

As climate change renders more areas unliveable or fuels conflict, more and more people are forced to flee their home countries. Asylum seekers and refugees are often at particular risk when it comes to their digital rights: states often deploy measures, such as biometric data collection and geo-tracking, to identify them, track their movements, and conduct “security checks”. As a vulnerable group, often without a state willing to defend their interests, they will only become more exposed as climate change worsens.

Another group at high risk of digital rights breaches is environmental activists. Defending the environment has become one of the most dangerous jobs out there: many activists are subjected to harassment and violence, and many have even been killed. Surveilling environmental activists or hacking their devices is a common way to keep tabs on them or suppress their work, which often runs counter to powerful interests. Activists must not only be protected, but encouraged and supported to continue their invaluable work.

Connectivity

One of the core UN sustainable development goals is achieving open and secure internet connectivity for everyone. These days, access to the internet is a crucial equality issue: it’s how we communicate and disseminate information. Leaving certain pockets of the world locked out of that is therefore a growing human rights issue. Internet shutdowns, for example, frequently accompany wider human rights violations.

The thing is, it’s not just about having access to the internet: it’s also about having access to an internet that supports the free flow of information. When certain information is censored, filtered or blocked, people’s rights are violated, and inequality is exacerbated. That is why human rights must be built into initiatives to roll out internet connectivity, and why the schemes carrying out this important job must be subject to the requisite human rights assessments.

Digital Design

While many digital technologies are built with the explicit goal of improving sustainability, technology is not universally sustainable: far from it, in fact, with many digital tools themselves being significant drivers of climate change. Mining bitcoin, for example, generates enormous amounts of carbon dioxide, while data centres gobble electricity.

Our devices, from smartphones to laptops, consume vast amounts of energy. With tech giants offering us shiny new products each year, we’re also consuming greater numbers of electronics and disposing of them, leading to a lot of unnecessary “e-waste”. This isn’t always the consumer’s fault, of course – nowadays, many devices are built to break.

As well as this, new tools that seek to increase sustainability may threaten digital rights in other respects. Precision agriculture, for example, is being hailed as a sustainable saviour, but it relies on artificial intelligence, which can pose challenges for digital rights due to a lack of transparency and the risk of bias.

This World Environment Day, we will hopefully be able to make advances and consider new and cutting-edge solutions to the climate crisis. But as we move forward and put the environment at the top of our priorities, let’s not overlook digital rights. By considering both, we can work towards a more sustainable and equitable world.

Photo by NASA on Unsplash

Explainer: What is the “Digital Welfare State”?

By Jonathan McCully, 27th April 2020

Over the past decade, governments across the globe have adopted strategies to transform public services through technology.

One priority area for these strategies has been the “welfare state,” with digital tools being rolled out in this sector with minimal public debate or accountability.

These technologies, many of which are powered by machine learning and automated decision-making systems, turn the needs of the most vulnerable in our society into numbers and variables. Poverty and vulnerability are reduced to problems that can be “solved” or “fixed” by technological innovation.

From the implementation of these strategies has emerged the “digital welfare state”.

At the Digital Freedom Fund, we have been exploring how strategic litigation could be used to prevent, remedy and safeguard against human rights violations caused by the “digital welfare state”. This year, DFF will work towards co-developing a litigation strategy on the topic in consultation with relevant organisations and stakeholders. If you would be interested in helping us with this work, we would very much welcome your input and involvement.

In order to ascertain the parameters of such a litigation strategy, it is crucial that we understand more concretely what falls within the “digital welfare state.”

To aid this process, DFF has tried to distil into a visualisation the different components of the concept based on conversations we have had with digital rights organisations, academics, and welfare rights groups. This blog highlights some of the terms and concepts mapped out on the visualisation.

What is the Welfare State?

In order to grasp the implications of the “digital welfare state,” it is useful to take a step back and first define and determine the scope of the “welfare state” itself.

The term “welfare state” is a catch-all, and sometimes contentious, term used to describe policies, programmes and practices that are aimed at providing social protection to individuals. It is a fundamental dimension of modern government, and it is ultimately to the benefit of everyone in society.

Misinformation, myths and misunderstandings around the term have led to it being narrowly applied to describe those aspects of social protection that are politically controversial and least popular (e.g. “handouts,” “dependency,” “doles,” etc.).

In reality, it encompasses a broad range of activities and measures that make up the “social safety net” that allows all individuals to benefit from minimum standards of health, social well-being, and economic security. In the visualisation, a number of these are highlighted:

  • Social Security & Monetary Assistance: This is perhaps the aspect of the “welfare state” that immediately comes to mind, often because it is the area that garners most media and political attention. It is often viewed in a narrow sense as non-contributory means-tested relief (e.g. food stamps, jobseekers’ allowance, disability benefits). However, the “welfare state” also includes other insurance-based or contributory forms of financial assistance (e.g. social security, national insurance, health insurance, pensions), which help protect individuals against the risk of losing earnings by reason of unemployment, poor health, and old age. Some monetary assistance can also support access to education (e.g. student stipends) and access to justice (e.g. legal aid).

  • Health and Social Care: In some countries, health and social care also falls within the umbrella of the “welfare state.” These are publicly funded services aimed at treating those with ill health and medical conditions, as well as providing physical, emotional and social support to the most vulnerable members of our society. It also includes government services aimed at safeguarding vulnerable children and adults.

  • Employment Assistance (or Access to Employment): As well as providing monetary assistance to those looking for employment, some governments also provide additional services aimed at getting unemployed individuals into work. This may be through the provision of training and development courses, or programmes that rely on a combination of job search obligations and assistance. At a macro level, governments can also promote employment through their role in economic governance, shaping markets and promoting growth.

  • Access to Housing: This involves the provision of services facilitating access to housing and shelter to those in need, for example through social housing programmes. These can be services aimed at those who are homeless, but in some jurisdictions the availability of social housing is not necessarily limited to the low-paid or unemployed.

  • Access to Education: The most common type of education-related welfare policy is the public provision of education, from basic schooling to state-subsidised tertiary education. However, access to education can also be facilitated by other public services, such as public libraries.

The Rise of Digitisation

It is difficult to find any aspect of the welfare state untouched by digital transformation in recent years, from online application forms to algorithms that profile applicants for certain kinds of support.

Governments argue that these digital tools can increase efficiency and transparency, save money for taxpayers, and increase overall well-being.

However, as the UN Special Rapporteur on extreme poverty and human rights, Professor Philip Alston, has observed, the rise of digitisation has been accompanied by considerable reductions in welfare budgets, the introduction of demanding and intrusive forms of conditionality, the processing of huge quantities of personal and sensitive data, and the obfuscation of critical decision-making processes.

Furthermore, as Virginia Eubanks observes in her seminal book on the topic, Automating Inequality, data collection in the welfare context often reinforces the marginality of those accessing public benefits by targeting them for extra scrutiny and suspicion.

Eubanks further argues that this data-driven regime is used to constrict opportunities, demobilise political organising, limit movement, and undermine human rights.

Looking at welfare provision on a more granular level, where are we seeing increased digitisation and the rise of hi-tech tools?

Identity Verification

Establishing a person’s identity is a central part of social protection provision. 

In recent years, there has been a move from paper and plastic forms of ID towards digital identity systems. Those in favour of digital IDs have highlighted a number of benefits, from improving access to welfare services for some individuals to cost savings and greater efficiency.

However, others have raised concerns about the privacy and (cyber)security implications of digital IDs – particularly as governments are pushing for the inclusion of more categories of data, and are utilising more intrusive technologies, when rolling out these systems.

Application and Communications

There is a drive for all aspects of the welfare application process, including interactions and communications between applicants and welfare authorities, to be conducted through online and digital portals.

Human contact is increasingly being replaced by chatbots and websites. Following his visit to the UK, Philip Alston observed that the “British welfare state is gradually disappearing behind a webpage and an algorithm, with significant implications for those living in poverty.” 

He highlighted the difficulties this posed for certain individuals, particularly those with limited access to the internet and a lack of digital literacy skills. These digital systems can also be confusing, opaque and error-prone, making it difficult to understand how decisions are made and to challenge those that are incorrectly reached.

Furthermore, replacing human contact with digital alternatives dehumanises the process of providing vital support to those most vulnerable and in need. It substitutes human discretion and diligence with inflexible rule-based code and performance metrics.

As has been observed by Lupita Svensson, a senior lecturer at Lund University’s School of Social Work, “the legal text [in Sweden] about financial aid gave social workers a great deal of room to manoeuvre, since the law was saying that you couldn’t generalise. When this law is converted to code, it becomes clear that social work has changed. By converting law to software, the nature of financial aid changes, as you can’t maintain the same individual assessments as before.” She has also noted that many caseworkers view the increased automation as a threat to their own profession and livelihoods as well.
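To make the point concrete, here is a deliberately simplified, hypothetical sketch of what “converting law to code” can look like. Nothing in it comes from any real system: the thresholds, field names and amounts are all invented, and it only illustrates how a standard that once invited individual judgment becomes a fixed branch in a program.

```python
# Hypothetical illustration: a discretionary legal standard ("assistance
# appropriate to the applicant's individual circumstances") collapsing
# into fixed thresholds once it is encoded in software. All numbers and
# field names are invented.

from dataclasses import dataclass

@dataclass
class Application:
    monthly_income: float   # reported income, in some local currency
    household_size: int
    savings: float

# A caseworker applying the statute could weigh context; the program
# can only compare numbers against hard-coded cut-offs.
INCOME_LIMIT_PER_PERSON = 800.0   # invented threshold
SAVINGS_LIMIT = 2000.0            # invented threshold

def eligible(app: Application) -> bool:
    """Return True only if the application clears every fixed threshold."""
    income_limit = INCOME_LIMIT_PER_PERSON * app.household_size
    return app.monthly_income <= income_limit and app.savings <= SAVINGS_LIMIT

# One currency unit over the limit means outright refusal; the "room to
# manoeuvre" Svensson describes has no representation in the code.
print(eligible(Application(monthly_income=1601.0, household_size=2, savings=500.0)))  # False
```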

Eligibility and Needs Assessments

The two decision-making processes in social protection provision that are increasingly digitised are eligibility and needs assessments.

The first of these decision-making processes is focused on whether an individual is entitled to social protection in the first place. In Finland, for example, automated systems are used to check whether information provided by an applicant is sufficient, valid and trustworthy, as well as whether other benefits affect or are affected by the benefit applied for.

Faulty eligibility decisions caused by a glitch in the system or incorrect data can have a devastating impact on those who need access to essential services. 

In India, the government has rolled out a 12-digit unique identification number, Aadhaar, that is linked to biometric and demographic data. It is reportedly the world’s largest biometric ID system, and it is used to manage access to welfare support such as food rations.

The system frequently suffers from technical difficulties and errors that prevent people from accessing welfare they are otherwise entitled to, and in some extreme instances this has resulted in starvation-related deaths. Last year, for example, a man in Dumka, Jharkhand died after his food rations were stopped because his fingerprint failed to register on the system.

Despite the 2018 judgment of the Indian Supreme Court broadly upholding the constitutionality of Aadhaar in the context of welfare provision, there is ongoing public interest litigation before the Supreme Court challenging the mandatory use of the Aadhaar authentication method when distributing food rations.

The second common type of decision-making process in social protection provision is the needs assessment. This is the process whereby welfare authorities try to determine what kind of support or assistance an individual might need based on their current circumstances. 

Human assessment is increasingly being replaced by inscrutable algorithmic and statistical models that process individuals’ data to assess what their welfare needs might be. These models produce profiles or scores that are then used to determine what assistance or services should be provided to the individual. 

This switch from human to algorithmic assessment has resulted in many arbitrary and unfair results, leaving individuals and families in dangerous and precarious conditions. In Arkansas, for example, many low-income individuals living with disabilities noticed drastic cuts had been made to their Medicaid attendant care hours after needs assessments, which had previously been carried out by trained nurses, were conducted by a secret algorithm.
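The Arkansas algorithm itself has not been published, so the following is only a hypothetical sketch of the general shape such tools take: coded attributes, fixed weights, and a single score that determines the allocation. Every field, weight and baseline below is invented.

```python
# Hypothetical sketch of an algorithmic needs assessment: a weighted
# sum over coded attributes replaces a nurse's judgment. All weights,
# fields and baselines are invented for illustration.

WEIGHTS = {
    "mobility_score": -2.5,        # more mobile -> fewer care hours
    "lives_alone": 4.0,            # living alone -> more care hours
    "recent_hospitalisation": 3.0,
}
BASE_HOURS = 20.0                  # invented baseline of weekly hours

def weekly_care_hours(profile: dict) -> float:
    """Map a coded profile to an allocation of weekly care hours."""
    score = sum(WEIGHTS[field] * value for field, value in profile.items())
    return max(0.0, BASE_HOURS + score)

# A one-point change in a single coded answer shifts the allocation,
# and the person affected has no visibility into why.
print(weekly_care_hours({"mobility_score": 3, "lives_alone": 1, "recent_hospitalisation": 1}))  # 19.5
print(weekly_care_hours({"mobility_score": 4, "lives_alone": 1, "recent_hospitalisation": 1}))  # 17.0
```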

Calculation, Payments and Matching

As well as assessing an individual’s needs, computer programs are also relied on to calculate and pay benefits to welfare recipients with little or no human involvement. 

The Spanish Public Employment Service, for example, has reportedly been using an automated system to calculate unemployment benefits. 

In the UK, automated tools have been used to calculate personal budgets for disabled individuals who qualify for direct payments instead of local authorities providing care services to them directly. 

In Sweden, some parental benefits and dental care subsidies are now allocated without any human intervention, while in Denmark, the provision of student stipends is almost entirely automated. A student’s online application is matched with the fact that they have been accepted into a qualifying course for such a stipend, and then the funds are transferred directly to their bank account.

The means by which individuals can actually withdraw or spend their monetary assistance has similarly become more digitised and data-driven, with smart debit cards being deployed by governments across the globe.

Examples include the a2i programme in Bangladesh, which provides social assistance through pre-paid debit cards linked to an individual’s biometric data, and the Asylum Support Enablement (ASPEN) card in the UK, which facilitates asylum seekers’ access to monetary assistance while they await a decision on their applications.

Governments deploy these cards in partnership with private companies, which offer products and services to monitor and surveil welfare recipients.

In the UK, for example, the ASPEN card was used to track the whereabouts of asylum seekers and penalise them for venturing out of their “authorised” cities. 

In Maine, data that had been collected from electronic benefits transfer (EBT) cards, showing that money had been withdrawn in liquor stores and smoke shops, was used by the Governor to paint a picture that welfare recipients were defrauding taxpayers by purchasing liquor, lottery tickets and cigarettes. This fuelled reforms that placed tighter restrictions around cash withdrawals, despite the fact that the data labelled “suspicious” by the Governor only represented 0.03% of all cash withdrawals.

Algorithms have also been relied on to match individuals to available welfare resources, services or assistance. 

One controversial example is the matching algorithms that have been used to allocate housing opportunities, and other available homeless services, on the basis of how vulnerable an individual is ranked among the homeless population. These systems have required homeless people to give up intimate details of their private lives, with some recounting that they feel like they are “giving up their human right to privacy in return for their human right to housing.”

Fraud Detection and Risk Models

The rise of predictive algorithms and sophisticated risk models has prompted governments to adopt automated fraud prevention and detection tools in the welfare context. 

A high-profile example has been the SyRI system in the Netherlands. This involved the application of a “black box” risk calculation model to vast quantities of personal data, merged from various government bodies, for the purpose of preventing and combatting fraud in areas such as social security, tax and labour law.

The risk profiles that were generated by the secret model were used to identify those deemed a higher risk of committing such fraud. As well as intruding upon welfare recipients’ private lives, the SyRI system was shown to have been consistently rolled out in poorer and more vulnerable areas. This resulted in certain communities being further stigmatised, stereotyped and subjected to increased scrutiny.
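SyRI’s actual indicators and model were kept secret, so the sketch below is only an invented illustration of the general mechanism described in its legal framework: records from different agencies are linked on a common identifier, and simple rules turn discrepancies into risk flags.

```python
# Invented illustration of cross-agency data matching of the kind SyRI
# performed at scale. The real model and its risk indicators were secret;
# every record, field and rule here is made up.

tax_records = {"p1": {"declared_income": 21000}, "p2": {"declared_income": 0}}
benefit_records = {"p1": {"receives_benefit": True}, "p2": {"receives_benefit": True}}
housing_records = {"p1": {"registered_occupants": 5}, "p2": {"registered_occupants": 1}}

def risk_flags(person_id: str) -> list:
    """Merge per-agency records for one person and apply invented risk rules."""
    tax = tax_records.get(person_id, {})
    benefit = benefit_records.get(person_id, {})
    housing = housing_records.get(person_id, {})
    flags = []
    if benefit.get("receives_benefit") and tax.get("declared_income", 0) > 20000:
        flags.append("income inconsistent with benefit claim")
    if housing.get("registered_occupants", 0) > 4:
        flags.append("household composition anomaly")
    return flags

# Anyone matched by a rule is queued for investigation, including people
# whose records are simply wrong or out of date.
for pid in ("p1", "p2"):
    print(pid, risk_flags(pid))
```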

Earlier this year, the District Court of The Hague struck down the legal basis for SyRI on human rights grounds.

Risk models have also been relied on to identify “problem” families for attention from child services. Last year, it was revealed that local councils in the UK had processed the personal data of hundreds of thousands of people to construct computer models in an effort to predict child abuse and intervene before it happens.

These systems have been heavily criticised for relying on highly subjective proxies, such as assessments made by caseworkers or the courts, to measure whether the system is accurately predicting child maltreatment. 

Furthermore, they target individuals for extra scrutiny based not on their behaviour but because they live in poverty. Virginia Eubanks refers to this phenomenon as “poverty profiling.” This is caused, in part, by the fact that child welfare services act both as provider of family support and as investigator of maltreatment.

Even though these services are not always means-tested, middle-class families have the option of avoiding the additional surveillance and data gathering of the child welfare services by accessing private sources for family support.

Debt Recovery

Another trend in automation that has had a significant impact on welfare recipients is the use of “robo-debt” collection techniques. These systems apply data matching and algorithmic processes to claw back welfare debts from people flagged as having been overpaid by the government. Some of these “overpayments” are “zombie debts” that stretch back decades.

The Australian “robo-debt” scandal exposed the fragility and harm associated with these systems. The automated tool, referred to as the online compliance intervention (OCI) debt recovery system, effectively shifted the onus onto vulnerable welfare recipients to prove that they did not, in fact, owe a debt to the government. A flawed debt-calculation process saw thousands of individuals receive incorrect debt notices.
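Reporting on the scheme traced many of the incorrect notices to “income averaging”: annual tax-office earnings were smoothed evenly across fortnightly reporting periods, which misstates the income of anyone whose work was irregular. The sketch below illustrates that failure mode with invented numbers; it is not the OCI system’s code, which was never made public.

```python
# Invented illustration of the income-averaging failure mode reported
# at the heart of the "robo-debt" scheme; not the actual OCI code.

FORTNIGHTS_PER_YEAR = 26

# A casual worker who reported honestly: all earnings fell in 13 fortnights.
reported_fortnightly = [900.0] * 13 + [0.0] * 13
annual_income = sum(reported_fortnightly)                   # 11,700 for the year

# Averaging spreads the annual figure evenly across every fortnight...
averaged_fortnightly = annual_income / FORTNIGHTS_PER_YEAR  # 450.0

# ...so every fortnight in which the person truly earned nothing now
# appears to hide unreported income, and an automated discrepancy
# check flags an apparent overpayment in each of them.
apparent_discrepancies = [
    averaged_fortnightly - reported
    for reported in reported_fortnightly
    if averaged_fortnightly > reported
]
print(f"fortnights wrongly flagged: {len(apparent_discrepancies)}")        # 13
print(f"apparent 'unreported' income: {sum(apparent_discrepancies):.2f}")  # 5850.00
```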

Furthermore, out-of-date and inaccurate data saw letters being sent to welfare recipients’ old addresses, and these individuals were then penalised for their failure to respond to the correspondence.

These errors and failures in the system caused significant levels of stress, anxiety and depression among many vulnerable individuals. In litigation before the federal court, the Australian government has admitted that aspects of the scheme were unlawful, and it is reportedly preparing to settle another ongoing class action on the programme.

Jobseeker Surveillance

In recent years, companies have increasingly relied on new workplace surveillance tools to monitor and track employee productivity. Similar tools can also be seen in the jobseeker environment, where incentives and programmes are deployed to facilitate re-entry into the job market for those experiencing periods of unemployment.

For example, in Belgium, the public employment service of Flanders (VDAB) has been accused of building the “Amazon of the labour market.” It has utilised algorithms to assess how people search for jobs on its website and has then used this to provide individuals with recommendations of suitable job opportunities.

The data collected in this process has also been used to follow up with, and in some instances penalise, individuals who are found not to be seeking job opportunities actively enough.

Help Us Complete the Picture

In this blog, we have tried to sketch a picture of what the “digital welfare state” looks like. 

We realise that this picture is incomplete. If you have suggestions of additions or changes you would make to this visualisation, or if you want to talk to us more generally about your work on the “digital welfare state” or how you can get involved in co-creating a litigation strategy with us, please feel free to reach out to us. We would love to hear from you!

Image by Markus Spiske on Unsplash

Towards a Litigation Strategy on the Digital Welfare State

By Jonathan McCully, 23rd April 2020

Following the DFF strategy meeting in February, we held an in-depth consultation on the “digital welfare state.”

During this consultation, representatives from international human rights organisations, welfare charities, academia, and the digital rights field discussed how we might go about defining the “digital welfare state”.

We surveyed what work is already being done on the issue, and what our shared objectives might be for holding governments to account for digital rights violations in the welfare context.

What is the “Digital Welfare State”?

During the consultation, participants were invited to critique a visual representation of the “digital welfare state,” assembled by DFF following conversations with organisations working at the intersection of digital rights and social protection provision.

Many of the participants noted that key aspects of the “digital welfare state” they were working on were reflected in the visualisation. Nonetheless, a number of pertinent observations were made on how to define this emerging concept.

Some participants noted that the term “welfare,” in itself, is context specific and can be a highly politicised term. Other participants noted that the visualisation implied a process of applying for social protection, when some countries proactively or automatically provide individuals with monetary assistance and other services without an individual having to apply for them. These proactive procedures are often fuelled by the processing of citizens’ data that has already been collected by the state in various other contexts.

There were a number of aspects identified as missing from the visualisation. For instance, the visualisation could be adapted to include the use of digital and automated decision-making tools in the context of handling disputes and appeals of welfare decisions.

In the UK, for example, the Child Poverty Action Group has published a report entitled “Computer Says ‘No’”, which highlights the problems experienced by claimants trying to dispute or challenge a decision on their Universal Credit award through online portals. Other participants noted that some services facilitating access to justice, such as free legal advice, could also fall within the definition of the “welfare state.”

Participants also highlighted certain issues that were important to keep in mind when looking at the “digital welfare state.” For instance, migrants, asylum seekers, refugees and stateless persons can face particular difficulties in exercising their rights to social protection and may even be targeted with certain digital tools.

Also, access to the internet is not universal, and welfare recipients in many jurisdictions are simply unable to access online portals to manage their welfare provision or challenge decisions made against them.

Furthermore, many of the digital tools being deployed are designed, built and sometimes even run by private entities. These private entities can hide behind trade secrets and intellectual property, evading the level of accountability we would expect from welfare authorities.

There was broad agreement that some digital tools may genuinely improve access to social protection. However, we must always scrutinise the heightened surveillance and data security concerns that accompany such tools. Where does the data used to build these digital tools come from? Has data collected for welfare purposes been processed securely and lawfully? Does it comply with the principles of data minimisation and purpose limitation? These are the key questions we should ask ourselves when we come across digital systems in the welfare context.

Towards a Shared Vision of “Digital Welfare”

Participants working on a range of “digital welfare” issues, from those supporting welfare claimants in navigating the digital interfaces put in place by welfare authorities to those advocating for data protection and privacy across a range of government services, discussed their shared vision for the “digital welfare state.” A number of goals for this work were identified.

There was convergence around the principle that digital tools used in the welfare context should respect human rights “by design,” safeguarding individuals against violations of their rights to privacy, data protection, non-discrimination, and dignity.

Such systems and tools should be inclusive by default, meaning that the starting point should always be that they are accessible to everyone. It should not be a requirement that you be digitally literate or have access to the internet in order to access social protection. Instead, there should always be accessible offline alternatives to the digital tools. Digital tools should not shift the burden of proving eligibility or need onto individuals, and welfare recipients should have full control over the information they share with welfare authorities.

There was also broad agreement around digital tools needing to be transparent and open to review, either by way of freedom of information requests or by making the tools open source.

Where Next?

The conversations we held in February feed into our work on building a litigation strategy that can help ensure that social welfare policies and practices in the era of new technology respect and protect human rights.

In the coming months, we would like to speak to as many individuals and organisations working on this topic as we possibly can to help us further define the parameters of such a litigation strategy. If you are interested in getting involved, we would welcome your views and input. Get in touch with us!

Image by Antonio Esteo on Pexels