Towards a Litigation Strategy on the Digital Welfare State

By Jonathan McCully, 23rd April 2020


Following the DFF strategy meeting in February, we held an in-depth consultation on the “digital welfare state.”

During this consultation, representatives from international human rights organisations, welfare charities, academia, and the digital rights field discussed how we might go about defining the “digital welfare state”.

We surveyed what work is already being done on the issue, and what our shared objectives might be for holding governments to account for digital rights violations in the welfare context.

What is the “Digital Welfare State”?

During the consultation, participants were invited to critique a visual representation of the “digital welfare state,” assembled by DFF following conversations with organisations working at the intersection of digital rights and social protection provision.

Many of the participants noted that key aspects of the “digital welfare state” they were working on were reflected in the visualisation. Nonetheless, a number of pertinent observations were made on how to define this emerging concept.

Some participants noted that the term “welfare,” in itself, is context-specific and can be a highly politicised term. Other participants noted that the visualisation implied a process of applying for social protection, when some countries proactively or automatically provide individuals with monetary assistance and other services without an individual having to apply for them. These proactive procedures are often fuelled by the processing of citizens’ data that the state has already collected in various other contexts.

…the term “welfare,” in itself, is context-specific and can be a highly politicised term

A number of aspects were identified as missing from the visualisation. For instance, the visualisation could be adapted to include the use of digital and automated decision-making tools in the context of handling disputes and appeals of welfare decisions.

In the UK, for example, the Child Poverty Action Group has published a report entitled “Computer Says ‘No’”, which highlights the problems experienced by claimants trying to dispute or challenge a decision on their Universal Credit award through online portals. Other participants noted that some services facilitating access to justice, such as free legal advice, could also fall within the definition of the “welfare state.”

Participants also highlighted certain issues that were important to keep in mind when looking at the “digital welfare state.” For instance, migrants, asylum seekers, refugees and stateless persons can face particular difficulties in exercising their rights to social protection and may even be targeted with certain digital tools.

Migrants, asylum seekers, refugees and stateless persons can face particular difficulties in exercising their rights to social protection

Also, access to the internet is not universal, and welfare recipients in many jurisdictions are simply unable to access online portals to manage their welfare provision or challenge decisions made against them.

Furthermore, many of the digital tools being deployed are designed, built and sometimes even run by private entities. These private entities can hide behind trade secrets and intellectual property protections, evading the level of accountability we would expect from welfare authorities.

There was broad agreement that some digital tools may genuinely improve access to social protection. However, we must always scrutinise the heightened surveillance and data security concerns that accompany such tools. Where does the data used to build these digital tools come from? Has data collected for welfare purposes been processed securely and lawfully? Does it comply with the principles of data minimisation and purpose limitation? These are the key questions we should ask ourselves when we come across digital systems in the welfare context.

We must always scrutinise the heightened surveillance and data security concerns that accompany such tools

Towards a Shared Vision of “Digital Welfare”

Participants working on a range of “digital welfare” issues discussed their shared vision for the “digital welfare state.” Some support welfare claimants in navigating the digital interfaces put in place by welfare authorities; others advocate for data protection and privacy across a range of government services. Together, they identified a number of goals for this work.

There was convergence around the principle that digital tools used in the welfare context should respect human rights “by design,” safeguarding individuals against violations of their rights to privacy, data protection, non-discrimination, and dignity.

Such systems and tools should be inclusive by default, meaning that the starting point should always be that they are accessible to everyone. It should not be a requirement that you be digitally literate or have access to the internet in order to access social protection. Instead, there should always be accessible offline alternatives to the digital tools. Digital tools should not shift the burden of proving eligibility or need onto individuals, and welfare recipients should have full control over the information they share with welfare authorities.

Digital tools should not shift the burden of proving eligibility or need onto individuals

There was also broad agreement around digital tools needing to be transparent and open to review, either by way of freedom of information requests or by making the tools open source.

Where Next?

The conversations we held in February feed into our work of building a litigation strategy that can help ensure that social welfare policies and practices in the era of new technology respect and protect human rights.

In the coming months, we would like to speak to as many individuals and organisations working on this topic as we possibly can to help us further define the parameters of such a litigation strategy. If you are interested in getting involved, we would welcome your views and input. Get in touch with us!

Image by Antonio Esteo on Pexels

The SyRI Victory: Holding Profiling Practices to Account

By Tijmen Wisman, 23rd April 2020


This article was co-authored by Merel Hendrickx and Tijmen Wisman.

Profiling is a widespread practice in government agencies. It is difficult to reject this sort of practice outright, since much depends on the duties of the agency, the activities they are carrying out, and the safeguards they have put in place. In some circumstances, these practices may even be pursuing legitimate purposes.

Nonetheless, profiling practices are becoming more concerning as both technological potential and data availability expand.

With this expansion comes greater power, and with that concentration of power comes a greater need for clear limits and stronger accountability to protect citizens against the arbitrary use of their data.

The judgment in the Dutch SyRI case is one of the first steps towards securing this increased legal protection for citizens in the face of ongoing technological developments.

SyRI, or System Risk Indication, is a risk profiling method employed by the Dutch government to detect individual risks of welfare, tax and other types of fraud. The law authorising SyRI was passed by the Parliament and Senate in 2014 without a single politician voting against it. This was despite significant objections from the Dutch Data Protection Authority and the Council of State, both of which considered that the purposes for which SyRI could be deployed, and the data that could be used in the system, were likely to expand government powers and maximise executive discretion.

SyRI thus eroded the relationship of trust between citizens and government, because almost all data given to the government in a wide variety of relationships could end up in this system, and it was impossible for a citizen to find out whether his or her data was actually used.

SyRI thus eroded the relationship of trust between citizens and government

The SyRI system offered immense informational power that could be deployed by the Dutch state against ordinary citizens by bringing databases of different executive bodies together and effectively building dossiers on citizens. Moreover, in practice, SyRI was primarily deployed in poor neighbourhoods. This meant that all the people in these communities were targeted for an invasive search of their personal data through digital means.

The SyRI case was taken jointly by the Platform for the Protection of Civil Rights (Platform Bescherming Burgerrechten), the Public Interest Litigation Project of the Dutch Section of the International Commission of Jurists (PILP-NJCM), and other civil society organisations with a shared interest in setting a legal precedent for the protection of citizens against risk profiling and predictive policing practices. Two famous Dutch writers who were critical of SyRI were also asked to join as complainants in the procedure, and the Platform for the Protection of Civil Rights launched a publicity campaign on SyRI and the case.

Due to the publicity campaign, and the appearance of one of the complainants on a popular talk show, the largest Dutch trade union, the FNV, became aware of the case and joined the coalition in July 2018. This created two opportunities. First, we had an extra round in the proceedings to strengthen our arguments, where our lawyers focused their efforts on the GDPR, and the relevant provisions on automated decision-making in particular. Second, access to the FNV network gave us the opportunity to be in direct contact with the people who were subjected to the SyRI system and those representing them.

The SyRI system offered immense informational power that could be deployed by the Dutch state against ordinary citizens

The collaboration between the Platform, PILP-NJCM and the FNV around SyRI in Rotterdam turned out to be a great success. The FNV was new to the subject of digital rights, but the union did have a strong network of active union members and volunteers in Rotterdam. With their help (flyer campaigns, a neighbourhood meeting, information leaflets and posters), our knowledge of SyRI could be used effectively to generate critical attention in both the media and local politics.

Eventually, the Mayor of Rotterdam stopped the use of the SyRI system by the local authorities in July 2019. Looking back, the first “battle” in the fight against SyRI was won in Rotterdam, and it definitely set the tone for the future coverage of SyRI in the media.

The involvement of the FNV, and of the Rotterdam neighbourhood targeted by SyRI, showed how risk profiling instruments are being used against the poorest in society. Philip Alston, the United Nations Special Rapporteur on extreme poverty and human rights (UNSR), became aware of the SyRI case during DFF’s strategy meeting in February 2019. He wrote a very critical amicus curiae brief to the court, warning that many societies are “stumbling zombie-like into a digital welfare dystopia”. The involvement of the UNSR increased public debate on SyRI and the case. It was all over the news.

He warned that many societies are “stumbling zombie-like into a digital welfare dystopia”

This was followed by a landslide victory in court. The court held that technological advancements meant that the right to protection of personal data had gained in significance, echoing the adage of the European Court of Human Rights that the development of technology makes it essential to formulate clear and detailed data protection rules. The rules governing the deployment of SyRI were anything but clear and detailed, effectively granting public authorities a wide margin of discretion to browse through the private lives of citizens in a “black box” setting.

The court held that the secrecy of the risk models made it impossible for citizens to “defend themselves against the fact that a risk report has been submitted about him or her”. Even in cases where no risk reports are produced, the court held, citizens should be able to check whether their data were processed on correct grounds. This ability of citizens to defend themselves is one of the hallmarks of the rule of law, which this case confirms, and it has significant implications for data management within public authorities.

Shortly after the verdict, public authorities in the Netherlands announced that they would critically revise their own fraud systems. The Dutch Employment Agency is reviewing its internal fraud systems in response to the judgment. The Dutch Tax Service ceased operating its fraud detection system after facing investigations by the Dutch Data Protection Authority. Furthermore, many municipal councils have started to question the legality of local systems comparable to SyRI.

In short, the SyRI case has set a timely precedent in which risk profiling practices are finally held to account

In short, the SyRI case has set a timely precedent in which risk profiling practices are finally held to account. The State Secretary for Social Affairs and Employment, Tamara van Ark, indicated on 23rd April that she has decided not to appeal against the judgment. She did, however, state her intention to further explore the use of risk models within social security.

This underlines the need to keep in mind that these changes do not happen automatically but require continuous and concerted efforts from all of us. In that light, the work of NGOs in these turbulent times is of the utmost importance, and DFF’s support, both in funding and in providing a network of professionals to cooperate with, is indispensable.

In the last few months, the SyRI case has garnered widespread attention at both national and international level. Thanks to our publicity campaign, we were able to inform the public about the importance of this case and the issues adjudicated upon. Risk profiling is no longer a niche subject that raises the eyebrows of only a small group of privacy lawyers and techies. At a time of pandemic, when even constitutional democracies might consider risk profiling practices as one of the ways out, this broader understanding of the implications of risk profiling is much needed.

We can conclude that there is now broader public debate on risk profiling, algorithms and the use of systems like SyRI. Citizens are starting to understand that the way their data is governed is essential to the relationship between them and the state. We consider this one of the biggest victories of this case, because people caring about their relationship with the state is a first step towards improving it.

Tijmen Wisman is Chairman of the Platform for the Protection of Civil Rights and Assistant Professor of privacy law at the Vrije Universiteit Amsterdam.

Merel Hendrickx is an in-house human rights lawyer with the Public Interest Litigation Project of the Dutch Section of the International Commission of Jurists (PILP-NJCM).

Why COVID-19 is a Crisis for Digital Rights

By Nani Jansen Reventlow, 16th April 2020


The COVID-19 pandemic has triggered an equally urgent digital rights crisis.

New measures being hurried in to curb the spread of the virus, from “biosurveillance” and online tracking to censorship, are potentially as world-changing as the disease itself. These changes aren’t necessarily temporary, either: once in place, many of them can’t be undone.

That’s why activists, civil society and the courts must carefully scrutinise questionable new measures, and make sure that – even amid a global panic – states are complying with international human rights law.

Human rights watchdog Amnesty International recently commented that human rights restrictions are spreading almost as quickly as coronavirus itself. Indeed, the fast-paced nature of the pandemic response has empowered governments to rush through new policies with little to no legal oversight.

There has already been a widespread absence of transparency and regulation when it comes to the rollout of these emergency measures, with many falling far short of international human rights standards.

Tensions between protecting public health and upholding people’s basic rights and liberties are rising. While it is of course necessary to put in place safeguards to slow the spread of the virus, it’s absolutely vital that these measures are balanced and proportionate.

Unfortunately, this isn’t always proving to be the case.

The Rise of Biosurveillance

A panopticon world on a scale never seen before is quickly materialising.

“Biosurveillance”, which involves the tracking of people’s movements, communications and health data, has already become a buzzword used to describe certain worrying measures being deployed to contain the virus.

A panopticon world on a scale never seen before is quickly materialising

The means by which states, often aided by private companies, are monitoring their citizens are increasingly extensive: phone data, CCTV footage, temperature checkpoints, airline and railway bookings, credit card information, online shopping records, social media data, facial recognition, and sometimes even drones.

Private companies are exploiting the situation and offering rights-abusing products to states, purportedly to help them manage the impact of the pandemic. One Israeli spyware firm has developed a product it claims can track the spread of coronavirus by analysing two weeks’ worth of data from people’s personal phones, and subsequently matching it up with data about citizens’ movements obtained from national phone companies.

In some instances, citizens can also track each other’s movements, leading not only to vertical, but also horizontal, sharing of sensitive medical data.

Not only are many of these measures unnecessary and disproportionately intrusive, they also give rise to secondary questions, such as: how secure is our data? How long will it be kept for? Is there transparency around how it is obtained and processed? Is it being shared or repurposed, and if so, with whom?

Censorship and Misinformation

Censorship is becoming rife, with many arguing that a “censorship pandemic” is surging in step with COVID-19.

Oppressive regimes are rapidly adopting “fake news” laws. This is ostensibly to curb the spread of misinformation about the virus, but in practice, this legislation is often used to crack down on dissenting voices or otherwise suppress free speech. In Cambodia, for example, there have already been at least 17 arrests of people for sharing information about coronavirus.

Oppressive regimes are rapidly adopting “fake news” laws

At the same time, many states have themselves been accused of feeding disinformation to their citizens to create confusion, or have arrested those who express criticism of the government’s response.

As well as this, some states have restricted free access to information on the virus, either by blocking access to health apps, or cutting off access to the internet altogether.


AI, Inequality and Control

The deployment of AI can have consequences for human rights at the best of times, but now, it’s regularly being adopted with minimal oversight and regulation.

AI and other automated learning technologies are the foundation of many surveillance and social control tools. Because of the pandemic, they are increasingly relied upon to fight misinformation online and to process the huge increase in applications for emergency social protection, which are, naturally, more urgent than ever.

Prior to the COVID-19 outbreak, the digital rights field had consistently warned about the human rights implications of these inscrutable “black boxes”, including their biased and discriminatory effects. The adoption of such technologies without proper oversight or consultation should be resisted and challenged through the courts, not least because of their potential to exacerbate the inequalities already experienced by those hardest hit by the pandemic.

Eroding Human Rights

Many of the human rights-violating measures adopted to date have been taken outside the framework of proper derogations from applicable human rights instruments, which would ensure that emergency measures are temporary, limited and supervised.

Legislation is being adopted by decree, without clear time limitations

Legislation is being adopted by decree, without clear time limitations, and technology is being deployed in a context where clear rules and regulations are absent.

This is of great concern for two main reasons.

First, this type of “legislating through the back door”, introducing measures that are not necessarily temporary, bypasses a proper democratic process of oversight and checks and balances, resulting in de facto authoritarian rule.

Second, if left unchecked and unchallenged, this could set a highly dangerous precedent for the future. This is the first pandemic we are experiencing at this scale – we are currently writing the playbook for global crises to come.

If it becomes clear that governments can use a global health emergency to institute human rights-infringing measures without being challenged, and without having to reverse those measures, making them permanent instead of temporary, we will essentially be handing authoritarian regimes a blank cheque to wait until the next pandemic and impose whatever measures they want.

We are currently writing the playbook for global crises to come

Therefore, any and all measures that are not strictly necessary, sufficiently narrow in scope, and of a clearly defined temporary nature need to be challenged as a matter of urgency. If they are not, we will be unable to push back against an otherwise certain path towards a dystopian surveillance state.

Litigation: New Ways to Engage

In tandem with advocacy and policy efforts, we will need strategic litigation to challenge the most egregious measures through the court system. Going through the legislature alone will be too slow and, with public gatherings banned, public demonstrations will not be possible at scale.

The courts will need to adapt to the current situation – and are in the process of doing so – by offering new ways for litigants to engage. Courts are still hearing urgent matters, and questions concerning fundamental rights and our democratic system will fall within that remit. This has already been demonstrated by the first cases requesting oversight of government surveillance in response to the pandemic.

These issues have never been more pressing, and it’s abundantly clear that action must be taken.

At DFF, we’re here to help. If you have an idea for a case or for litigation, please apply for a grant now.

Images by Adam Niescioruk on Unsplash and I. Friman on Wikipedia