Explainer: What is the “Digital Welfare State”?

By Jonathan McCully, 27th April 2020

Over the past decade, governments across the globe have adopted strategies to transform public services through technology.

One priority area for these strategies has been the “welfare state,” with digital tools being rolled out in this sector with minimal public debate or accountability.

These technologies, many of which are powered by machine learning and automated decision-making systems, turn the needs of the most vulnerable in our society into numbers and variables. Poverty and vulnerability are reduced to problems that can be “solved” or “fixed” by technological innovation.

From the implementation of these strategies has emerged the “digital welfare state”.

At the Digital Freedom Fund, we have been exploring how strategic litigation could be engaged to prevent, remedy and safeguard against human rights violations caused by the “digital welfare state”. This year, DFF will work towards co-developing a litigation strategy on the topic in consultation with relevant organisations and stakeholders. If you would be interested in helping us with this work, we would very much welcome your input and involvement.

In order to ascertain the parameters of such a litigation strategy, it is crucial that we understand more concretely what falls within the “digital welfare state.”

To aid this process, DFF has tried to distil into a visualisation the different components of the concept based on conversations we have had with digital rights organisations, academics, and welfare rights groups. This blog highlights some of the terms and concepts mapped out on the visualisation.

What is the Welfare State?

In order to grasp the implications of the “digital welfare state,” it is useful to take a step back and first define and determine the scope of the “welfare state” itself.

The term “welfare state” is a catch-all, and sometimes contentious, term used to describe policies, programmes and practices that are aimed at providing social protection to individuals. It is a fundamental dimension of modern government, and it is ultimately to the benefit of everyone in society.

Misinformation, myths and misunderstandings around the term have led to it being narrowly applied to describe those aspects of social protection that are politically controversial and least popular (e.g. “handouts,” “dependency,” “doles,” etc.).

In reality, it encompasses a broad range of activities and measures that make up the “social safety net” that allows all individuals to benefit from minimum standards of health, social well-being, and economic security. In the visualisation, a number of these are highlighted:

  • Social Security & Monetary Assistance: This is perhaps the aspect of the “welfare state” that immediately comes to mind, often because it is the area that garners most media and political attention. It is often viewed in a narrow sense as non-contributory means-tested relief (e.g. food stamps, jobseekers’ allowance, disability benefits). However, the “welfare state” also includes other insurance-based or contributory forms of financial assistance (e.g. social security, national insurance, health insurance, pensions), which help protect individuals against the risk of losing earnings by reason of unemployment, poor health, and old age. Some monetary assistance can also support access to education (e.g. student stipends) and access to justice (e.g. legal aid).
  • Health and Social Care: In some countries, health and social care also falls within the umbrella of the “welfare state.” These are publicly funded services aimed at treating those with ill health and medical conditions, as well as providing physical, emotional and social support to the most vulnerable members of our society. It also includes government services aimed at safeguarding vulnerable children and adults.

  • Employment Assistance (or Access to Employment): As well as providing monetary assistance to those looking for employment, some governments also provide additional services aimed at getting unemployed individuals into work. This may be through the provision of training and development courses, or programmes that rely on a combination of job search obligations and assistance. At a macro level, governments can also promote employment through their role in economic governance, shaping markets and promoting growth.

  • Access to Housing: This involves the provision of services facilitating access to housing and shelter to those in need, for example through social housing programmes. These can be services aimed at those who are homeless, but in some jurisdictions the availability of social housing is not necessarily limited to the low-paid or unemployed.

  • Access to Education: The most common type of education-related welfare policy is the public provision of basic education, including through state-subsidised tertiary level education. However, access to education can also be facilitated by other public services, such as public libraries.

The Rise of Digitisation

It is difficult to find any aspect of the welfare state untouched by digital transformation in recent years, from online application forms to algorithms that profile applicants for certain kinds of support.

Governments argue that these digital tools can increase efficiency and transparency, save money for taxpayers, and increase overall well-being.

However, as has been observed by the UN Special Rapporteur on extreme poverty and human rights, Professor Philip Alston, the rise of digitisation has been accompanied by considerable reductions in welfare budgets, the introduction of demanding and intrusive forms of conditionality, the processing of huge quantities of personal and sensitive data, and the obfuscation of critical decision-making processes. 

Furthermore, as Virginia Eubanks has observed in her seminal book on the topic, Automating Inequality, data collection in the welfare context often reinforces the marginality of those accessing public benefits by targeting them for extra scrutiny and suspicion.

This data-driven regime, she argues, is also used to constrict opportunities, demobilise political organising, limit movement, and undermine human rights.

Looking at welfare provision on a more granular level, where are we seeing increased digitisation and the rise of hi-tech tools?

Identity Verification

Establishing a person’s identity is a central part of social protection provision. 

In recent years, there has been a move from paper and/or plastic forms of ID towards digital identity systems. Those in favour of digital IDs highlight a number of benefits, from improving access to welfare services for some individuals to cost savings and greater efficiency.

However, others have raised concerns about the privacy and (cyber)security implications of digital IDs – particularly as governments are pushing for the inclusion of more categories of data, and are utilising more intrusive technologies, when rolling out these systems.

Application and Communications

There is a drive for all aspects of the welfare application process, including interactions and communications between applicants and welfare authorities, to be done through online and digital portals. 

Human contact is increasingly being replaced by chatbots and websites. Following his visit to the UK, Philip Alston observed that the “British welfare state is gradually disappearing behind a webpage and an algorithm, with significant implications for those living in poverty.” 

He highlighted the difficulties this posed for certain individuals, particularly those with limited access to the internet and limited digital literacy. These digital systems can also be confusing, opaque and error-prone, making it difficult to understand how decisions are made and to challenge those that are incorrectly reached.

Furthermore, replacing human contact with digital alternatives dehumanises the process of providing vital support to those most vulnerable and in need. It substitutes human discretion and diligence with inflexible rule-based code and performance metrics.

As has been observed by Lupita Svensson, a senior lecturer at Lund University’s School of Social Work, “the legal text [in Sweden] about financial aid gave social workers a great deal of room to manoeuvre, since the law was saying that you couldn’t generalise. When this law is converted to code, it becomes clear that social work has changed. By converting law to software, the nature of financial aid changes, as you can’t maintain the same individual assessments as before.” She has also noted that many caseworkers view the increased automation as a threat to their own profession and livelihoods as well.

Eligibility and Needs Assessments

Two decision-making processes in social protection provision are increasingly being digitised: eligibility assessments and needs assessments.

The first of these decision-making processes is focused on whether an individual is entitled to social protection in the first place. In Finland, for example, automated systems are used to check whether information provided by an applicant is sufficient, valid and trustworthy, as well as whether other benefits affect or are affected by the benefit applied for.
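
What such an automated check might look like can be sketched in code. The following is a purely illustrative toy, not Finland’s actual system: every field name, plausibility rule and benefit interaction below is invented, simply to show the general rule-based pattern.

```python
# Hypothetical sketch of a rule-based eligibility check. All field names,
# plausibility rules and benefit interactions are invented for illustration;
# they do not describe the Finnish system itself.

REQUIRED_FIELDS = ["national_id", "income_last_month", "household_size"]

# Benefits that, in this toy model, cannot be combined with the one applied for
MUTUALLY_EXCLUSIVE_BENEFITS = {"unemployment_allowance", "student_stipend"}

def check_eligibility(application, current_benefits):
    """Return (eligible, reasons). Purely illustrative logic."""
    reasons = []

    # 1. Sufficiency: is the required information present at all?
    for field in REQUIRED_FIELDS:
        if application.get(field) in (None, ""):
            reasons.append(f"missing field: {field}")

    # 2. Validity: does the information pass basic plausibility checks?
    if application.get("income_last_month", 0) < 0:
        reasons.append("income cannot be negative")

    # 3. Interactions: do existing benefits conflict with the one applied for?
    conflicts = set(current_benefits) & MUTUALLY_EXCLUSIVE_BENEFITS
    if conflicts:
        reasons.append("conflicting benefits: " + ", ".join(sorted(conflicts)))

    return (not reasons, reasons)

print(check_eligibility(
    {"national_id": "010101-123X", "income_last_month": 850, "household_size": 2},
    {"housing_allowance"},
))  # (True, [])
```

Even in this toy version, a single missing or mis-entered field produces an automatic refusal with no human judgement applied, which is precisely where faulty decisions creep in.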

Faulty eligibility decisions caused by a glitch in the system or incorrect data can have a devastating impact on those who need access to essential services. 

In India, the government has rolled out a 12-digit unique identification number, Aadhaar, which is linked to biometric and demographic data. It is reportedly the world’s largest biometric ID system, and it is used to manage access to welfare support such as food rationing.

The system is known to suffer frequently from technical difficulties and errors that prevent people from accessing welfare they are otherwise entitled to and, in some extreme instances, this has resulted in starvation-related deaths. For example, last year, a man in Dumka, Jharkhand passed away after his food rations were stopped because his fingerprint failed to register on the system.

Despite the 2018 judgment of the Indian Supreme Court broadly upholding the constitutionality of Aadhaar in the context of welfare provision, there is ongoing public interest litigation before the Supreme Court challenging the mandatory use of the Aadhaar authentication method when distributing food rations.

The second common type of decision-making process in social protection provision is the needs assessment. This is the process whereby welfare authorities try to determine what kind of support or assistance an individual might need based on their current circumstances. 

Human assessment is increasingly being replaced by inscrutable algorithmic and statistical models that process individuals’ data to assess what their welfare needs might be. These models produce profiles or scores that are then used to determine what assistance or services should be provided to the individual. 

This switch from human to digital assessment has resulted in many arbitrary and unfair outcomes, leaving individuals and families in dangerous and precarious conditions. In Arkansas, for example, many low-income individuals living with disabilities saw drastic cuts to their Medicaid attendant care hours after needs assessments, previously carried out by trained nurses, were handed over to a secret algorithm.
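
The Arkansas algorithm itself was kept secret, but the general shape of such tools, mapping assessment answers to a score and the score to a fixed band of care hours, can be illustrated with a toy model. Every question, weight and hour band below is invented:

```python
# Invented needs-assessment model: answers -> score -> weekly care hours.
# This is NOT the secret Arkansas algorithm; it only illustrates the
# score-and-band structure such tools tend to share.

ASSESSMENT_WEIGHTS = {          # hypothetical questions and weights
    "needs_help_bathing": 3,
    "needs_help_eating": 4,
    "mobility_impaired": 2,
    "lives_alone": 1,
}

HOUR_BANDS = [(0, 8), (3, 16), (6, 24), (9, 32)]  # (minimum score, weekly hours)

def allocate_hours(answers):
    score = sum(w for q, w in ASSESSMENT_WEIGHTS.items() if answers.get(q))
    hours = 0
    for minimum, band_hours in HOUR_BANDS:
        if score >= minimum:
            hours = band_hours
    return hours

# One changed (or mis-recorded) answer drops a person into a lower band,
# cutting their care hours without any individual review.
print(allocate_hours({"needs_help_bathing": True, "needs_help_eating": True}))  # 24
print(allocate_hours({"needs_help_eating": True}))                              # 16
```

The brittleness is visible even here: a single data-entry error on one answer can shift a person down a band, echoing the kind of drastic, unexplained cuts described above.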

Calculation, Payments and Matching

As well as assessing an individual’s needs, computer programs are also relied on to calculate and pay benefits to welfare recipients with little or no human involvement. 

The Spanish Public Employment Service, for example, has reportedly been using an automated system to calculate unemployment benefits. 

In the UK, automated tools have been used to calculate personal budgets for disabled individuals who qualify for direct payments instead of local authorities providing care services to them directly. 

In Sweden, some parental benefits and dental care subsidies are now allocated without any human intervention, while in Denmark, the provision of student stipends is almost entirely automated. A student’s online application is matched with the fact that they have been accepted into a qualifying course for such a stipend, and then the funds are transferred directly to their bank account.
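
The Danish stipend flow described above is, in essence, a record-matching pipeline. A minimal sketch, with invented registries, identifiers and course data, might look like this:

```python
# Minimal sketch of the fully automated stipend flow described above.
# The registries, identifiers and course list are invented for illustration.

QUALIFYING_COURSES = {"BSc Computer Science", "BA Law"}   # hypothetical registry

ENROLMENTS = {                    # hypothetical enrolment registry
    "student-42": "BSc Computer Science",
}

def process_stipend_application(student_id, bank_account):
    course = ENROLMENTS.get(student_id)
    if course is None:
        return "rejected: no matching enrolment found"
    if course not in QUALIFYING_COURSES:
        return f"rejected: {course} is not a qualifying course"
    # In the real process the funds are then transferred automatically;
    # here we simply report the decision.
    return f"approved: stipend paid to {bank_account}"

print(process_stipend_application("student-42", "DK12 3456 7890"))
```

When the match succeeds, no human ever looks at the application; when it fails, the refusal is equally automatic.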

The means by which individuals can actually withdraw or spend their monetary assistance has similarly become more digitised and data driven, with smart debit cards being deployed by governments across the globe. 

Examples of this include the a2i programme in Bangladesh, which provides social assistance through pre-paid debit cards linked to an individual’s biometric data, and the Asylum Support Enablement (ASPEN) card in the UK, which facilitates asylum seekers’ access to monetary assistance while they await a decision on their applications.

Governments deploy these cards in partnership with private companies, who offer products and services to monitor and surveil welfare recipients.

In the UK, for example, the ASPEN card was used to track the whereabouts of asylum seekers and penalise them for venturing out of their “authorised” cities. 

In Maine, data that had been collected from electronic benefits transfer (EBT) cards, showing that money had been withdrawn in liquor stores and smoke shops, was used by the Governor to paint a picture that welfare recipients were defrauding taxpayers by purchasing liquor, lottery tickets and cigarettes. This fuelled reforms that placed tighter restrictions around cash withdrawals, despite the fact that the data labelled “suspicious” by the Governor only represented 0.03% of all cash withdrawals.

Algorithms have also been relied on to match individuals to available welfare resources, services or assistance. 

One controversial example is the matching algorithms that have been used to allocate housing opportunities, and other available homeless services, on the basis of how vulnerable an individual is ranked among the homeless population. These systems have required homeless people to give up intimate details of their private lives, with some recounting that they feel like they are “giving up their human right to privacy in return for their human right to housing.”

Fraud Detection and Risk Models

The rise of predictive algorithms and sophisticated risk models has prompted governments to adopt automated fraud prevention and detection tools in the welfare context. 

A high-profile example has been the SyRI system in the Netherlands. This involved the application of a “black box” risk calculation model to vast quantities of personal data, merged from various government bodies, for the purpose of preventing and combatting fraud in areas such as social security, tax and labour law.

The risk profiles that were generated by the secret model were used to identify those deemed a higher risk of committing such fraud. As well as intruding upon welfare recipients’ private lives, the SyRI system had been shown to be consistently rolled out in poorer and more vulnerable areas. This resulted in certain communities being further stigmatised, stereotyped and subjected to increased scrutiny.
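
SyRI’s actual model was never disclosed, and that secrecy was central to the litigation discussed below. Purely to illustrate the merge-and-threshold pattern at issue, linking records held by different government bodies and flagging anyone whose combined profile crosses a threshold, here is a toy sketch in which every database, indicator and weight is invented:

```python
# Invented illustration of a data-matching risk indicator. SyRI's real model
# was secret; this only shows the merge-and-threshold pattern, with made-up
# databases, indicators and weights.

tax_records = {"citizen-1": {"declared_income": 12000}}
benefit_records = {"citizen-1": {"receives_housing_benefit": True}}
utility_records = {"citizen-1": {"low_water_usage": True}}   # proxy for occupancy

RISK_WEIGHTS = {"low_income": 0.3, "housing_benefit": 0.2, "low_water_usage": 0.4}
THRESHOLD = 0.7

def risk_score(citizen_id):
    score = 0.0
    if tax_records.get(citizen_id, {}).get("declared_income", float("inf")) < 15000:
        score += RISK_WEIGHTS["low_income"]
    if benefit_records.get(citizen_id, {}).get("receives_housing_benefit"):
        score += RISK_WEIGHTS["housing_benefit"]
    if utility_records.get(citizen_id, {}).get("low_water_usage"):
        score += RISK_WEIGHTS["low_water_usage"]
    return score

flagged = [cid for cid in tax_records if risk_score(cid) >= THRESHOLD]
print(flagged)  # ['citizen-1'] -- flagged without any observed behaviour
```

Every input here is a proxy drawn from data collected for entirely different purposes, which is exactly why the merging of government databases proved so controversial.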

Earlier this year, the District Court of The Hague struck down the legal basis for SyRI on human rights grounds.

Risk models have also been relied on to identify “problem” families for attention from child services. Last year, it was reported that local councils in the UK had processed the personal data of hundreds of thousands of people to construct computer models in an effort to predict child abuse and intervene before it happens.

These systems have been heavily criticised for relying on highly subjective proxies, such as assessments made by caseworkers or the courts, to measure whether the system is accurately predicting child maltreatment. 

Furthermore, they target individuals for extra scrutiny not because of their behaviour but because they live in poverty. Virginia Eubanks refers to this phenomenon as “poverty profiling.” This is caused, in part, by the fact that child welfare services act both as the provider of family support and as the investigator of maltreatment.

Even though these services are not always means-tested, middle-class families have the option of avoiding the additional surveillance and data gathering of the child welfare services by accessing private sources for family support.

Debt Recovery

Another trend in automation that has had a significant impact on welfare recipients is the use of “robo-debt” collection techniques. These systems apply data matching and algorithmic processes to claw back welfare debts from people flagged as having been overpaid by the government. Some of these “overpayments” are “zombie debts” that stretch back decades.

The Australian “robo-debt” scandal exposed the fragility and harm associated with these systems. The automated tool, referred to as the online compliance intervention (OCI) debt recovery system, effectively shifted the onus onto vulnerable welfare recipients to prove that they did not, in fact, owe a debt to the government. A flawed process for calculating debts saw thousands of individuals receiving incorrect debt notices.
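
The core of the miscalculation is well documented: the OCI system averaged a person’s annual tax office income evenly across fortnights and compared that average with the income they had actually reported each fortnight, producing phantom “overpayments” for anyone whose earnings were uneven across the year. A simplified reconstruction, with invented figures and a toy benefit rule, shows the effect:

```python
# Simplified reconstruction of the "income averaging" flaw in the OCI system.
# The benefit rule and figures are invented; the averaging logic reflects how
# the flaw was publicly described.

FORTNIGHTS = 26
ANNUAL_TAX_OFFICE_INCOME = 26000   # all earned in the first half of the year

# What the person actually reported: 13 fortnights of work, then 13 of none
actual_income = [2000] * 13 + [0] * 13

def benefit_payable(fortnight_income):
    """Toy rule: full payment of 500 if fortnightly income is below 1000."""
    return 500.0 if fortnight_income < 1000 else 0.0

# Correct assessment: benefit is due only in the 13 fortnights with no income
correct = sum(benefit_payable(i) for i in actual_income)              # 6500.0

# OCI-style assessment: smear the annual figure evenly over every fortnight
averaged = ANNUAL_TAX_OFFICE_INCOME / FORTNIGHTS                      # 1000.0
oci_view = sum(benefit_payable(averaged) for _ in range(FORTNIGHTS))  # 0.0

print(correct - oci_view)  # 6500.0 -- a phantom "debt" for correctly paid benefits
```

A person who reported their income accurately and was paid correctly is nonetheless treated as owing thousands, and under the OCI scheme it was up to them to disprove it.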

Furthermore, out of date and inaccurate data saw letters being sent to welfare recipients’ old addresses, and these individuals were then penalised for their failure to respond to the correspondence. 

These errors and failures in the system caused significant stress, anxiety and depression among many vulnerable individuals. In litigation before the Federal Court, the Australian government has admitted that aspects of the scheme were unlawful, and it is reportedly preparing to settle an ongoing class action over the programme.

Jobseeker Surveillance

We have seen in recent years that companies are increasingly relying on new methods for monitoring and tracking employee productivity as new workplace surveillance tools become available. Similar tools can also be seen in the jobseeker environment, where incentives and programmes are deployed to facilitate re-entry into the job market for those experiencing periods of unemployment. 

For example, in Belgium, the public employment service of Flanders (VDAB) has been accused of building the “Amazon of the labour market.” It has used algorithms to assess how people search for jobs on its website, and has then used this analysis to recommend suitable job opportunities to individuals.

The data collected in this process has also been used to follow up with, and in some instances penalise, individuals who are found not to be seeking job opportunities actively enough.
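
VDAB’s actual criteria and data model are not public, so the following is a purely hypothetical sketch of what activity-based follow-up of this kind could look like, with an invented threshold and invented log data:

```python
# Hypothetical sketch of activity-based jobseeker monitoring. VDAB's actual
# criteria and data are not public; the threshold and fields are invented.

from datetime import date, timedelta

MIN_APPLICATIONS_PER_30_DAYS = 4   # invented activity threshold

application_logs = {   # jobseeker id -> dates on which they applied to vacancies
    "seeker-7": [date(2020, 3, 2), date(2020, 3, 15)],
}

def flag_for_follow_up(seeker_id, today):
    window_start = today - timedelta(days=30)
    recent = [d for d in application_logs.get(seeker_id, []) if d >= window_start]
    return len(recent) < MIN_APPLICATIONS_PER_30_DAYS

# True -> flagged for follow-up, and potentially a sanction
print(flag_for_follow_up("seeker-7", date(2020, 3, 31)))
```

Whether two applications a month amounts to not searching “actively enough” is exactly the kind of judgement such a threshold quietly hard-codes.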

Help Us Complete the Picture

In this blog, we have tried to sketch a picture of what the “digital welfare state” looks like. 

We realise that this picture is incomplete. If you have suggestions of additions or changes you would make to this visualisation, or if you want to talk to us more generally about your work on the “digital welfare state” or how you can get involved in co-creating a litigation strategy with us, please feel free to reach out to us. We would love to hear from you!

Towards a Litigation Strategy on the Digital Welfare State

By Jonathan McCully, 23rd April 2020

Following the DFF strategy meeting in February, we held an in-depth consultation on the “digital welfare state.”

During this consultation, representatives from international human rights organisations, welfare charities, academia, and the digital rights field discussed how we might go about defining the “digital welfare state”.

We surveyed what work is already being done on the issue, and what our shared objectives might be for holding governments to account for digital rights violations in the welfare context.

What is the “Digital Welfare State”?

During the consultation, participants were invited to critique a visual representation of the “digital welfare state,” assembled by DFF following conversations with organisations working at the intersection of digital rights and social protection provision.

Many of the participants noted that key aspects of the “digital welfare state” they were working on were reflected in the visualisation. Nonetheless, a number of pertinent observations were made on how to define this emerging concept.

Some participants noted that the term “welfare,” in itself, is context specific and can be a highly politicised term. Other participants noted that the visualisation implied a process of applying for social protection, when some countries proactively or automatically provide individuals with monetary assistance and other services without an individual having to apply for them. These proactive procedures are often fuelled by the processing of citizens’ data that has already been collected by the state in various other contexts.

There were a number of aspects identified as missing from the visualisation. For instance, the visualisation could be adapted to include the use of digital and automated decision-making tools in the context of handling disputes and appeals of welfare decisions.

In the UK, for example, the Child Poverty Action Group has published a report entitled “Computer Says ‘No’”, which highlights the problems experienced by claimants trying to dispute or challenge a decision on their Universal Credit award through online portals. Other participants noted that some services facilitating access to justice, such as free legal advice, could also fall within the definition of the “welfare state.”

Participants also highlighted certain issues that were important to keep in mind when looking at the “digital welfare state.” For instance, migrants, asylum seekers, refugees and stateless persons can face particular difficulties in exercising their rights to social protection and may even be targeted with certain digital tools.

Also, access to the internet is not universal, and welfare recipients in many jurisdictions are simply unable to access online portals to manage their welfare provision or challenge decisions made against them.

Furthermore, many of the digital tools being deployed are designed, built and sometimes even run by private entities. These private entities can hide behind trade secrets and intellectual property, evading the level of accountability we would expect from welfare authorities.

There was broad agreement that some digital tools may genuinely improve access to social protection. However, we must always scrutinise the heightened surveillance and data security concerns that accompany such tools. Where does the data used to build these digital tools come from? Has data collected for welfare purposes been processed securely and lawfully? Does it comply with the principles of data minimisation and purpose limitation? These are the key questions we should ask ourselves when we come across digital systems in the welfare context.

Towards a Shared Vision of “Digital Welfare”

Participants working on a range of “digital welfare” issues, from those supporting welfare claimants in navigating the digital interfaces put in place by welfare authorities to those who are advocating for data protection and privacy across a range of government services, discussed what their shared vision was when it came to the “digital welfare state.” A number of goals for this work were identified.

There was convergence around the principle that digital tools used in the welfare context should respect human rights “by design,” safeguarding individuals against violations of their rights to privacy, data protection, non-discrimination, and dignity.

Such systems and tools should be inclusive by default, meaning that the starting point should always be that they are accessible to everyone. It should not be a requirement that you be digitally literate or have access to the internet in order to access social protection. Instead, there should always be accessible offline alternatives to the digital tools. Digital tools should not shift the burden of proving eligibility or need onto individuals, and welfare recipients should have full control over the information they share with welfare authorities.

There was also broad agreement around digital tools needing to be transparent and open to review, either by way of freedom of information requests or by making the tools open source.

Where Next?

The conversations we held in February feed into our work of building a litigation strategy that can help ensure that social welfare policies and practices in the era of new technology respect and protect human rights.

In the coming months, we would like to speak to as many individuals and organisations working on this topic as we possibly can to help us further define the parameters of such a litigation strategy. If you are interested in getting involved, we would welcome your views and input. Get in touch with us!

The SyRI Victory: Holding Profiling Practices to Account

By Tijmen Wisman, 23rd April 2020

This article was co-authored by Merel Hendrickx and Tijmen Wisman.

Profiling is a widespread practice in government agencies. It is difficult to reject this sort of practice outright, since much depends on the duties of the agency, the activities they are carrying out, and the safeguards they have put in place. In some circumstances, these practices may even be pursuing legitimate purposes.

Nonetheless, profiling practices are becoming more concerning as both technological potential and data availability expand.

With this comes greater power, and with this concentration of power comes a greater need for clear delimitations and more accountability to protect citizens against arbitrary use of their data.

The judgment in the Dutch SyRI case is one of the first steps towards securing this increased legal protection for citizens in the face of ongoing technological developments.

SyRI, or System Risk Indication, is a risk profiling method employed by the Dutch government to detect individual risks of welfare, tax and other types of fraud. The law authorising SyRI was passed by the Parliament and Senate in 2014 without a single politician voting against it. This was despite significant objections from the Dutch Data Protection Authority and the Council of State, both of which considered that the purposes for which SyRI could be deployed, and the data that could be used in the system, were likely to expand government powers and maximise executive discretion.

SyRI thus eroded the relationship of trust between citizens and government, because almost all data given to government in a wide variety of relationships could end up in this system and it was impossible for a citizen to find out whether his or her data was actually used.

The SyRI system offered immense informational power that could be deployed by the Dutch state against ordinary citizens by bringing databases of different executive bodies together and effectively building dossiers on citizens. Moreover, in practice, SyRI was primarily deployed in poor neighbourhoods. This meant that all the people in these communities were targeted for an invasive search of their personal data through digital means.

The SyRI case was taken jointly by the Platform for the Protection of Civil Rights (Platform Bescherming Burgerrechten), the Public Interest Litigation Project of the Dutch Section of the International Commission of Jurists (PILP-NJCM), and other civil society organisations with a shared interest in setting a legal precedent for the protection of citizens against risk profiling and predictive policing practices. Two famous Dutch writers who were critical of SyRI were also asked to join as complainants in the proceedings, and the Platform for the Protection of Civil Rights launched a publicity campaign on SyRI and the case.

Due to the publicity campaign, and the appearance of one of the complainants in a popular talk show, the largest Dutch trade union FNV became aware of the case and joined the coalition in July 2018. This created two opportunities. First, we had an extra round to strengthen our arguments in the proceedings, where our lawyers focused their efforts on the GDPR and the relevant provisions on automated decision-making in particular. Second, access to the FNV network gave us the opportunity to be in direct contact with the people who were subjected to the SyRI system and those representing them.

The collaboration between the Platform, PILP-NJCM and the FNV around SyRI in Rotterdam turned out to be a great success. The FNV was new to the subject of digital rights, but the union did have a strong network of active union members and volunteers in Rotterdam. With their help (flyer campaigns, a neighbourhood meeting, information leaflets and posters), our knowledge of SyRI could be used effectively to generate critical attention in both the media and local politics.

Eventually, the Mayor of Rotterdam stopped the use of the SyRI system by the local authorities in July 2019. Looking back, the first “battle” in the fight against SyRI was won in Rotterdam, and it definitely set the tone for the future coverage of SyRI in the media.

The involvement of the FNV, and of the Rotterdam neighbourhood targeted by SyRI, showed how risk profiling instruments are being used against the poorest in society. Philip Alston, the United Nations Special Rapporteur on extreme poverty and human rights (UNSR), became aware of the SyRI case during DFF’s strategy meeting in February 2019. He submitted a highly critical amicus curiae brief to the court, warning that many societies are “stumbling zombie-like into a digital welfare dystopia”. The involvement of the UNSR increased public debate on SyRI and the case. It was all over the news.

This was followed by a landslide victory in court. The court held that technological advancements meant that the right to the protection of personal data had also gained in significance, echoing the adage of the European Court of Human Rights that the development of technology makes it essential to formulate clear and detailed data protection rules. The rules governing the deployment of SyRI were anything but detailed, effectively giving public authorities a wide margin of discretion to browse through the private lives of citizens in a “black box” setting.

The court held that the secrecy of the risk models made it impossible for citizens to “defend themselves against the fact that a risk report has been submitted about him or her”. Even in cases where no risk reports are produced, the court held, citizens should be able to check whether their data were processed on correct grounds. This ability of citizens to defend themselves is one of the hallmarks of the rule of law, which is confirmed in this case and has significant implications for data management within public authorities.

Shortly after the verdict, public authorities in the Netherlands announced that they would critically revise their own fraud systems. The Dutch Employment Agency is reviewing its internal fraud systems in response to the judgment. The Dutch Tax Service ceased operating its fraud detection system after facing investigations by the Dutch Data Protection Authority. Furthermore, many municipal councils have started to question the legality of local systems comparable to SyRI.

In short, the SyRI case has set a timely precedent in which risk profiling practices are finally held to account. The State Secretary for Social Affairs and Employment, Tamara van Ark, indicated on 23rd April that she has decided not to appeal against the judgment. She did, however, state her intention to further explore the use of risk models within social security.

This underlines the need to keep in mind that these changes do not take place automatically but require continuous and concerted efforts from all of us. In that light, the work of NGOs in these turbulent times is of the utmost importance, and the support of DFF, in funding and in providing a network of professionals to cooperate with, is indispensable.

In the last few months, the SyRI case has garnered widespread attention at both national and international level. Thanks to our publicity campaign, we were able to inform the public about the importance of this case and the issues adjudicated upon. Risk profiling is no longer a niche subject that only raises the eyebrows of a small group of privacy lawyers and techies. In times of a pandemic, when even constitutional democracies might consider risk profiling practices as one of the ways out, this broader understanding of the implications of risk profiling is much needed.

We can conclude that there is now a broader public debate on risk profiling, algorithms and the use of systems like SyRI. Citizens are starting to understand that the way their data is governed is essential to the relationship between them and the state. We consider this one of the biggest victories of the case, because people caring about their relationship with the state is a first step towards improving it.

Tijmen Wisman is Chairman of the Platform for the Protection of Civil Rights and Assistant Professor of privacy law at the Vrije Universiteit Amsterdam.

Merel Hendrickx is an in-house human rights lawyer with the Public Interest Litigation Project of the Dutch Section of the International Commission of Jurists (PILP-NJCM).