Fighting for Algorithmic Transparency in Spain

By David Cabo, 17th February 2020

Civio is an independent, non-profit journalism organisation based in Madrid. We monitor public authorities, report to citizens, and promote true and effective transparency within public institutions.

After years of fighting for access to information and reporting to a general audience through our journalism, we have recently reached a new conclusion: requesting, accessing and explaining data is no longer enough to oversee the public realm.

The use of secret algorithms by Spanish public institutions made us take a step further. One example is BOSCO, software created by the Spanish Ministry for the Ecological Transition to decide who is entitled to the so-called Bono Social de Electricidad – a discount on energy bills for at-risk citizens.

At first, Civio teamed up with the National Commission on Markets and Competition (CNMC) to create an app that eased the process of applying for this discount, since the complexity of the procedure and a lack of information were preventing disadvantaged groups from applying.

Fighting for Transparency

After dozens of calls from wrongly dismissed applicants, our team requested information about BOSCO and its source code. In the documents shared with us, Civio found that the software was turning down eligible applications. Unfortunately, we got no reply from the administration regarding BOSCO’s code. The government, and subsequently the Council of Transparency and Good Governance, denied Civio access to the code, arguing that sharing it would result in a copyright violation. However, according to the Spanish Transparency Law and intellectual property legislation, work carried out by public administrations is not entitled to copyright protection.

Civio believes it is necessary to make this kind of code public for a simple reason – as our lawyer and trustee Javier de la Cueva puts it, “being ruled through a secret source code or algorithm should never be allowed in a democratic country under the rule of law”.

Software systems like BOSCO behave, in practice, like laws, and we therefore believe they should be public. Only then will civil society have the tools to effectively monitor decisions taken by our governments. That is why we have appealed the Transparency Council’s refusal in court.
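
To make that point concrete, consider the minimal sketch below – purely hypothetical, since BOSCO’s real criteria and source code remain secret – of how a few lines of code can encode an eligibility policy. Every threshold and condition in such code functions, in effect, as a binding rule; the names and figures here are invented for illustration only.

# Hypothetical illustration: BOSCO's actual rules are not public.
# Thresholds and conditions written into code act as de facto law.
def eligible_for_discount(annual_income: float, household_size: int,
                          is_large_family: bool, receives_minimum_pension: bool) -> bool:
    """Toy eligibility check for a hypothetical energy-bill discount."""
    income_ceiling = 11_000 + 3_000 * max(household_size - 1, 0)  # invented thresholds
    if is_large_family or receives_minimum_pension:
        return True
    return annual_income <= income_ceiling

# A single wrong threshold or misplaced condition would silently reject eligible
# applicants - which is exactly why the code needs to be open to public scrutiny.
print(eligible_for_discount(annual_income=10_500, household_size=2,
                            is_large_family=False, receives_minimum_pension=False))  # True

If a rule like this is wrong, no amount of reading the published regulations will reveal it; only the code itself shows what is actually being applied.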

This is our first foray into algorithmic transparency, and we are fully aware that it is going to take a long time. We believe BOSCO is a good case to start with because it is not as complex as machine learning ‘black box’ systems, whose biases can be very difficult to identify and unpick. This first battle will reveal the limits of the Spanish Access to Information Law and allow us to prepare for more complex cases.

In order to get ready for the future, it is very useful for us to know about cases others have fought in the past and in other contexts. While we were aware of ProPublica’s investigation into biased risk-assessment software used in US courts, we discovered many other instances at the Digital Freedom Fund and AI Now workshop in Berlin last year. This made it very clear to us that the problem of algorithmic accountability is vast in scope. It was particularly valuable to learn about cases in Europe, where the legal framework on privacy and data protection is closer to Spain’s.

Fighting Obscure Algorithms

Given the relevance of the issue for the well-being of civil society, our goal is to start developing the technical, organisational and legal skills needed to assess and control the increasing use of automated systems in public administrations.

In the words of Maciej Cegłowski: “machine learning is like money laundering for bias. It’s a clean, mathematical apparatus that gives the status quo the aura of logical inevitability. The numbers don’t lie.”

Algorithms, therefore, can be used to obscure significant policy changes adopted by administrations or the political class – for example, public policies that implement dramatic cuts in access to welfare services. So even as we continue to fight for transparency and our right to information, we should not ignore that many decisions concerning the public are now being taken through algorithms.

About the author: David Cabo is Co-Director of the Civio Foundation (Fundación Ciudadana Civio).

Looking Back, Looking Forward: 2020 and Beyond

By Nani Jansen Reventlow, 13th February 2020

As we prepare for our third annual strategy meeting, we look back at some highlights from the past year – and think about what’s in store for 2020.

GDPR Successes Trickle Through, and a Mixed Bag for Facial Recognition

2019 saw several promising outcomes for GDPR enforcement across Europe. One such win occurred in Spain. When implementing the GDPR, the country had included a legal provision that allowed the profiling of individuals for electoral purposes. The so-called “public interest” exception allowed political parties to collect online data about citizens, which could then be used to contact them privately, including through SMS or WhatsApp. In May, however, the Spanish Constitutional Court overturned the exception, in what activists have deemed an important step forward for data protection law in Europe.

Another hopeful development transpired in Sweden, where the GDPR provided a powerful framework for the national data protection board to oppose the use of facial recognition technology in schools. The technology was already being rolled out in one school in order to track children’s attendance. As this was found to violate several GDPR articles, the plan was scrapped and the municipality fined.

The issue of facial recognition reared its head again in both the US and the UK, with mixed results. In the US, bans on its use by public agencies were adopted in several cities, including San Francisco, Oakland and Somerville, with some people citing fears about the prospect of their cities turning into “police states”.

In the UK, the outcome was more chequered. Liberty lost a case challenging police use of facial recognition tools – despite the court acknowledging that they do, indeed, breach people’s privacy. The group, which is calling for a ban on the technology, is now appealing the decision.

Surging Algorithms and the Right to be Forgotten

Algorithms pose a rapidly growing challenge to human rights protection – something we saw evidence of in the filing of a case concerning university applications in France. The national students’ union challenged universities’ use of algorithms to sort prospective students’ applications, demanding publication of the “local algorithms” being used to determine applicants’ success, in a case that reflects rising concerns about how algorithms are used – and a growing demand for transparency around them.

Meanwhile, the Court of Justice of the European Union ruled in a landmark case on search engine de-indexing, popularly known as “the right to be forgotten”. In a number of judgements in recent years, the court has ruled on the right of individuals to have their personal data removed from search engines. In September, the CJEU addressed the territorial reach of that right, finding that while “EU law does not currently require de-referencing to be carried out on all versions of the search engine, it also does not prohibit such a practice”.

What’s Next? The “Digital Welfare State”, More Algorithms, and Battling AdTech

The Digital Welfare State has quickly become a hot topic in digital rights. We’ve already had one landmark ruling in 2020 in the Dutch SyRI case. SyRI is a system designed by the Dutch government to process large amounts of data about the public and subsequently identify those most likely to commit benefit fraud. A court in The Hague recently concluded that the system’s use violated people’s privacy, marking an important step towards protecting some of society’s most vulnerable groups.

We’re also expecting more activity in the area of algorithms. Activists in the UK have already criticised the immigration authorities’ use of algorithmic decision-making in visa applications, for example. At DFF, we’ll be building on last year’s meeting on this topic and organising an international meeting in the coming months.

Another critical issue we expect to see taking off this year is the fight against the AdTech industry. Many are already campaigning against the misuse of people’s personal data for online advertising. It’s a growing business that is, at the same time, facing a dramatic rise in opposition, including legal challenges.

Problems surrounding upload filters and other automated content moderation measures are also set to rise in prominence. The news that Ofcom will regulate online harms in the UK recently hit headlines, while many are concerned that surveillance for the purposes of national security is increasingly at odds with people’s privacy rights.

Many core issues will naturally spill over from previous years and continue to be fought in 2020 – whether it’s mass surveillance, net neutrality, or Big Tech dominance. It’s bound to be a busy year, but at DFF we look forward to keeping that momentum rolling.

We are excited to be having conversations about all these issues and more at our strategy meeting next week. For those who cannot join us in Berlin in person: keep an eye out for updates on our blog and Twitter!

Tackling the Human Rights Impacts of the “Digital Welfare State”

By Nani Jansen Reventlow, 10th February 2020

In her 2018 book “Automating Inequality”, Virginia Eubanks compellingly sets out the impact that big data and technology use has on one of society’s most vulnerable groups: those in need of government support. Throughout the book, Eubanks argues that government data and its abuses have imposed a new regime of surveillance, profiling, punishment, containment and exclusion on the poor – something she refers to as the “digital poorhouse.”

Eubanks is not alone in her observations. The rise of the “digital welfare state”, which the UN Special Rapporteur on extreme poverty and human rights Professor Philip Alston also mentions in a recent thematic report, has become an increasingly urgent topic of debate.

The Promise of Strategic Litigation

Against the backdrop of these new challenges, how can we ensure that human rights are protected? It’s no easy feat, but in recent years, we have already seen the powerful impact that strategic litigation can have in enforcing people’s digital rights.

Just last week, a court in the Netherlands struck down the use of an automated system called SyRI, which was being used to pre-emptively detect the likelihood of individuals committing benefits fraud. This “risk-based” detection system was being rolled out in predominantly economically disadvantaged and high-immigration areas. SyRI raised serious concerns about privacy – and, in what constituted a solid win for digital rights activists, was found by the court to violate international human rights standards.
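
To illustrate why such systems worry privacy and anti-discrimination advocates, here is a minimal, purely hypothetical sketch of the general “risk-based” pattern. SyRI’s actual indicators and weightings were never disclosed, so every field, weight and threshold below is invented:

# Hypothetical sketch only: SyRI's real indicators and weights were never disclosed.
# It shows the general pattern: records linked across government databases are
# reduced to a single score, and everyone above a threshold is flagged.
from dataclasses import dataclass
from typing import List, Set

@dataclass
class LinkedRecord:
    declared_income: float   # invented fields standing in for linked government data
    housing_subsidy: bool
    benefits_received: float
    neighbourhood: str       # deployment was limited to selected neighbourhoods

def risk_score(r: LinkedRecord) -> float:
    # Reduce a person's linked records to a single fraud-risk score (invented weights)
    score = 0.0
    if r.declared_income < 12_000:
        score += 0.4
    if r.housing_subsidy:
        score += 0.3
    if r.benefits_received > 8_000:
        score += 0.3
    return score

def flagged(records: List[LinkedRecord], target_areas: Set[str],
            threshold: float = 0.6) -> List[LinkedRecord]:
    # Only residents of the targeted neighbourhoods can ever be flagged,
    # regardless of how the score itself is computed.
    return [r for r in records
            if r.neighbourhood in target_areas and risk_score(r) >= threshold]

# Two identical records, different neighbourhoods: only one is flagged.
records = [LinkedRecord(10_000, True, 9_000, "district_a"),
           LinkedRecord(10_000, True, 9_000, "district_b")]
print(len(flagged(records, target_areas={"district_a"})))  # 1

Even in this toy version, the combination of opaque scoring and geographically targeted deployment shows how the burden of suspicion falls on residents of particular areas.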

Back in June 2018, a case in Poland concluded in similar fashion: the Constitutional Court declared that the profiling of unemployed persons – introduced under new public service reforms – was, in several respects, unconstitutional.

Early successes have also been seen in the United States, where the ACLU of Idaho obtained an injunction to gain access to the statistical models that were used to drastically cut the Medicaid assistance given to individuals with developmental disabilities. In March 2016, the statistical model was struck down and the State was ordered to bring its system in line with constitutional rights.

Deepening the Conversation

Following our February 2019 strategy meeting, DFF hosted UN Special Rapporteur Philip Alston for a one-day consultation in preparation for his latest thematic report. The consultation brought together 30 digital rights organisations from across Europe, who shared many examples of new technologies being deployed in the provision of public services. Common themes emerged, from the increased use of risk indication scoring to identify welfare fraud, to the mandating of welfare recipients to register for biometric identification cards, and the sharing of datasets between different public services and government departments.

The conversations brought into focus the need not only to better understand how government use of technology in the welfare context impacts human rights, but also to create a vision of what a positive use of technology in this realm should look like, and what role strategic litigation might play in bringing about constructive change.

Since the summer of 2019, DFF has been engaging with litigators, activists and technologists working on government use of technology in welfare services, to understand the main challenges and the work already being done. For example, we heard about ongoing work by organisations like the Child Poverty Action Group in the UK to document the impact digitisation is having on Universal Credit claimants. In Sweden, we heard about an automated decision-making system wrongly denying welfare payments to as many as 70,000 unemployed people, while in Denmark we heard about the data sharing taking place between government departments, as well as the exciting work being carried out by the newly founded Center for Digital Welfare at the IT University of Copenhagen.

Looking to the Future

At DFF’s upcoming annual strategy meeting, an initial consultation will be held on developing a litigation strategy that can help ensure social welfare policies and practices in the era of new technology respect and protect human rights. One of the crucial questions participants will consider is whether their individual strategies should focus on the use of certain technological solutions as such, or also on the policies underpinning or informing the tech. We will also map current efforts and try to define a “blue skies” vision for government tech use.

Further work to develop a litigation strategy will take place in May of this year, when DFF will convene a workshop to define the parameters of a digital welfare state litigation strategy. We welcome your views and input as we further develop a strategy to safeguard digital rights in this space: do get in touch with us.