Algorithms. Assess the impact before it hits you

By Krzysztof Izdebski, 16th May 2019

When we talk about the possible impact of artificial intelligence on social life and human rights, we tend to focus on the threats posed by future innovations without doing enough to protect ourselves from the impact of more “traditional” forms of algorithm that are already in place. With the prospect of an increased use of so-called “black boxes” making inscrutable decisions about crucial aspects of our lives, we need to seize the opportunity to build policies on algorithms that will guarantee the rights of citizens and impose responsibilities on public officials before it is too late.

As a number of participants discussed during the Digital Freedom Fund strategy meeting in February 2019, states have generally failed to introduce consistent policies regulating the use of algorithms in the context of relationships between government bodies and citizens. Consequently, we have a “lawyers’ heaven” for conducting strategic litigation to set basic rules and push for legal gaps to be filled.

Having to rely on strategic litigation to protect human rights is, of course, not what we should expect from countries that supposedly uphold the rule of law, as these states have positive obligations that they must meet without us having to resort to the courts. These obligations go beyond the requirement to remedy violations of human rights when they occur. They also require governments to take proactive steps to protect and respect our rights.

Despite these requirements under international law, litigation remains a necessary tool for holding governments to account. Before pursuing litigation over governments' obligation to protect and respect human rights in their use of algorithms, however, it is important to first consider the ideal policies or standards we expect them to adopt in order to meet these obligations. This piece looks at one potential policy, namely Algorithmic Impact Assessments (AIAs).

Prepare for the unknown

The problem with algorithms in government services does not necessarily lie in their mere existence or use as such. It is common for governments to explore technological developments and implement some form of automated decision-making, especially when it can improve quality of life or remove bias, discrimination or other harmful effects from human decision-making. In the vast majority of cases, however, neither citizens nor public officials are sure how an algorithm may influence the position of an individual.

Without getting into issues concerning transparency and the explicability of algorithms, let’s dig into the idea of AIAs. It is standard practice in democratic states that before adopting a new law, or even when amending one that is in force, relevant public institutions are obliged to compile a Regulatory Impact Assessment (RIA). This assessment precedes the introduction of new provisions and answers questions relevant to several important fields, including: what problem the planned legislative intervention is meant to address; why the chosen means of intervention is the best solution to tackle that problem; how other countries are dealing with similar issues; what impact it is going to have on citizens, entrepreneurs and other specified groups; what impact it will have on public funds; and how the law will be evaluated to check whether it resolves the identified problems or introduces the intended change.

Confronting the idea before testing it on people

The RIA model allows for in-depth discussion of the government’s plans at a very early stage, and also motivates public officials to seriously reflect on proposed solutions. Furthermore, RIAs are subject to public consultation. Why would the same model not be applied to the implementation of algorithms created or acquired by governments? The problem could lie in the fact that we still treat government algorithms as IT products, rather than as government interventions in the rights and freedoms of citizens. As the authors of the AI Now Institute’s report Algorithmic Impact Assessments: A Practical Framework for Public Agency Accountability wrote in its very first sentence, public agencies urgently need a practical framework to assess automated decision-making systems and to ensure public accountability.

If AIAs were an obligatory part of implementing any technological intervention in government services, we would face fewer issues around the lack of algorithmic transparency, because all relevant parties would be engaged in the process from the very beginning. We would know what the government wants to achieve, how it will measure success, which groups are impacted, what risks can occur, and by which means those risks can be prevented or mitigated. An AIA also gives the opportunity to explain how the algorithm will work, what data will be used and what the predicted outcomes might be. If performed in a transparent and responsible way, it should also provide the grounds for challenging or pushing back against the implementation of algorithms when the risks outweigh the benefits. It is much better to prevent problems at this stage than simply to test the tool on human beings, as is currently the case in many jurisdictions.
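
As a rough illustration, the sketch below arranges the questions listed above into a minimal, structured template. It is a hypothetical outline offered for clarity only; it is not a format prescribed by the AI Now Institute report or by any existing legislation.

```python
from dataclasses import dataclass
from typing import List

# A minimal, hypothetical template for an Algorithmic Impact Assessment,
# structured around the questions discussed above. Illustrative only.
@dataclass
class AlgorithmicImpactAssessment:
    system_name: str               # the automated decision-making system being assessed
    objective: str                 # what the government wants to achieve
    success_metrics: List[str]     # how success will be measured
    affected_groups: List[str]     # which groups are impacted
    data_sources: List[str]        # what data will be used
    how_it_works: str              # plain-language explanation of the algorithm
    predicted_outcomes: List[str]  # what the predicted outcomes might be
    risks: List[str]               # risks that can occur
    mitigations: List[str]         # how those risks can be prevented or mitigated
    published: bool = True         # AIAs should be transparent and open to the public

# Hypothetical example entry:
example_aia = AlgorithmicImpactAssessment(
    system_name="Benefit eligibility scoring",
    objective="Speed up processing of welfare applications",
    success_metrics=["average processing time", "appeal rate"],
    affected_groups=["welfare applicants"],
    data_sources=["application forms", "income records"],
    how_it_works="Rule-based score combining income and household data",
    predicted_outcomes=["automated approval of straightforward cases"],
    risks=["indirect discrimination", "opaque refusals"],
    mitigations=["human review of refusals", "public documentation of the rules"],
)
```

However the template is structured, the point is that each of these questions is answered, and answered publicly, before the system is deployed.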

AIAs & accountability

As we see from the preliminary outcomes of research in CEE countries, the coordinating authorities (including Prime Ministers) have no idea that algorithms have already been introduced by public institutions. This can be somewhat addressed by AIAs, as the process of carrying them out would mean that all relevant public entities have to publicise that they are adopting algorithms and for what purpose, as well as how the adoption of such systems will respect human rights.

Carrying out AIAs may also build an institutional culture that can indirectly benefit the work of those conducting strategic litigation on algorithms, as it will be less likely that lawyers will have to convince governments and courts that the issue of automated decision-making goes beyond mere technology and actually engages rights. Lawyers can, instead, put more energy into the merits of their cases. Furthermore, when the lack of fairness, accuracy or transparency of automated decision-making is challenged before the courts, the existence of an AIA can be used to compare the initial assumptions behind the deployment of the algorithm with the evidence of its actual impact. Litigation can also be deployed to prove that the methodology behind an AIA was flawed and should be reconsidered. This is yet another example of how litigation conducted in emerging areas can support systemic change for the better. It is worth underlining that, to allow this to happen, we should ensure that AIAs themselves are transparent and open to the public.

Apart from drawing inspiration from the AI Now Institute report, AIAs can be based on the existing methodologies of similar instruments, such as Data Protection Impact Assessments or Regulatory Impact Assessments, so that their implementation would not be too demanding for decision-makers. Once introduced, they will prepare the ground for regulating more complex systems based on artificial intelligence, ensuring that citizens and governments are aware of these algorithms’ potential and actual harmful impact.

About the author: Krzysztof Izdebski is the Policy Director at ePaństwo Foundation, Poland.

Strategising on GDPR complaints to take on the AdTech industry

By Martha Dark, 30th April 2019

The Digital Freedom Fund recently hosted a two-day strategy meeting in Berlin. Attended by 48 lawyers, activists and change makers from across Europe, it was a fantastic opportunity to collaborate, strategise and plan around ongoing and planned litigation. DFF played an important role in bringing this group of people together and in designing an agenda focused on assisting collaborative efforts to bring effective digital rights litigation across Europe.

The meeting was participant-led and outcome-driven. Attendees had the opportunity to ‘pitch’ their work or ideas around digital rights litigation to the other participants and NGOs present. I had the pleasure of attending and didn’t wait long to make the most of this meeting structure.

My colleague at Open Rights Group, Jim Killock, is one of the complainants in ongoing complaints before multiple European Data Protection Authorities. These complaints have been made against Google and others in the AdTech sector, and concern the huge, ongoing data breaches that affect virtually anyone who uses the Internet.

What is the issue? When you visit a website and are shown a “behavioural” ad, your personal data (this can include what you are reading or watching, your location, your IP address) is broadcast to multiple companies. Websites do this to solicit potential advertisers’ bids for the attention of the specific individual visiting the site, so they share this data widely. All of this happens in the moment it takes for your webpage to load, and it is known as a ‘bid request’.
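
To make this more concrete, the sketch below shows roughly what such a payload can look like. It is a simplified, hypothetical example loosely modelled on the OpenRTB format commonly used in real-time bidding; the field names and values are illustrative and not drawn from any actual complaint or broadcast.

```python
# A simplified, hypothetical "bid request" payload, loosely modelled on the
# OpenRTB format used in real-time bidding. All values are illustrative.
bid_request = {
    "id": "example-auction-id",
    "site": {
        "page": "https://news.example.com/health/depression-treatments",  # what you are reading
        "cat": ["IAB7"],  # content category code (here, an illustrative health-related category)
    },
    "device": {
        "ip": "203.0.113.42",                                   # your IP address
        "ua": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) ...",  # your browser / user agent
        "geo": {"lat": 53.34, "lon": -6.27, "country": "IRL"},  # your location
    },
    "user": {
        "id": "a1b2c3d4-example",  # persistent identifier used to profile you over time
    },
}

# In the moment it takes a page to load, a payload like this can be broadcast to
# dozens or hundreds of AdTech companies soliciting bids for your attention.
```

Whether or not the exact fields match any given ad exchange, the point is the breadth of the personal data being shared and the number of parties receiving it.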

The problem is that this system constitutes a data breach. The broadcast, or bid request, does not protect an Internet user’s personal data from unauthorised access. Article 5(1) of the General Data Protection Regulation (GDPR) requires that personal data be “processed in a manner that ensures appropriate security of the personal data, including protection against unauthorised or unlawful processing and against accidental loss.” Some of these companies are failing to protect the data in line with this legal obligation.

So far, Jim and his co-complainants have brought complaints about the practice described above before the UK Information Commissioner’s Office and the Irish Data Protection Commission.

Given that this issue is far wider-reaching than the UK and Ireland, and impacts Internet users on a global scale, the team are looking to partner with organisations that might bring co-ordinated complaints before Data Protection Authorities in certain jurisdictions. The strategy meeting, hosted by DFF, provided the opportunity to discuss the possibility of multiple complaints with lawyers and experts from across Europe.

The Poland-based Panoptykon Foundation has already brought a similar complaint to the Polish Data Protection Authority. Katarzyna Szymielewicz, President of the Panoptykon Foundation, also attended the strategy meeting. We were grateful for the opportunity to hold sessions and side meetings there, with the aim of engaging other potential partners from across Europe who may be interested in joining our efforts to raise this issue effectively before a number of Data Protection Authorities.

At the strategy meeting, Katarzyna held a series of fifteen-minute lightning talks on how the real-time bidding process works, how it fails to comply with the GDPR, and the most concerning data protection issues arising from it. Since the strategy meeting, we have entered into conversations with multiple NGOs across Europe and are exploring working together to bring further complaints across the continent.

The DFF meeting was a fantastic opportunity for us to move our work forward, get the input of our colleagues, and build stronger litigation with the community.

About the author: Martha Dark is Chief Operating Officer of the UK-based NGO Open Rights Group.

Bartering your privacy for essential services: Ireland’s Public Services Card

By Elizabeth Farries, 16th April 2019

In February, DFF invited the Irish Council for Civil Liberties (ICCL) to join 30 European digital rights organisations for a consultation with the UN Special Rapporteur on extreme poverty and human rights, Professor Philip Alston. The Special Rapporteur is exploring the rise of the digital welfare state and its implications for poor and economically marginalised individuals for his upcoming thematic report.

Of the themes that emerged during the consultation, the trend of states mandating that welfare recipients register for biometric identification cards is particularly relevant to the ICCL’s current campaign. In Ireland, the government has been rolling out a Public Services Card (PSC), which includes biometric features and an identity registration system that is shared amongst government databases. The PSC requires individuals to register in order to access social welfare benefits to which they are legally entitled. These individuals, by virtue of the services they are accessing, tend to be poor and economically marginalised, and their situation has been made worse by living in a country that has experienced austerity in recent years. Given that the PSC is compulsory, it is the ICCL’s position that requiring these individuals to trade their personal data and associated civil and human rights (e.g. their right to privacy) in order to access social and economic benefits is unlawful.

The growth of what are broadly referred to as identity systems or national digital identity programmes is a common problem that is not restricted to Europe. The ICCL, as a member of the International Network of Civil Liberties Organizations (INCLO), stands in solidarity with our colleagues at the Kenya Human Rights Commission (KHRC) who have been fighting back against the rollout of the National Integrated Identity Management System (NIIMS). This is a system which, similar to the Irish PSC, has sought to collect and collate personal and biometric data as a condition for receiving government services. The KHRC are challenging the legality of NIIMS in Kenya’s High Court. Last week, the KHRC achieved a legal victory in the form of an interim order which effectively barred the government from making NIIMS mandatory for access to essential services until its legality has been fully evaluated by the court.

While litigation has not yet commenced in Ireland, Ireland’s data protection authority, the Data Protection Commission (DPC), has identified the PSC as a pressing matter. The office opened a formal audit in October 2017 and has been examining the problem ever since. The ICCL has grave concerns that the DPC has not issued a decision, particularly as the government continues to roll out the PSC and mandate it for essential services. We are further concerned that, while the DPC has issued an interim report, the public has not been given access to it. The DPC on Wednesday disclosed to the Justice Committee in parliament that it, in fact, does not intend to release its report. It says that while the current data protection legislative regime authorises it to do so, the previous regime is silent on the matter. The ICCL does not accept this explanation and asserts that such a lengthy and resource-intensive investigation by the publicly funded DPC warrants, at the very least, the complete disclosure of its findings in the public interest.

Following the DFF consultation, the ICCL and Digital Rights Ireland invited the Special Rapporteur to visit Ireland and consider the problems presented by the PSC. We are pleased that Prof. Alston has accepted our invitation in his academic capacity and will be arriving on 29 June 2019. Please watch this space as we are planning a public event, together with meetings with national human rights organisations, community interest groups and, pending their acceptance, the DPC and responsible government bodies as well.

The Irish Government has promoted the PSC, saying that it facilitates ease of access to services and reduces the threat of identity fraud. Rights advocates, including the ICCL, argue that the data protection risks attached to such a scheme are neither necessary for, nor proportionate to, these claimed benefits. DFF, through its February consultation, provided an excellent platform for the Special Rapporteur to hear concerns about the PSC. We are now looking forward to continuing this conversation with Prof. Alston directly in Ireland during his upcoming visit.

About the author: Elizabeth Farries is a lawyer and the Surveillance and Human Rights Program Manager for the ICCL and INCLO. She is also an adjunct professor in the School of Law at Trinity College Dublin.