Algorithms. Assess the impact before it hits you

By Krzysztof Izdebski, 16th May 2019

When we talk about the possible impact of artificial intelligence on social life and human rights, we tend to focus on the threats posed by future innovations without doing enough to protect ourselves from the impact of more “traditional” forms of algorithm that are already in place. With the prospect of an increased use of so-called “black boxes” making inscrutable decisions about crucial aspects of our lives, we need to seize the opportunity to build policies on algorithms that will guarantee the rights of citizens and impose responsibilities on public officials before it is too late.

As a number of participants discussed during the Digital Freedom Fund strategy meeting in February 2019, states have generally failed to introduce consistent policies regulating the use of algorithms in the context of relationships between government bodies and citizens. Consequently, we have a “lawyers’ heaven” for conducting strategic litigation to set basic rules and push for legal gaps to be filled.

Having to rely on strategic litigation to protect human rights is, of course, not what we should expect from countries that supposedly uphold the rule of law, as these states have positive obligations that they must meet without us having to resort to the courts. These obligations go beyond the requirement to remedy violations of human rights when they occur. They also require governments to take proactive steps to protect and respect our rights.

Despite these requirements under international law, litigation remains a necessary tool for holding governments to account. Before turning to the courts, however, it is important to first consider the ideal policies or standards we expect governments to adopt in order to meet their obligations to protect and respect human rights in their use of algorithms. This piece looks at one potential policy, namely Algorithmic Impact Assessments (AIAs).

Prepare for the unknown

The problem with algorithms in government services does not necessarily lie in their mere existence or use as such. It is common for governments to explore technological developments and implement some form of automated decision-making, especially when it can improve quality of life or remove bias, discrimination and other harmful effects from human decision-making. In the vast majority of cases, however, neither citizens nor public officials can be sure how an algorithm may influence the position of an individual.

Without getting into issues concerning transparency and the explainability of algorithms, let’s dig into the idea of AIAs. It is standard practice in democratic states that, before a new law is adopted or an existing one amended, the relevant public institutions are obliged to compile a Regulatory Impact Assessment (RIA). This assessment precedes the introduction of new provisions and answers questions in several important fields, including:

- what the defined problem in the area of planned legislative intervention is;
- why the chosen means of intervention is the best solution to tackle the problem;
- how other countries are dealing with similar issues;
- what impact it is going to have on citizens, entrepreneurs and other specified groups;
- what impact it will have on public funds; and
- how the law will be evaluated to check whether it resolves the identified problems or introduces the intended change.

Confronting the idea before testing it on people

The RIA model allows for in-depth discussion of the government’s plans at a very early stage, and also motivates public officials to reflect seriously on proposed solutions. Furthermore, these assessments are subject to public consultation. Why should the same model not be applied to the implementation of algorithms created or acquired by governments? The problem could lie in the fact that we still treat government algorithms as IT products rather than as a government intervention in the rights and freedoms of citizens. As the authors of the AI Now Institute’s report Algorithmic Impact Assessments: A Practical Framework For Public Agency Accountability wrote in its very first sentence, public agencies urgently need a practical framework to assess automated decision-making systems and to ensure public accountability.

If AIAs were an obligatory part of implementing any technological intervention in government services, we would face fewer issues around the lack of algorithmic transparency, because all relevant parties would be engaged in the process from the very beginning. We would know what the government wants to achieve, how it will measure success, which groups are impacted, what risks can occur, and by which means those risks can be prevented or mitigated. An AIA also provides an opportunity to explain how the algorithm will work, what data will be used and what the predicted outcomes might be. If performed in a transparent and responsible way, it should also provide grounds for challenging or pushing back against the implementation of an algorithm when the risks outweigh the benefits. It is far better to prevent problems at this stage than simply to test the tool on human beings, which is currently the case in many jurisdictions.
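To make this more tangible, here is a minimal, purely hypothetical sketch in Python of the kind of information an AIA could record in a structured, publishable form before a system is deployed. The field names and the example are my own illustration, not a prescribed template.

```python
from dataclasses import dataclass
from typing import List

# A purely illustrative sketch, not an official AIA template: one way a public
# agency could record, in structured form, the information an Algorithmic
# Impact Assessment is meant to capture before a system is deployed.

@dataclass
class AlgorithmicImpactAssessment:
    system_name: str                  # the automated decision-making system being assessed
    purpose: str                      # what the government wants to achieve
    success_metrics: List[str]        # how success will be measured
    affected_groups: List[str]        # which groups of citizens are impacted
    data_used: List[str]              # what data the system will rely on
    identified_risks: List[str]       # risks to rights and freedoms
    mitigations: List[str]            # how those risks will be prevented or mitigated
    publicly_available: bool = True   # AIAs should be transparent and open to the public


# Hypothetical usage: an assessment published before deployment, which can later
# be compared against evidence of the system's actual impact.
example = AlgorithmicImpactAssessment(
    system_name="benefit-eligibility-scoring",
    purpose="speed up processing of social benefit applications",
    success_metrics=["average processing time", "error rate versus manual review"],
    affected_groups=["benefit applicants", "caseworkers"],
    data_used=["application forms", "historical case outcomes"],
    identified_risks=["indirect discrimination against certain applicant groups"],
    mitigations=["bias audit before deployment", "human review of negative decisions"],
)
print(example.purpose)
```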

AIAs & accountability

Preliminary outcomes from research in CEE countries show that coordinating authorities (including Prime Ministers) have no idea that algorithms have already been introduced by public institutions. This can be at least partly addressed by AIAs, as the process of carrying them out would mean that all relevant public entities have to publicise that they are adopting algorithms and for what purpose, as well as how the adoption of such systems will respect human rights.

Carrying out AIAs may also build an institutional culture that indirectly benefits the work of those conducting strategic litigation on algorithms, as it will be less likely that lawyers will have to convince governments and courts that the issue of automated decision-making goes beyond mere technology and actually engages rights. Lawyers can, instead, put more energy into the merits of their cases. Furthermore, when the lack of fairness, accuracy or transparency of automated decision-making is challenged before the courts, the existence of an AIA can be used to compare the initial assumptions behind the deployment of the algorithm with evidence of its actual impact. Litigation can also be deployed to prove that the methodology behind an AIA was flawed and should be reconsidered. This is yet another example of how litigation conducted in emerging areas can support systemic change for the better. It is worth underlining that, for this to happen, we should ensure that AIAs themselves are transparent and open to the public.

Apart from drawing inspiration from the AI Now Institute report, AIAs can be based on the existing methodologies of similar documents, such as Data Protection Impact Assessments or Regulatory Impact Assessments, so that their implementation would not be too demanding for decision-makers. Once introduced, they will prepare the ground for regulating more complex systems based on artificial intelligence, ensuring that citizens and governments are aware of these algorithms’ potential and actual harmful impact.

About the author: Krzysztof Izdebski is the Policy Director at ePaństwo Foundation, Poland.