How Artificial Intelligence Impacts Marginalised Groups
By Nani Jansen Reventlow, 29th May 2021
In the context of the COVID-19 pandemic it has often been remarked that while we are all facing the same storm, we are not all sitting in the same boat.
This means that the same events can affect different groups of people in very different ways. One area where this is becoming increasingly apparent is the use of AI by public bodies.
While we often assume the existence of a “standard human” when designing or evaluating new technologies, it is clear that this assumption is based on a partial and exclusive vision of society and its components. It therefore tends to overlook the impact of those technologies on marginalised groups.
Three areas where this is particularly obvious are punishment and policing, the provision of essential services and support, and movement and border control.
Punishment and policing
There has been increased use of AI and algorithmically driven tools to police and punish individuals. Well-known examples are the use of facial recognition and predictive policing technologies by law enforcement.
Predictive policing technologies use historical and real-time data to predict when and where a crime is most likely to occur, or who is most likely to engage in or become a victim of criminal activity.
Many European police forces have been developing and piloting predictive mapping and identification systems to help them pre-emptively intervene and deter crime.
Many of these technologies are based on risk modelling: individuals are identified and ranked according to the likelihood that they will engage in criminal activity. In the Netherlands, this has gone as far as a system that scores the likelihood that children under the age of 12 will become criminals.
Various studies have shown that the data that feeds into predictive policing programmes is historically biased and perpetuates existing overpolicing of marginalised groups.
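To see how this feedback loop works, consider a deliberately simplified toy model. All numbers below are invented: two districts have the same underlying offence rate, but one starts with a larger historical record because it was already overpoliced. Patrols are then allocated according to past recorded crime, and offences only enter the record where patrols are present, so the overpoliced district keeps “confirming” the prediction.

```python
# Toy model of the feedback loop described above. All numbers are invented.
true_offence_rate = {"district_a": 0.05, "district_b": 0.05}   # identical underlying rates
recorded_crime = {"district_a": 120.0, "district_b": 60.0}     # biased historical record

def allocate_patrols(recorded, total_patrols=100):
    """Send patrols in proportion to past recorded crime (the 'prediction')."""
    total = sum(recorded.values())
    return {d: total_patrols * c / total for d, c in recorded.items()}

def simulate_year(recorded, detections_per_patrol=2.0):
    """Offences only enter the record where patrols are present to detect them."""
    patrols = allocate_patrols(recorded)
    return {d: recorded[d] + patrols[d] * detections_per_patrol * true_offence_rate[d]
            for d in recorded}

for _ in range(5):
    recorded_crime = simulate_year(recorded_crime)

print(recorded_crime)
# The gap in recorded crime between the two districts widens every year even
# though their true offence rates are identical: the biased record keeps
# directing patrols to district_a, which keeps inflating its record.
```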
Another example of “pre-crime detection” is the Dutch System Risk Indication (SyRI) system, which determined whether or not individuals were likely to commit public benefits fraud. The use of this system, which was predominantly rolled out in neighbourhoods with low incomes and high immigration rates, was struck down by the courts as violating human rights.
However, this has not stopped the Dutch government from publishing a new proposal for an even more invasive system, informally referred to as “Super SyRI”, which would allow for extended data sharing not only between government agencies, but also between those agencies and private companies.
Essential services and support
Automated systems are also increasingly being used to make decisions on whether an individual is entitled to essential services and support. In the private sector, this happens in the form of workplace surveillance, credit checks, private security firms, and mobile apps.
For example, in Finland, the National Non-Discrimination and Equality Tribunal stopped a Swedish credit company from using a statistical method in credit scoring decisions that was not based on the individual’s specific information (like their income, financial situation and payment history), but scored applicants based on factors like their place of residence, gender, age, and native language.
The applicant who brought the case was denied a loan because he was a man, lived in a rural area of Finland and spoke Finnish instead of Swedish. Had he been a Swedish-speaking woman living in an urban area, he would have been eligible.
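How such a model operates can be illustrated with a minimal sketch. The factors, weights, and cut-off below are invented for illustration and are not the company’s actual scoring model; the point is structural: the score is assembled entirely from group averages, so the applicant’s own income and payment history never enter the decision.

```python
# Minimal sketch of group-based scoring: the weights and cut-off are invented.
GROUP_SCORES = {
    "residence": {"urban": 30, "rural": 10},
    "gender": {"woman": 25, "man": 10},
    "language": {"swedish": 25, "finnish": 10},
    "age_band": {"30-45": 20, "other": 10},
}
APPROVAL_CUTOFF = 70

def statistical_score(applicant: dict) -> int:
    """Score built purely from group membership, ignoring individual finances."""
    return sum(GROUP_SCORES[factor][applicant[factor]] for factor in GROUP_SCORES)

applicant = {
    "residence": "rural", "gender": "man", "language": "finnish", "age_band": "30-45",
    # Individual data the model never looks at:
    "income": 4200, "missed_payments": 0,
}

score = statistical_score(applicant)
print(score, "approved" if score >= APPROVAL_CUTOFF else "denied")
# -> 50 denied. Flipping residence, gender, and language to the favoured
#    categories would approve the same person with the same finances.
```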
Evidently, access to financial services, such as banking and lending, can be a decisive factor in an individual’s ability to pursue their economic and social well-being. Access to credit will also help marginalised communities exercise their economic, social, and cultural rights. Automation that polices, discriminates, and excludes can therefore threaten other rights like non-discrimination, association, assembly, and expression.
AI technology is also used in the context of social security provision, like benefits or shelter, and influences the extent to which individuals can rely on these services. Automated decisions can push those already in a precarious position even further into precarity by excluding them or shutting them out.
On the one hand, a system malfunction or inaccuracy can result in serious human rights violations, including violations of the right to life. On the other hand, those same violations may equally arise from the system working just as intended.
In 2013, the UK government rolled out its new Universal Credit system, designed to simplify access to benefits, cut administrative costs, and “improve financial work incentives” by combining six social security benefits into one monthly lump sum. In practice, however, it has resulted in considerable harm to those depending on state benefits.
A flawed algorithm failed to account for the way people with irregular, low-paid jobs were collecting their earnings via multiple paychecks a month. This meant that in some cases, individuals were assumed to receive earnings above the relevant monthly earnings threshold, which, in turn, drastically shrank the benefits they received.
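The arithmetic behind this can be illustrated with a simplified sketch. The award, threshold, and taper figures below are placeholder assumptions, not the real Universal Credit parameters; the sketch only shows how summing earnings inside fixed monthly assessment windows makes a worker paid every four weeks look twice as well-off in the occasional month when two paydays fall into the same window.

```python
# Illustrative sketch of the assessment-period problem. The figures are
# placeholder assumptions, not the real Universal Credit parameters.
from datetime import date, timedelta

STANDARD_AWARD = 800      # assumed monthly benefit before any deduction
EARNINGS_THRESHOLD = 500  # assumed earnings allowed before deductions start
TAPER = 0.63              # assumed deduction per unit earned above the threshold

def monthly_award(earnings_in_period: float) -> float:
    """Benefit for one assessment period, reduced as earnings exceed the threshold."""
    excess = max(0.0, earnings_in_period - EARNINGS_THRESHOLD)
    return max(0.0, STANDARD_AWARD - TAPER * excess)

# A worker paid 900 every 28 days: thirteen paydays over the year.
paydays = [date(2021, 1, 8) + timedelta(days=28 * i) for i in range(13)]

for month in range(1, 13):
    # Sum the paycheques whose payday falls inside this calendar month.
    earned = sum(900 for d in paydays if d.year == 2021 and d.month == month)
    print(f"month {month:2d}: earned {earned:5d} -> award {monthly_award(earned):6.2f}")
# In the one month where two paydays land together, apparent earnings double
# and the award for that month collapses, even though the worker's actual pay
# never changed.
```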
By requiring people to request their benefits online, the system also caused hardship amongst those who did not have reliable internet access, who lacked the digital skills to fill out long web forms about their personal and financial circumstances, or who could not meet cumbersome online identity verification requirements. Add to this the fact that applicants had to wait five weeks for their first payment, and it is easy to see how every day spent struggling with a complicated online application process adds to existing hardship.
Movement and border control
Finally, automated systems are increasingly being used in immigration contexts where they interfere with the rights of, among others, refugees, migrants and stateless persons.
Border control systems often integrate AI in the form of facial and gait recognition systems, retinal and fingerprint scans, ground sensors, aerial video surveillance drones, biometric databases, and automated asylum decision-making processes. As such, they have contributed to the rise of what some have referred to as “digital borders.”
The large-scale reliance on these technologies has reduced human bodies to evidence. A pattern of centralised collection and storage of this biometric and personal data across sectors and agencies is also discernible, including through public-private partnerships such as Palantir’s with the World Food Programme and Microsoft’s with the International Committee of the Red Cross.
A distinct lack of control over this data harvesting leads to a lack of accountability, which causes real hardship and often desperation among those under this kind of “border surveillance”. This is clear from a number of disturbing reports of refugees burning off their own fingerprints out of fear of being tracked and returned to their countries of origin or to entry-point countries in the EU.
Moving beyond “bias” and “fixing” data
In understanding the harms of new technologies, it is therefore key to take into account the broader context of existing inequalities and power structures into which they are being deployed. Machine learning technologies learn from patterns in existing data and can therefore further entrench and exacerbate already existing systemic human rights harms.
However, this is not the same as the usual framing we see in conversations about discrimination and AI, which generally focus on data quality, data accuracy, or “bias”. The harmful uses of AI just described cannot be improved through better data quality, as these uses in themselves exacerbate structural exclusion and inequality. They therefore need to be restricted, not facilitated with tweaks or improvements.
The harms that AI can cause are multi-faceted and intersectional. They cut across civil, political, economic, social and cultural rights. And when AI systems are deployed in the wrong context, they not only threaten the right of all individuals to enjoy equal protection of these rights; they can threaten the very rights themselves.
The European Commission recently presented a proposal to regulate the use of AI systems. For a first analysis that takes the above-mentioned issues into account, see EDRi’s “EU’s AI law needs major changes to prevent discrimination and mass surveillance”.