Disability and AI: Where Professional Knowledge and Lived Experience Intersect
By Maranke Wieringa, 29th November 2021
On 21 November, The Guardian highlighted a conflict over the UK Department for Work and Pensions’ (DWP) algorithm that targeted disabled people for benefit fraud investigations.
The UK government purposefully withholds detailed information about the system so as not to disclose its modus operandi. The declared aim of this opacity is to prevent citizens from “gaming” the system.
The position taken by the UK government is reminiscent of that of its Dutch counterpart in the SyRI case. As a disabled scholar working on algorithmic accountability, I find that systems like the DWP’s sit at the intersection of my personal nightmares and my professional discomfort.
Automation in the public sector can have many merits. However, as Virginia Eubanks puts it, we need to ensure we are not automating inequality. While systems like fraud risk detection algorithms promise efficiency and cost-effectiveness, they can also come at a societal cost. With government services turning ever more digital, we see bureaucracies shift from the street level to the screen level to the system level. Decisions that used to fall within the discretion of, for instance, a social worker are now delegated to IT systems. This has real-life impacts, especially for marginalised and vulnerable groups.
To give you some more context about my own lived experience: I am a Dutch, disabled scholar working on algorithmic accountability in local governments. In the Netherlands, where I live, people who became disabled at a young age are entitled to “Wajong” disability benefits once they turn 18. These benefits are administered by the Employee Insurance Agency (UWV).
Reading about the DWP makes me anxious that its Dutch counterpart, the UWV, will implement something similar. This fear has two simple sources.
First, I know, professionally, about the logics and assumptions that often underlie fraud risk detection algorithms, as well as how they tend to work in practice. Second, I know what kind of data I feed to the UWV. Taken together, the two make a perfect recipe for disaster.
While we do not know precisely how the DWP algorithmic system works, we do know that systems like these look for discrepancies in the data. That is, they look for things that are odd, or that conflict with one another. Drawing on my lived experience of filling in UWV forms for over 11 years now, I know for a fact that they would find a lot of those discrepancies in my files. In fact, I would expect to be flagged as “high risk” by such a system myself. Things that would likely heighten my “risk score” include, for instance, fluctuating income from work, filling out forms the wrong way, and forgetting to update the government about certain things within the allotted time.
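We do not know what the DWP’s model actually looks like, so any concrete illustration is necessarily speculative. Purely as a hypothetical sketch of the discrepancy-hunting logic described above, a toy rule-based scorer might look something like the snippet below; every field name, rule, and threshold in it is invented for illustration and reflects neither the DWP nor the UWV systems.

```python
# A purely hypothetical sketch of discrepancy-based risk scoring.
# Nothing here reflects the actual (undisclosed) DWP or UWV systems:
# all field names, rules, and thresholds are invented for illustration.
from dataclasses import dataclass

@dataclass
class ClaimantRecord:
    monthly_incomes: list[float]  # income reported per month
    form_errors: int              # forms returned for correction
    late_updates: int             # changes reported after the deadline

def risk_score(record: ClaimantRecord) -> float:
    """Add up context-free 'discrepancy' signals into a single number."""
    score = 0.0
    # Fluctuating income raises the score, even though for many
    # disabled workers an irregular income is simply normal life.
    if record.monthly_incomes:
        spread = max(record.monthly_incomes) - min(record.monthly_incomes)
        if spread > 500:
            score += 1.0
    # Every corrected form and every late update counts against you,
    # regardless of the reason behind it.
    score += 0.5 * record.form_errors
    score += 0.5 * record.late_updates
    return score

# A messy-but-honest file easily crosses an arbitrary "high risk" bar.
me = ClaimantRecord(monthly_incomes=[0.0, 850.0, 120.0, 1400.0],
                    form_errors=3, late_updates=2)
print(risk_score(me))  # 3.5 -- flagged, with no field for context
```

The point of the sketch is structural, not numerical: every signal is stripped of its context, so an honest-but-messy file is indistinguishable from a fraudulent one.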
At its core, the problem boils down to a very simple adage: garbage in, garbage out. It comes down to the information required by the various forms, and to the requirement that disabled people submit new information to the government within one week of learning about it. This covers changes in anything from doing volunteer work to stays abroad or in hospital, from our own health to our partner’s income. Did you just hear you have a terminal illness? Please submit that new information within one week.
Even as a highly educated person, I have always struggled with these forms. Highly fluctuating income from work, for example, or income from two employers is not easily accommodated by the forms, and the organisational delays that currently come from every direction make operating within a one-week deadline extremely difficult. It creates a kind of terror, a perpetual fear of messing up.
This is astutely captured by the disabled people fighting the DWP algorithmic system when, in the Guardian article above, they describe their “fear of the brown envelope”. It is a relatable fear. To this day, my heart starts racing when I see a UWV envelope in my mail. The anxiety became so bad, and the administrative burden so large, that I eventually tried my hardest to stay out of the system as much as possible. I begged the UWV to refrain from sending me benefits in advance and instead just calculate what I was owed in retrospect. It was one less way in which I could mess up.
Even after years of not receiving any benefits, because I am now employed, the terror persists whenever I see these envelopes. I am terrified that this time I have, somehow, finally managed to break their system and will be punished for it. I do not think this fear will ever fully go away.
In pre-digital times, this fear was present but less acute. When things did not quite fit the form, I could scribble notes in the margins. I could send attachments or an explanatory letter along with the form, and it was then up to the UWV to fit their system around my needs. Which they usually did. That arrangement allowed me to demonstrate, at least, that I was trying to do the right thing, that I had no malicious intent.
But with automation came loss. The digital version of the form makes no allowances. It does not permit attachments, it does not allow for paratextual comments, and the commentary space you are given is very limited. Thus, we have little to no way of signalling to the government why we probably filled out something wrong, or why we submitted the form late. We have little to no way of contextualising our data points. Computer says “no”.
A system looking for discrepancies, like these fraud risk detection algorithms, will undoubtedly find many of them in my data. My data is about as messy as my life as a working, disabled scholar who used to be precariously employed. Instead of being able to signal odd things to the UWV in the margins of my forms and trust in their professional discretion, I now have to make an educated guess at how to compress my messy life into data points that fit their rigid, digitised reality.
It also means that governments increasingly outsource a task that once belonged to the professional knowledge and discretion of their employees to what are essentially uninitiated outsiders, who try their best to feed the organisational black box in what they hope and pray is the correct way. And while this may increase cost efficiency for the government, we increasingly have to ask ourselves – as we must with regard to all AI systems – whether this approach actually serves the needs of the people it is designed to assist.
If we allow governments to rely on an opaque algorithmic system when making decisions about how to respond to individuals’ needs, we have all the elements of a perfect storm: rigid, ill-fitting data entry by vulnerable non-professionals; information devoid of context; and secret algorithmic systems that patrol and punish the inevitable discrepancies in the data. It is the repressive welfare state in optima forma. While governments save costs and increase their control, citizens are treated as guilty until proven innocent.
And so, the envelope becomes the harbinger of bad news, as it is often the only – and one-way – feedback channel we have. Many of us open the envelope with about as much dread and anxiety as Ron Weasley opening his Howler. Except he, at least, knew precisely what he had done wrong this time.
Maranke Wieringa is a PhD candidate at Utrecht University. They are a disabled media and cultural scholar investigating algorithmic accountability in Dutch municipalities.