Digital Rights are Human Rights
The Digital Freedom Fund counted down to Human Rights Day 2020 with a series of short posts. Each post was written by a guest author and illustrates how the Universal Declaration of Human Rights applies in the digital age. The full series can be viewed here.
The right to a fair trial
UDHR Articles 6-10
You wake up to the police breaking down your door. They arrest you for something you haven’t done – yet.
A police computer system – an algorithm – created by a profit-driven company, sold to a cash-strapped and under-pressure law enforcement agency, and programmed using criminal justice data that reflects the daily racism and inequality found in policing and criminal justice, has analysed information about you and your background – and labelled you as being at “high risk” of committing a crime in the future.
After your arrest, another police algorithm analyses more data about you and decides that, if released, you would again be at “risk” of committing a crime, so you shouldn’t be granted bail.
You’re held in detention awaiting trial for months while the courts deal with the many other people arrested over minor matters because they, too, have been deemed “risky” by a police algorithm.
When you go to trial, you still don’t fully understand the reasons for your arrest or the evidence against you, hidden as it is within an algorithm-generated profile and the computer system on which it runs, with justice authorities promising that the system is “neutral”, “fair” and “unbiased” – it’s just a computer system, after all.
The case is conducted online, via a video-link. You don’t have enough time to speak to your lawyer because the connection keeps dropping, and you aren’t able to communicate properly with the judge and protest your innocence due to the restrictive online video format. You are not able to appeal or challenge your sentence, because it was based on an algorithm, which cannot be wrong – and anyway, the reasons behind the decision are hidden in the complexities of the system.
This may seem like a dystopian daydream, but these technologies and algorithmic tools are increasingly being used by police and criminal justice systems in Europe and the US. The use of new technologies in policing and criminal justice – in both the investigative process and court procedure – has serious implications for fairness, equality and justice.
Predictive and profiling systems completely undermine the presumption of innocence, labelling people and places as “criminal” based on historical data. In doing so, they also re-entrench the discrimination and inequality already inherent in policing and criminal justice, subjecting already oppressed and over-policed communities and groups to the same treatment again and again, as these predictions are used to justify arbitrary arrest and other punishments, such as eviction.
Online courts can certainly assist and support justice, but they can equally cause injustice, preventing marginalised defendants from being properly heard or assessed and ultimately denying them fair and public hearings.
We must ensure that any new technologies in the criminal justice system actively help to level the playing field and guarantee equality and fairness for all those involved, and do not merely preserve or exacerbate the structural and institutional racism and inequality that undermines justice worldwide.
No-one should be labelled as a criminal or profiled as a “risk” by an algorithm, and criminal justice should only be served by a completely independent, impartial court or tribunal, under a process which is transparent and accountable, and which can be challenged by any individual subject to it. Any new technologies that do not advance or protect these minimum standards, or that undermine them in any way, have no place in a justice system.
By Griff Ferris, Legal and Policy Officer at Fair Trials.