When algorithms decide your rights

by Iris Lapinski

November 2039.
Brent, London, England

Sarah woke up. Her head was aching. One drink too many the night before. She looked at the alarm clock. It was 9:00am. She listened for noises in the house, but it was silent. She was relieved. Her two children, Selena and Brandon, had already woken up, made themselves breakfast and gone to school.

Then she remembered her appointment and rushed out of bed: 9:30am at her local Citizens Advice Bureau. To get from Chalkhill Estate to the High Road she would need to run, and have some luck, to catch the bus on time.

On the bus to her appointment, she felt like this journey had been in the making for a long time. Four years ago, her sister Chantell, who had cerebral palsy and relied heavily on support, had her home care visits dramatically cut from 56 to 32 hours a week. A new algorithm had reassessed the amount of care her sister would be given. Her sister had pleaded with the assessor, explaining that this simply wasn’t enough support, but neither the assessor nor her sister seemed to quite understand how the computer had reached the decision to reduce the amount of care. Her sister’s health hadn’t improved, but an invisible change had occurred that created this new result. When the assessor entered the information about her health status, daily routines and needs for support into the computer, it ran through an algorithm that Brent council had recently approved, determining how many hours of help she would receive.
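The “invisible change” in Chantell’s assessment is easy to picture in code. Here is a minimal sketch in Python – every name and number is invented for illustration, not taken from Brent council’s actual system – of how identical answers, scored against a quietly re-tuned weights table, can drop from 56 to 32 hours:

```python
# A deliberately simplified, invented needs-assessment model: answers are
# scored against a weights table, and the total is mapped to weekly hours.
ASSESSMENT_2035 = {"mobility": 8.0, "personal_care": 7.0, "meal_prep": 5.5}
ASSESSMENT_2039 = {"mobility": 5.0, "personal_care": 4.0, "meal_prep": 2.5}  # re-tuned

def weekly_care_hours(answers: dict[str, int], weights: dict[str, float]) -> int:
    """Map assessment answers (0-3 severity per question) to weekly care hours."""
    return round(sum(weights[q] * severity for q, severity in answers.items()))

chantell = {"mobility": 3, "personal_care": 3, "meal_prep": 2}
print(weekly_care_hours(chantell, ASSESSMENT_2035))  # 56 under the old weights
print(weekly_care_hours(chantell, ASSESSMENT_2039))  # 32 under the new ones
```

Nothing about Chantell changed between the two runs; only the weights did – which is exactly why neither she nor the assessor could see where the cut came from.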

And then there was her younger brother Jordan, who had been arrested and charged with burglary and petty theft for grabbing an unlocked bike and a scooter with his mate. When Jordan was booked into prison, a computer program spat out a score predicting the likelihood of him committing a future crime. Yes, Jordan had had issues before, and a criminal record for misdemeanours committed as a juvenile. But how could he be classified as high risk of re-offending? He had told her that so many seasoned criminals with multiple convictions for armed robbery had been classified as low risk. But then those guys were white and Jordan was black…
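The paradox Jordan describes can arise even when race appears nowhere in the model. A toy sketch in Python – the weights are invented, and this is not COMPAS or any real tool – shows how a score built from proxy features rather than conviction severity can rank him above a repeat armed robber:

```python
import math

# Invented logistic risk score. Race is not an input, but features like
# postcode and juvenile record can act as proxies for it, while the
# severity of past convictions is not captured at all.
WEIGHTS = {"prior_juvenile_record": 1.2, "age_under_25": 0.9,
           "unemployed": 0.6, "high_crime_postcode": 0.8}
BIAS = -2.0

def risk_score(features: dict[str, int]) -> float:
    """Return a score in [0, 1]; 0.5 and above is labelled 'high risk'."""
    z = BIAS + sum(WEIGHTS[f] * v for f, v in features.items())
    return 1 / (1 + math.exp(-z))

jordan = {"prior_juvenile_record": 1, "age_under_25": 1,
          "unemployed": 1, "high_crime_postcode": 1}
armed_robber = {"prior_juvenile_record": 0, "age_under_25": 0,
                "unemployed": 0, "high_crime_postcode": 0}

print(risk_score(jordan))        # ~0.82 -> "high risk"
print(risk_score(armed_robber))  # ~0.12 -> "low risk", despite worse convictions
```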

So now it was her turn. Yes, she was not the perfect mum – she was the first to admit that herself. She was struggling, not just because of her learning disability, which made it hard to stay in a job, but also because she tried to help her sister after the home care visits were cut.

Unfortunately, her energy to support Selena and Brandon was often nil, and they regularly missed school. So a few weeks ago a woman from the council had come by her house and told her that her family had been classified as high-risk and was being placed in a special programme for families at risk of child sexual abuse and gang exploitation. She had been horrified to hear this and needed help. Her neighbour Sue had told her that Citizens Advice had launched a new service: AAS – the Algorithm Advice Service.

Fred, the young Citizens Advisor, was a student training as a data scientist. He would help her analyse which data points had triggered her high-risk classification and what rights she had to contest some of the data used by the council and the conclusions drawn from it.
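For a simple linear model, that analysis is tractable: each data point’s contribution to the score is just its weight times its value, so an advisor can list exactly what pushed a family over the threshold. A hedged Python sketch – the features, weights and threshold are all invented; no such council model is public – of what Fred’s work might look like:

```python
# Invented linear risk model, for illustration only.
WEIGHTS = {"school_absences": 0.05, "benefit_sanctions": 0.4,
           "parent_disability_flag": 0.3, "sibling_police_contact": 0.7}
THRESHOLD = 2.0  # a total contribution at or above this is flagged "high risk"

def explain(record: dict[str, float]) -> list[tuple[str, float]]:
    """Return each feature's contribution to the score, largest first."""
    return sorted(((f, WEIGHTS[f] * v) for f, v in record.items()),
                  key=lambda fc: fc[1], reverse=True)

sarah = {"school_absences": 30, "benefit_sanctions": 1,
         "parent_disability_flag": 1, "sibling_police_contact": 1}

for feature, contribution in explain(sarah):
    print(f"{feature}: {contribution:+.2f}")
# school_absences dominates (+1.50), so contesting the absence data, or
# recording its cause (caring duties), is where an appeal would start.
```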

As far as I know, the future told in the story above has not yet happened to any one individual. However, if you look at how public authorities in the US already use algorithms to judge re-offending risk or to re-assess disability benefits, you can see that algorithms there already have a direct impact on the realisation of human rights.

In the UK, most public sector programmes like the one run by Brent council and IBM to identify children and families at risk are still at the pilot stage today, but their potential impact on human rights is equally strong.

Artificial intelligence (AI) and machine learning (ML) existed for years as a niche within computer science without attracting much public attention. In recent years, however, exponential growth in practical use cases in government sectors like health, education and criminal justice has triggered a lot of public debate on their risks and unintended consequences.

When you look at historical patterns of how societies have managed the change and challenges created by new technologies, I would argue there are three overlapping phases:

  1. The ethics and convention phase;
  2. The standards and regulation phase; and
  3. The campaigns and appeal phase.

1. The ethics and convention phase

Since 2016, a lot of activity has taken place in this phase for AI and ML. In spring 2016, the Obama White House’s Office of Science and Technology Policy kicked off its ‘Preparing for the Future of Artificial Intelligence’ initiative, which held four public workshops, including the first AI Now Symposium, “The Social and Economic Implications of Artificial Intelligence Technologies in the Near-Term”, which has been followed by annual events and reports since.

In September 2016, the Partnership on AI launched with Amazon, Facebook, Google, DeepMind, Microsoft and IBM as its founding members, with Apple joining in early 2017. Today, the partnership has grown beyond industry actors to include NGOs like Amnesty International and media organisations like the New York Times, and has widened its geographic reach to China, with Baidu becoming a member in October 2018.

In June 2017, the UK House of Lords established a Select Committee on Artificial Intelligence, which published its recommendations in spring 2018. Much of this activity during 2016 and 2017 raised the ethical implications and unintended consequences of different uses of algorithms, especially those used by the public sector, and attempted to agree on shared overall ethical principles. For example, the Lords’ report identified these five principles:

  • Artificial intelligence should be developed for the common good and benefit of humanity;
  • Artificial intelligence should operate on principles of intelligibility and fairness;
  • Artificial intelligence should not be used to diminish the data rights or privacy of individuals, families or communities;
  • All citizens have the right to be educated to enable them to flourish mentally, emotionally and economically alongside artificial intelligence;
  • The autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence.

Also as part of this overall global debate, John C. Havens of the Institute of Electrical and Electronics Engineers (IEEE) argued in this article that values and ethical principles should lead design and development decisions in AI.

2. The standards and regulation phase

In my view, the focus started to shift in 2018 towards translating these ethical principles into specific institutions, codes or toolkits. One of the Lords’ key recommendations was the creation of an AI Code that should build, in much more detail, on the ethical principles set out. The Partnership on AI’s working groups kicked off in summer 2018, aiming to create practical toolkits. In October 2018, AI Now published its Algorithmic Accountability Policy Toolkit, providing resources and explanations for both legal and policy advocates. I believe that the next few years will be dominated by activity in this phase – voluntary efforts led by tech industry players as well as work by law-makers and government agencies to create standards, procedures and regulations around acceptable uses of AI. I think this will consist of initiatives building on top of existing regulations like the EU General Data Protection Regulation (GDPR), as well as initiatives looking to create new mechanisms of (self-)control.

3. The campaigns and appeal phase

As every human rights activist knows, it is the third phase of change that makes human rights real and applicable to the individual. This is when, in the future, organisations like Citizens Advice in the UK will offer support and legal assistance to appeal (no, the AAS does not exist yet, but I believe it would be a logical development to meet the needs of citizens in the future) and organisations like the Digital Freedom Fund can support strategic litigation to help create a body of case law in this newly emerging field. Especially for algorithms used by government and the public sector, it will be crucial for the realisation of human rights that they can be challenged through due process, that they are open and accountable – rather than private and proprietary – and that the government allows citizens to access impartial advice when they face problems with algorithms.

I don’t believe that a lot of activity in this third phase will take place in the next few years, but someone, somewhere will need to start it. Especially in the US, where most algorithms have been deployed by state actors so far, it would be interesting to explore litigation against opaque use cases that directly impact human rights. With the Human Rights Act currently under threat in the UK, it might be harder to start litigation there, despite its success in the past. It will be interesting to see which country will host the human rights battles against algorithms of the future.

Iris Lapinski

Iris Lapinski is founder and CEO of Apps for Good, a technology education non-profit aiming to grow the next generation of problem solvers. Since 2010, Apps for Good has reached over 140,000 young people, primarily in the UK, Portugal and the US, with courses on apps, the Internet of Things and machine learning.