The fight for our digital rights to health must be community-led

By Dr. Mwenza Blell, 20th February 2023

Algorithmic tools that use health data promise to prioritise care more efficiently and to deliver healthcare that is more high-tech, less prone to error, and more genetically personalised. Good news stories about new forms of high-tech medicine are often powerfully encouraging to the public, making people happy to believe in tech futures in which the diseases and conditions they fear can be easily cured or managed. This positive image hides problems.

The UK, like the European Union, has invested in various ways to increase capacity for health-data-driven innovation. These investments impact the rights of marginalised people, including the right to healthcare.

The UK’s National Health Service (NHS) has invested significant funds with the aim of paving the way for greater use of health data-driven technologies, on the argument that these have the power to dramatically improve services at lower cost.

In England, the National Health Service (NHS England) holds personal data about more than 55 million people in electronic health records of one form or another, although there is no single integrated system at present.

The volume of data increases daily because the service deals with over 1 million patients every 36 hours, and increasingly this includes genomic data, which represent all the genetic information from an individual person or organism. The multinational business management consultancy Ernst & Young has reported that NHS data represent a “goldmine” valued at GB£9.6 billion annually (over US$12 billion).

In 2014 NHS England introduced a controversial scheme, known as “care.data,” for extracting primary care health records in order to centralise the data for commercial and research uses. These plans to share and sell NHS data were met with a strongly negative public response and an ensuing media scandal. Nonetheless, health records data have continued to be sold or given away, although NHS England remains discreet about this. A second attempt to extract primary care data in 2021, nicknamed the GP Data Grab, was again met with outrage and campaigning by civil society, which caused the plans to be put on hold, seemingly indefinitely. Contemporaneously, NHS data-sharing initiatives with controversial tech companies such as Palantir during the coronavirus pandemic have received negative public attention, leading to legal challenges mounted in the courts on behalf of the public, both because of a lack of transparency and because of serious concerns about ethics.

The sharing of health data across state departments and with private companies puts marginalised communities in danger

While advocates of health data technology argue that data are fully de-identified before use, there are serious difficulties with de-identifying personal data, and there is a growing international list of harms related to uses of even de-identified personal data.

In particular, there is growing evidence of a patterning to data-related harms, such that certain groups (e.g. people living in poverty, transgender people, and ethnic minorities) face particular threats from datafication and data-sharing within administrative, health, welfare, and/or social care systems, including, but not limited to, the intensification of discrimination by various means. When it comes to health data, these risks are extreme, given that such data include particularly sensitive information, for example about mental illness, as well as sexual and reproductive history, including contraceptive use, history of abortion and/or miscarriage, and HIV status.

Some NHS data-sharing initiatives have put more vulnerable people at particular risk. Inaccurate and ongoing NHS data sharing with the Home Office (the UK’s immigration authority) under the so-called Hostile Environment policy, for example, has caused suffering and loss of life among ethnic minorities, who have feared that seeking healthcare when they need it or accessing vaccines might cause them to be deported. In one instance, NHS data sharing with the Home Office led to a psychiatrist’s notes about a suicidal patient being used as a basis to reject the young woman’s and her family’s asylum claim. This practice sets a dangerous precedent for the use of confidential patient data against vulnerable people.

In 2019, the UK government announced its intention to share patient data with the Department for Work and Pensions (DWP), which allocates sickness, disability, and unemployment benefits. In a separate announcement, the then Secretary of State for Health and Social Care, Matt Hancock, revealed a new partnership with Amazon to allow Alexa smart speakers to give out NHS advice, alongside plans to use smart-speaker data to identify people for mental health intervention, despite clear evidence that Amazon does not keep smart-speaker data confidential. Moreover, a joint announcement by MI5 (the UK’s Security Service) and the Metropolitan Police informed the public in 2019 of their intention to establish new data-sharing relationships with the NHS as well.

Government rhetoric about sharing data across departments (Home Office, DWP, MI5, Metropolitan Police) and with private companies abundantly demonstrates that, as Kerr et al. (2018) point out, “[s]urveillance can be positioned as a form of care…but these practices might not be experienced as caring.”

Additionally, recent policy and legislative changes seem to encroach further on data-related rights. For example, in 2020 the Department of Health and Social Care issued guidance stating that police would be granted access to COVID-19 test results so that they could enforce self-isolation by those who tested positive. This move was criticised for its potential to stop people testing for COVID-19 altogether, for fear of contact with police. In addition, the Police, Crime, Sentencing and Courts Act 2022 infringes the data rights of children and victims of crime by allowing so-called digital strip searches, in which police access personal data from the digital devices of children and of those who report crimes, without their consent.

Centring lived experiences and perspectives of targeted and disenfranchised communities is the way forward

While powerful actors in the tech industry and in government persist in their work of innovation, putting our rights at risk, we need to centre attention on the lived experiences and perspectives of those especially vulnerable to such harms.

A recent viral tweet by Jesi Taylor Cruz (@moontwerk) instructs us to ask: “Is the community ‘vulnerable’ or targeted? ‘Poor’ or chronically disenfranchised? ‘Low-income’ or are its residents disproportionately impacted by discriminatory hiring practices that impact upward mobility? ‘At risk’ or put in a precarious position due to structural violence?”

Questions like these have been on my mind while exploring various data-related technological innovations in healthcare.

While some researchers have measured and attempted to sway the opinions of easier-to-reach populations, I maintain that attempts to centre majorities or easier-to-reach people miss those most targeted for harm by technological innovation.

Research I have carried out with my colleague Dr Stephanie Mulrine has shown that concerns about data-sharing are acute for asylum seekers, those experiencing domestic violence, transgender people, offenders, and ex-offenders: those who are vulnerable to the greatest potential harms and discrimination from data-sharing are already those in society who are most marginalised and disadvantaged. Intersections of socially produced vulnerabilities create even more concern for people about their data.

For example, sexual and reproductive health data about lesbian, gay, bisexual, and transgender (LGBT) asylum seekers can be particularly sensitive, and sharing it beyond the NHS can represent a threat to people’s lives when health data are misused as part of asylum investigations. In some cases, evidence of sexual violence has been used as proof of asylum claimants’ sexuality.

In addition to risks to individuals, there are risks to whole groups that can result from the revelation of information from health records. Hate campaigns in social, online, scholarly, and print media targeting transgender people sometimes involve sharing gender-related health information about transgender people who have been convicted of crimes. In this context, the health information of trans people convicted of crimes becomes a tool to malign the whole group. Many groups that are targeted in the way Jesi Taylor Cruz describes face the same kind of collective concern about representation. For example, decades of demonisation of immigrants in the UK media make the use of data about immigrant groups’ usage of the health service, even if de-identified, a further threat to their access to life-saving care.

While these concerns are serious, they are not necessarily the most urgent for people who are targeted and chronically disenfranchised, who may face more immediate threats to their survival. It is also the case that access to information about how their data are being used is more constrained for marginalised people, who are more likely to be time-poor, to have less knowledge about the particularities of data-related technologies, and to be less able to investigate and fight back against high-tech harms. This leaves it to those of us who can to work in solidarity to safeguard our collective rights, as well as to offer support to those under the most acute threat.

The Hostile Environment is a set of administrative and legislative measures designed to make staying in the United Kingdom as difficult as possible for people without leave to remain (a status equivalent to residency), but it has threatened the safety and security of a far larger group of people, including non-white people born with the right to British citizenship.

Dr. Mwenza Blell is an academic researcher and community organiser based in the UK. A biosocial medical anthropologist, she has worked on a range of projects using qualitative and quantitative methods to investigate health and technology in the UK, the Nordic region, Latin America, South Asia, and East Africa. Her research draws from in-depth ethnography to examine intransigent and often invisible structures of injustice. Mwenza is one of three community stewards of Data, Tech, and Black Communities, a Black woman-led community of impact working with and for Black communities in the UK to make technology a more positive influence in the lives of Black people.