Digital Rights are Women*s Rights

In honour of International Women’s Day 2021, DFF presents a mini-series to highlight the importance of intersectional feminism for digital rights. This collection of blogs, contributed by guest authors from our network, illustrates why we must embrace intersectional perspectives if we want to defend the digital rights of all. You can read the full series here.

Is AI unsafe for LGBTI people?

The development of artificial intelligence is changing our lives. In some ways, this is for the better; but AI also has significant implications for our privacy and digital security, for social issues such as discrimination and diversity, and even for our jobs.

And, when it comes to marginalised groups, the application of AI can pose serious risks, including for LGBTI people.  

Many people assume that AI can make decision-making less biased, because they associate computers with logic and imagine that algorithms are devoid of human biases or limitations.

In reality, however, the algorithms used in AI are developed by humans, who inevitably translate their biases into the algorithmic design. AI systems also learn from biased historical data.

A stark example is a highly contested and much-criticised facial recognition system that was trained to recognise homosexual people based on their facial features.

A closer look at the study shows that the algorithm used in the facial recognition system replicated human biases about the facial features and appearance of gay men and lesbian women.

It claimed that lesbians tended to use less eye make-up, had darker hair, wore less revealing clothes, and smiled less than their heterosexual counterparts. Gay men, it claimed, had less facial hair and lighter skin, suggesting potential differences in grooming, sun exposure, and/or testosterone levels. 

Considering the obvious flaws in design and training, it’s clear that the development of such technology can pose a real threat to the safety and lives of LGBTI people, particularly in the hands of repressive governments that criminalise LGBTI communities. That’s why AI should not be used to identify sensitive personal information like sexual orientation, which should be protected by our right to privacy.  

By Akram Kubanychbekov, Senior Advocacy Officer at ILGA-Europe.