Generative AI and Gender-Based Violence in Europe: Trends, Risks and Regulatory Gaps

By Ekaterina Balueva and Anastasia Karagianni, 25th March 2026

On March 5th 2026, the European Commission presented its new Gender Equality Strategy 2026-2030, acknowledging that AI-driven manipulation and online harassment are contributing to a rise in gender-based violence. The Commission notes the escalating spread of non-consensual intimate images as well as the difficulties in removing such content. Studies have shown that 98% of all deepfakes on the internet are of a pornographic nature, and 99% of these depict women. In response, the EU is increasingly relying on instruments such as the Digital Services Act (DSA) and the Directive on Combating Violence Against Women and Domestic Violence (VAW Directive) to address online harms, including the non-consensual sharing of intimate or manipulated images.

Yet the important question remains whether current legal frameworks are sufficient in ensuring access to justice, prevention, and appropriate protections for victims of AI-enabled gender-based violence. To better understand how the legal framework is shaped around AI-enabled violence against women, we spoke to Anastasia Karagianni, Co-Founder of DATAWO & participant of the DFF’s AI Hub, whose work focuses on combating technology-facilitated gender-based violence and advocating for stronger protections for victims.

How does DATAWO define "AI-enabled gender-based online violence"?

As a researcher mainly, and not only as co-founder of DATAWO, I view AI-enabled gender-based online violence (AI-GBV) as an extension of gender-based violence that is mediated, amplified, or automated through AI technologies. A working definition covers any act of violence, harassment, discrimination, exploitation, or structural disadvantage that targets individuals or groups of people on the basis of their gender (including gender identity or perceived gender), and that is facilitated, intensified, or automated through AI systems or AI-driven digital platforms, resulting in physical, psychological, emotional, social (for example reputational), or economic harm.

But to capture the full spectrum of violence in AI and address the power asymmetries that amplify structural inequalities, we should view this definition not only from the survivors’/victims’ perspective, but also from the perpetrators’. In this sense, AI-GBV to me is the use or exploitation of AI to perpetrate, scale, or normalise gender-targeted harm in digital environments, reinforcing existing gender inequalities and producing psychological, social, economic, or physical consequences for victims. Thus, for instance, AI-GBV can extend into healthcare (including maternity health) when AI-driven systems reproduce gendered medical biases or undermine the autonomy of pregnant patients. In such cases, algorithmic decision-making or AI-mediated health advice can normalise practices associated with obstetric violence, reinforcing historical patterns of control over women’s bodies while producing psychological, physical, and reproductive harm.

What trends are you seeing in Europe regarding the rise of generative AI-related cases, and what are the most common types of incidents (e.g., deepfakes, impersonation, non-consensual image creation, harassment automation)?

Across Europe, incidents related to generative AI are increasing, particularly as AI tools become embedded in everyday consumer technologies and platform ecosystems. One emerging concern is the integration of AI in wearable devices such as Ray‑Ban AI Meta glasses, which can capture images, audio, and livestream content in public spaces. This raises risks around covert recording, non-consensual image capture, and the potential creation of AI-generated or manipulated content that can be shared instantly online. At the same time, online platforms are struggling to keep pace with the scale of synthetic media, leading to growing pressure on content moderation systems to detect deepfakes, impersonation, and non-consensual imagery, which in turn increases accountability pressures. But I mention this because much of this moderation work is carried out by data workers, who label training datasets and review flagged content, often under precarious conditions and with significant exposure to harmful material. As a result, the rise of generative AI is not only producing new forms of online abuse, such as impersonation, synthetic harassment, and non-consensual imagery, but also shifting the burden of detection and mitigation onto platform infrastructures and invisible digital labour.

Which groups of women appear to be the most affected?

All women can be affected by AI-GBV because it reflects broader structural inequalities that shape digital spaces. Recent research indicates worrying attitudinal trends among younger generations. A survey by KCL and Ipsos found that a notable share of Generation Z boys believe women should not express opinions publicly or that gender equality has gone “too far”, illustrating how misogynistic narratives are circulating and becoming normalised online. However, the impacts are not evenly distributed. Women from marginalised communities, such as women of colour, women of low socio-economic status, migrant women, LGBTQ+ people, women with disabilities, and women in precarious economic situations, often face disproportionate harm because gender discrimination intersects with racism, xenophobia, homophobia, and economic inequality. In practice, this means they are more frequently targeted by harassment, non-consensual image creation, impersonation, and coordinated abuse, while also having fewer resources and institutional protections to respond to these harms.

Are platforms responding effectively to AI-generated abuse reports?

Although I don’t have a specific case to share, platforms appear to be only partially responding to AI-generated abuse reports, and evidence from Europe suggests that these responses remain inconsistent and often insufficient. Big Tech companies such as Meta and TikTok have introduced measures, including synthetic media labelling, AI detection tools, and reporting systems, to flag deepfakes and other manipulated content. These initiatives are also reinforced by EU regulations such as the Digital Services Act (DSA) and the AI Act, which require platforms to provide reporting mechanisms and improve transparency around AI-generated content. However, persistent gaps remain. Harmful AI-generated content, including non-consensual deepfakes, can remain online for long periods or spread widely before moderation, and critics argue that platforms prioritise engagement or advertising revenue over effective enforcement. At the same time, transparency databases reveal inconsistencies in how moderation actions are reported and implemented.

How well do current European laws address AI-facilitated violence against women, and what are the biggest gaps in regulation today? 

Current European laws represent an important step forward in addressing AI-GBV, but significant regulatory gaps remain. The EU’s first comprehensive framework, Directive (EU) 2024/1385 on combating violence against women and domestic violence, criminalises several forms of online abuse, such as cyberstalking, cyber-harassment, and the non-consensual sharing of intimate images (including deepfakes), and aims to establish a minimum level of protection across Member States. However, from my point of view, the directive only covers specific categories of online violence. It does not comprehensively address emerging harms linked to AI and synthetic media, such as non-consensual sexualised deepfakes, large-scale automated harassment, or platform-driven amplification of misogynistic content. Another major gap is that national implementation varies widely. For example, in Greece, criminalisation is largely limited to “revenge pornography”, leaving other forms of AI-mediated gender-based harm, such as deepfakes, AI-driven harassment, or automated impersonation, outside the scope of the law or only partially covered. This creates a patchwork of protection in which survivors’ and victims’ access to justice and remedies depends heavily on national legislation, despite the EU directive’s intention to harmonise protections.

Last but not least, existing frameworks, such as the DSA, focus primarily on platform risk management rather than on survivor- or victim-centred remedies. As a result, while the EU regulatory landscape is evolving, scholars argue that stronger provisions are still needed to address AI-generated harms, ensure faster takedowns, improve cross-border survivor/victim support, and hold platforms accountable for systemic risks linked to gender-based online violence.

AI technologies are developing faster than the policies designed to govern them. Without proactive safeguards, these tools risk amplifying existing inequalities and enabling new forms of gender-based abuse. There is an urgent need for affected people, civil society organisations, and litigators to work together, since regulation by itself is evidently not sufficient. At DFF, we have launched Strategic Litigation Hubs on AI and Digital Democracy to facilitate coordination and knowledge sharing among Hub participants. We are looking for partners to co-design a Hub on LGBTQI+ rights. If you are interested, get in touch.

Anastasia Karagianni

Co-founder of DATAWO

Ekaterina Balueva

Communications Officer at Digital Freedom Fund