European Parliament and digital violence: Deepfakes and nudifier apps fuel rise in tech-facilitated abuse
Cyberviolence against women remains a growing concern in the digital age, a topic highlighted during the European Parliament’s International Women’s Day events linked to the Commission's Gender Equality Strategy 2026-2030. Magdalena Bissels, a Master’s student in Digital Media in Europe at VUB, attended its discussion on digital violence against women, listening to policymakers discuss regulatory challenges, social media figures recount their experiences of online misogyny and experts urge stronger action against digital violence.
Presented ahead of Women’s Day on 8 March, the Commission’s new gender strategy sets out a raft of goals aimed at combatting gender violence, which disproportionately impacts women and girls.
It recognises that evolving online threats cannot be viewed as individual problems but must be treated as a broader democratic challenge. Increasing numbers of women in political decision-making positions report stepping back from public roles due to online harassment, hate speech and threats of violence.

Former Slovak president Zuzana Čaputová (pictured) spoke openly about the verbal abuse she and her family experienced in a video message, including attacks on her appearance, her role as a mother and repeated death threats.
Globally, only 19 countries currently have a woman head of state, according to UN Women, while women hold only 22.9% of cabinet minister positions as of January 2025.
Female journalists are also frequent targets. A global survey found that 73% of women journalists experienced online violence, including harassment, threats of physical or sexual violence, privacy breaches and coordinated disinformation campaigns.
These gender-specific challenges often lead to self-censorship, reduced public participation and declining mental health, ultimately limiting women’s role in shaping democratic debate and undermining press freedom.
Calls for stronger regulation
Yet only one in four women report abusive behaviour online. This is partly due to a lack of awareness of reporting mechanisms on platforms, but also because of fear of stigma, victim-blaming and secondary victimisation from police or other third parties.
Although platform policies and legal frameworks addressing digital gender-based violence exist, experts argue that implementation remains weak. Tackling tech-facilitated gender-based violence requires stronger cooperation between governments, the private sector and civil society.

Greater emphasis should also be placed on preventing the creation and spread of non-consensual intimate images, rather than relying solely on removal processes that are often too slow for the speed of the online environment, said Alejandra Mariscal (pictured), director of the association Point de Contact, a platform for reporting illegal online content.
Further recommendations included criminalising the full spectrum of digital violence, strengthening regulation of tech platforms and recognising misogyny as hate speech, according to the United Nations.

Green MEP Alexandra Geese also stressed the importance of using precise terminology. Referring to image-based sexual violence as “bikini pictures” minimises the harm and protects perpetrators rather than victims, she said.
Carlos Farinha, president of the Commission for the Protection of Crime Victims in Portugal, emphasised that there was no distinction between online and offline violence.
“Violence is violence,” he said. “Victims need to know that there is no such thing as immunity online.”
How AI is evolving
The rapid development of artificial intelligence is creating new forms of cyberviolence. Experts warn that tech-facilitated gender-based violence, from deepfake pornography to AI “nudifier” apps, is spreading quickly online and disproportionately targets women.
One core issue is the rise of deepfakes. These are videos, images and audio created using artificial intelligence (AI) to realistically simulate or fabricate content, including mimicking a person’s voice or likeness.

The technology has become highly accessible to the general public. Deepfake videos online increased 550% between 2019 and 2023, with 98% of them being non-consensual deepfake pornography. Women are the main targets, accounting for 99% of victims.
Nudifier apps, or generative AI pornification tools, further enable the creation and spread of such material, often amplified through social validation and sharing on social media platforms. This has already led to incidents such as the proliferation of intimate deepfakes on X.
Investigations into AI bot Grok
A January 2026 report by the NGO Center for Countering Digital Hate found that X’s AI bot Grok generated three million sexualised images within 11 days, including 23,000 involving children, feeding into an ecosystem of online misogyny. Users were able to upload a photo and simply ask the bot to remove the person’s clothing, producing images of them in underwear, bikinis, transparent clothing or sexualised poses.
In January 2026, the European Commission launched a new investigation into whether X properly assessed and mitigated risks related to Grok’s functionalities.
Currently, nudifier apps are not explicitly listed among the banned practices under the EU’s AI Act. However, the European Commission is expected to assess whether nudifier apps should be added to the list of prohibited uses. The AI Act already requires the mandatory labelling of AI-generated content.
The Digital Services Act (DSA) also requires very large online platforms to conduct regular risk assessments identifying and mitigating systemic risks. This includes protecting users from gender-based violence and manipulated sexually explicit images, including child sexual abuse material.

Online misogyny and the manosphere
AI-driven sexual content may also pose risks for men. When AI-generated sexual content becomes normalised, real-life sexual experiences may become less rewarding. This can contribute to the emotional and physical isolation of young men, making them more vulnerable to incel narratives.
Incels (involuntary celibates) are part of the manosphere, a loose network of anti-feminist and male-supremacist online communities. They share the belief that feminism has created a gynocentric society that disadvantages men. Their ideology often translates into tactics such as doxing, hacking, sextortion, deepfakes and coordinated harassment, frequently using intimate images to shame and control women.
Research shows that more than 20% of posts on incel forums contain misogynist, racist or homophobic language. Sexual violence is a recurring topic, with the word “rape” appearing every 29 minutes across thousands of posts.
Photos: (main image) ©www.freepik.com; ©European Parliament