Weak Laws Leave Women Exposed as AI Tools Enable Harder-to-Detect Digital Abuse

The rise of generative AI, experts warn, is accelerating long-standing patterns of misogyny by automating and amplifying harassment, deepfakes, and impersonation in ways that traditional safety systems and existing laws were never designed to handle

Researchers and women’s rights groups warn that a new generation of artificial intelligence tools is accelerating long-standing patterns of gender-based abuse, creating forms of digital harm that move faster, scale wider, and are harder to detect or control. The concern emerges in a world where at least one in three women already experience physical or sexual violence.

Now, powerful AI systems, trained on vast datasets that often encode existing gender biases, are extending those dynamics into digital spaces in ways experts say are increasingly difficult for governments and platforms to police. “It’s a perfect storm,” one recent analysis concludes.

While technology-facilitated abuse has been rising for years, affecting between 16 and 58 percent of women worldwide, depending on the study, researchers say the integration of generative AI has sharply altered the scale and speed of that violence.

A global survey found that 38 percent of women reported personal experiences of online violence, while 85 percent had witnessed abuse of others. Advocates emphasize that the issue is not confined to digital platforms: AI-driven tools are now enabling forms of blackmail, stalking, impersonation, and harassment that frequently spill into women’s personal, professional, and family lives.

Some of the most visible effects appear in the proliferation of deepfake technology. Experts note that many tools used to create synthetic sexual imagery have been built by male-dominated development teams, producing systems that often cannot generate images of men’s bodies at all.

The resulting asymmetry has contributed to a rapidly expanding ecosystem of non-consensual content. According to widely cited research, 90 to 95 percent of all online deepfakes are pornographic, and roughly 90 percent of those images depict women. The total number of deepfake videos online in 2023 was 550 percent higher than just four years earlier.

UN Women interviewed Laura Bates, a feminist activist and author of The New Age of Sexism, and Paola Gálvez-Callirgos, an AI and technology-policy expert, to examine these trends. Bates argues that the first step in understanding digital abuse is “to recognise that the online-offline divide is an illusion.”

She points to examples where synthetic images or tracking tools used by abusers can directly affect a woman’s employment, custody rights, or education. “When a domestic abuser uses online tools to track or stalk a victim, when abusive pornographic deepfakes cause a victim to lose her job or access to her children, when online abuse of a young woman results in offline slut-shaming and she drops out of school,” Bates said, “these are just some examples that show how easily and dangerously digital abuse spills into real life.”

Experts say AI is not only amplifying known forms of abuse but enabling entirely new ones. Interactive deepfakes, designed to impersonate real people, can now initiate conversations with women and girls who may not realize they are interacting with bots.

Catfishing schemes, once reliant on static stolen photos, are increasingly conducted by AI systems capable of sustaining fluid, human-like exchanges. Natural language tools are also being used to identify posts that reveal personal vulnerabilities, then generate targeted harassment or doxing campaigns that incorporate victims’ own data and digital footprints.

Deepfake technology, in particular, is emerging as a key driver of online abuse. Bates links this specifically to entrenched misogyny: “In part, this is about the root problem of misogyny – this is an overwhelmingly gendered issue, and what we’re seeing is a digital manifestation of a larger offline truth: men target women for gendered violence and abuse.”

At the same time, she notes, the accessibility of the tools plays a role. With little technical skill, users can now create sexually explicit synthetic images that are nearly impossible to track once shared. Some estimates put the skew even higher: one 2023 study found that deepfake pornography accounts for 98 percent of all deepfake videos online, and that 99 percent of those targeted are women.

As digital abuse escalates, researchers emphasize the importance of early interventions. Specialists recommend that victims of deepfake or doctored images seek help from organizations with the technical capacity to remove or block non-consensual content.

Resources such as StopNCII, Chayn Global Directory, the Online Harassment Field Manual, Cybersmile Foundation, and Take It Down offer varying forms of support, from hashing intimate images to global directories of legal and psychological assistance.
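Advocates often highlight the hash-based approach behind services such as StopNCII and Take It Down because the intimate image itself never leaves the victim’s device; only a digital fingerprint is shared with participating platforms, which can then block matching uploads. The sketch below in Python is illustrative only: real services use perceptual hashes such as PDQ, which also match resized or re-encoded copies, rather than the exact-match cryptographic hash shown here, and the file name is hypothetical.

```python
import hashlib
from pathlib import Path

def fingerprint(image_path: str) -> str:
    """Compute a local fingerprint of an image; the image itself is never uploaded.

    Illustrative assumption: a SHA-256 digest stands in for the perceptual
    hashes (e.g. PDQ) real services use, which unlike this one can match
    re-encoded or lightly altered copies of the same image.
    """
    return hashlib.sha256(Path(image_path).read_bytes()).hexdigest()

# The victim submits only the fingerprint to a shared block list
# ("my_private_photo.jpg" is a hypothetical local file).
blocked = {fingerprint("my_private_photo.jpg")}

def should_block(upload_path: str) -> bool:
    """A participating platform checks each new upload against the list."""
    return fingerprint(upload_path) in blocked
```

The design matters for victims: because only the fingerprint is shared, seeking help does not require handing the intimate image to anyone.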

Legal protections remain inconsistent. Fewer than half of countries have laws addressing online abuse, and enforcement is often weak. Policymakers say the transnational nature of AI-generated content complicates jurisdiction, while major tech platforms continue to face scrutiny over uneven responses to reports of abuse.

Some governments have begun updating legislation. The UK’s Online Safety Act criminalizes the sharing of manipulated explicit images; the EU’s AI Act requires deepfake creators to disclose their use of synthetic media; Mexico’s Ley Olimpia has inspired similar measures across Latin America; and Australia is moving to strengthen rules around non-consensual explicit content.

Gálvez-Callirgos notes, however, that regulatory strategies must reflect national context. “There isn’t a one-size-fits-all model for AI governance,” she says. But she argues that certain measures, such as criminalizing all forms of technology-facilitated violence and mandating content provenance standards, should be universal.

She recommends legislation requiring AI developers to attach verifiable metadata or watermarking to synthetic media. “This will support automated filtering and make it harder for perpetrators to plausibly deny origin,” she explains.
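One way to make such metadata verifiable is to sign it cryptographically, so a platform can confirm both that a file was declared synthetic and that the record has not been altered or detached. The Python sketch below is a toy illustration under assumed field names and a shared-key HMAC; real provenance standards such as C2PA instead embed certificate-based signatures in the media file itself.

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key-not-for-production"  # illustrative; real schemes use certificates

def sign_provenance(media_bytes: bytes, generator: str) -> dict:
    """Attach a signed provenance record declaring the media is synthetic."""
    record = {
        "synthetic": True,                                      # mandated disclosure
        "generator": generator,                                 # which tool produced it
        "media_sha256": hashlib.sha256(media_bytes).hexdigest() # binds record to file
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_provenance(media_bytes: bytes, record: dict) -> bool:
    """Let a platform check the record matches the file and is untampered."""
    claimed = dict(record)
    signature = claimed.pop("signature", "")
    if claimed.get("media_sha256") != hashlib.sha256(media_bytes).hexdigest():
        return False  # metadata does not belong to this file
    payload = json.dumps(claimed, sort_keys=True).encode()
    return hmac.compare_digest(
        signature, hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    )
```

A platform receiving explicit imagery could then treat files with missing or failing provenance records as higher-risk, which is the kind of automated filtering Gálvez-Callirgos describes.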

Gálvez-Callirgos is part of UN Women’s AI School, a program designed to help women’s organizations understand AI systems, influence policy, and apply the technology in efforts to prevent violence. “At the core of successful AI adoption is trust. Innovation that builds trust, is inclusive, and prevents harm is ultimately more sustainable and widely adopted than innovation that undermines those values,” she says.

Researchers also point to the influence of misogynistic online ecosystems, including the “manosphere,” which they say has gained traction through AI-driven recommendation algorithms. Bates argues that these systems magnify harmful content: “There is massive reinforcement between the explosion of AI technology and the toxic extreme misogyny of the manosphere. AI tools allow the spread of manosphere content further, using algorithmic tweaking that prioritizes increasingly extreme content to maximize engagement.”

Studies suggest that two-thirds of young men regularly engage with masculinity influencers, and experts warn of growing links between such content and radicalization.

Prevention efforts, they say, depend on earlier conversations about digital literacy, source skepticism, and the social dynamics shaping online spaces. Recommendations for engaging men and boys include examining fears amplified by online influencers, evaluating the credibility and financial incentives behind viral content, and encouraging male role models—teachers, coaches, relatives—to lead those discussions.

Digital safety advocates also encourage basic protective measures: strengthening passwords, enabling two-factor authentication, adjusting privacy settings, and educating oneself about AI-facilitated abuse. They argue that meaningful change requires pushing for platform accountability and supporting organizations working to address digital violence.

For policymakers concerned that stronger governance could hinder innovation, Gálvez-Callirgos argues the opposite. “The dichotomy of ‘regulation or innovation’ is a falsehood learned from the unregulated evolution of social media over the past decade,” she says.

She contends that AI governance should not be viewed as restricting invention but as shaping the conditions under which technology evolves. In her view, safeguards that prevent harm are essential to ensuring that AI systems serve the broader public interest.