15 Jan
This week (12th January 2026) the government announced it would finally bring the new deepfake abuse offence into force, one year after it was initially announced – following our successful campaign with survivor Jodie*, campaign group #NotYourPorn, Professor Clare McGlynn and Glamour UK.
We were also pleased to see that Ofcom is investigating social media platform X over whether it has broken laws under the Online Safety Act, after its AI tool Grok was used to make non-consensual sexually explicit deepfakes. Subsequently, X announced that it had implemented measures to stop Grok from editing images of real people. This could set an important precedent for women and girls’ rights online.
Responding to this, Andrea Simon, Director of the End Violence Against Women Coalition (EVAW), said:
“While these features should never have been available for users to abuse, and it remains to be seen how X will implement the ban, this win shows how victims of abuse, campaigners and a show of strength from governments can force tech platforms to take action.
But it can’t stop here – given the evolving nature of AI-generated harms, tech platforms must be required to take proactive preventative action. This includes being held to standards of safety-by-design. Ofcom’s violence against women and girls guidance for tech platforms must be made a mandatory code of practice with consequences for non-compliance, so that platforms aren’t left to regulate themselves. The cost of inaction is too great, with countless women and girls harmed before Grok’s image generation tools were disabled.
We expect the government to do more to ensure tech platforms can’t profit from online abuse. Preventing future harms and guaranteeing that victims have meaningful routes to redress will mean building on the Online Safety Act to ensure it is fit for purpose.”
ENDS