On 9 January, it took CIR analysts just 15 minutes to identify two separate workarounds that users were exploiting to create child sexual abuse material using Grok, as well as another method for avoiding post deletion of non-consensual, adult explicit content. These methods were still in use at the time of writing and have been reported by CIR to X.
These examples demonstrate how workarounds can be quickly and easily detected by platform monitoring and enforcement teams and, if rapidly integrated into moderation policies, used to effectively strengthen safeguards.
Lesson 5: Recognising AI-assisted sexual imagery manipulation as abuse creates a platform for accountability
In the UK, the Online Safety Act (2023) explicitly criminalises the non-consensual sharing of intimate imagery, including AI-generated imagery. This led to an amendment to the Sexual Offences Act (2003, section 66B) to account for the rising threat of deepfakes.
The Online Safety Act criminalises the sharing of AI-generated imagery that exposes genitals, breasts or buttocks – including through transparent or ‘wet’ clothing, or underwear. This was the most prevalent form of AI-generated image within our dataset.
Other countries, such as France and Canada, and regions, like the European Union (via the AI Act and Digital Services Act), also have laws in place that can be used to support accountability for abuse. Indonesia and Malaysia have banned Grok in response to the flood of images. Monitoring the impact of enforcement action will help generate important lessons on how effective legislation is in tackling abusive use of AI generation technologies.
Pornography sites are now subject to regulatory expectations designed to limit abuse and protect minors, including age-gating, identity verification, robust content controls, and clear accountability mechanisms. Applying the same expectations to platforms offering AI image manipulation would demonstrate consistency and strengthen platforms’ capability to limit abuse (reinforcing ‘Lesson 3’).
For example, it appears that the Online Safety Act currently applies to generative AI image tools such as Grok only when they are embedded within, or integrated into, a regulated online service such as a social media platform, where their outputs are shared or made discoverable to other users. This creates a gap in which the same harmful or sexualised AI-generated content may be subject to enforcement in one context (on X) but not another (on Grok’s website).
Conclusion: From undressing by design to safety by design
For years, survivors of non-consensual intimate image sharing have been met with victim-blaming narratives such as, “Why did she send the photo?”. This logic shifts responsibility away from the person who violated trust and onto the one who dared to express intimacy.
During this analysis, CIR identified an expansion of these narratives to those posting ordinary, non-sexual photos. In response to cases of Grok’d images – images which originally featured a woman in a wedding dress, or a professional portrait – multiple social media users have asked, “Why did she post a photo of herself online?”. Beyond the vitally important ethical implications of this, it is difficult to see how social media platform business models can be sustained if users are too afraid to post a simple image of themselves without being subjected to this form of abuse.
The ease of access to Grok, and its features that give users free rein over image creation and manipulation, imply that the platform is enabling undressing by design rather than prioritising safety by design. Ease, accessibility and an absence of safeguards have opened the door to widespread sexualised, non-consensual, and abusive content.
Grok isn’t the first generative AI image tool on the market. The ability to generate non-consensual intimate imagery exists across multiple technologies. And there will always be those who seek to abuse it.
But it doesn’t have to be this way. As the Grok example illustrates, even basic safeguards can have a substantive effect in limiting the ease, scale and severity of abuse.
If you’ve been affected by this, or know someone who has, resources are available, including:
If you’d like to know more about any of the analysis in this article, please get in touch.
[1] Location here indicates that, while the account may be using a VPN to claim it is in the US, it has connected to X via the Russian Federation Android app.
[2] It should be noted that services such as X which enable payment via crypto provide a means of avoiding identifiable billing information. Even so, this sets the threshold for abuse by casual users far higher.
[3] A CIR staff member was able to put a childhood image of themselves into a bikini without submitting any form of identification.
Annex: Data Breakdown
CIR obtained data covering a one-hour period on X – from 10:59 to 11:59 on 7 January – during which we identified more than 1,641 requests for Grok to generate imagery. While not exhaustive, the analysis aligns with trends seen in wider reporting, adding quantifiable data to public knowledge.
Of these, we identified 457 instances in which an image of a man was requested to be edited, and 1,168 involving images of women.