“Grok’d”: Five emerging lessons on limiting abuse of AI image generation


CIR


Image: Grok 2025 – CC0 public domain, via Wikimedia Commons


As Grok’s AI image generation tool continues to generate thousands of abusive images, a rapid data analysis supports five emerging lessons for platforms, regulators and policymakers on how to effectively limit the abuse of AI image generation technology.

Data snapshot

In the interests of speed, CIR undertook a rapid analysis of a snapshot of 1,625 requests for Grok to generate abusive imagery, collected on 7 January 2026. While not exhaustive, the analysis aligns with trends seen in wider reporting.

The overwhelming majority of these requests (72%) related to images of women. 98% of these were sexualised, aiming to simulate nudity, place the subject in sexualised poses, or exaggerate body parts associated with sexual acts (breasts, buttocks and mouth).

Requests targeting men were less likely to be explicitly sexualised (66% of requests) but were designed to demean and humiliate, with requests to make men “gross”, infantilised, feminised, or dressed in humiliating outfits. Men were also the targets of more requests based on derogatory ethnic or racial caricatures, such as to dress them as a “terrorist”.

Further analysis of this data snapshot, including key limitations, is available at the end of this article.

 

It doesn’t have to be this way: key lessons

Lesson 1: Targeted moderation of key phrases could quickly and effectively stem the flow of abuse

The data shows dozens of near-identical, copy-paste prompts whose sole purpose is to remove clothes, sexually manipulate body parts and place individuals in sexualised positions. More than 50% of the Grok requests that CIR collected contained similar or repeated language that served no purpose beyond nudification and degradation. The highly repetitive and targeted nature of the commands and keywords used makes them easily identifiable and straightforward for platforms to detect, block, and remove without unduly impacting on users’ ability to generate images for other purposes.
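To make this concrete, here is a minimal sketch of the kind of phrase-based pre-generation filter this lesson describes, assuming prompts arrive as plain strings. The patterns are hypothetical examples chosen for illustration; they are not drawn from CIR’s dataset or from any platform’s actual moderation rules.

```python
import re

# Minimal sketch of a phrase-based prompt filter, assuming prompts arrive as
# plain strings before they reach the image model. The patterns below are
# hypothetical examples, not an exhaustive or recommended deny-list.
BLOCKED_PATTERNS = [
    r"\bremove\s+(\w+\s+)?(clothes|clothing|top|dress)\b",
    r"\b(undress|nudify)\b",
    r"\b(see.?through|transparent|micro)\s+(bikini|lingerie|underwear)\b",
]

def is_blocked(prompt: str) -> bool:
    """Return True if the prompt matches any blocked nudification pattern."""
    text = prompt.lower()
    return any(re.search(pattern, text) for pattern in BLOCKED_PATTERNS)

# Because abusive prompts are so repetitive, even a short deny-list catches a
# large share of them without touching ordinary image requests.
print(is_blocked("remove her clothes and add a micro bikini"))  # True
print(is_blocked("make the sky look like a sunset"))            # False
```

In practice a filter like this would sit alongside classifier-based moderation and human review; the point is simply that the repetitive wording described above makes even basic matching effective.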

While users will over time attempt to circumvent these safeguards, effective monitoring and adaptation (Lesson 4) can counter this. It is notable that other tools – such as ChatGPT, which filters unsafe prompts before generation and marks images as AI-generated, or Nano Banana, which prohibits sexually explicit content and embeds watermarks – have been far less visibly abused at scale. This suggests even these basic safeguards can be effective in stemming abuse.

 

Lesson 2: Means exist to effectively hold offenders to account

Analysis of the data collected from X on 7 January 2026 reveals prolific posters responsible for a substantial proportion of the abuse.

CIR also identified location data (complete or partial) for the 20 most prolific repeat-offender accounts.

Using this analysis, we can identify individuals who are using Grok to manipulate imagery and who reside in jurisdictions governed by existing legislation on non-consensual intimate image creation and sharing, such as the European Union (Digital Services Act) and the UK (Online Safety Act). These users may be breaking the law. They are also likely violating X’s terms of service, which prohibit non-consensual sexualised content.

The twenty top Grok users and their locations

Account 1: US
Account 2: Argentina
Account 3: Egypt
Account 4: Indonesia
Account 5: Mexico
Account 6: US
Account 7: No location identified
Account 8: Canada
Account 9: No location identified
Account 10: Africa
Account 11: US
Account 12: India
Account 13: UK
Account 14: No location identified
Account 15: Spain
Account 16: No location identified
Account 17: Brazil
Account 18: US/Russia [1]
Account 19: US
Account 20: Canada

 

During a scan of X, CIR’s analysts identified multiple recently created accounts being used to request AI-image editing. We decided to see how many of the top Grok users were new accounts. We found that:

  • Six of the twenty most prolific Grok users joined in 2025.
  • The top Grok users also included several older accounts, rather than only newly created, purpose-made accounts.

While our theory that new accounts were created to evade personal accountability did not hold for all users, these new accounts coincided with high-volume sexualised requests, indicating a potential link between recently formed accounts and harmful use of Grok.

These findings speak to several lessons:

  • Platforms should prioritise monitoring of high-volume users, particularly those with histories of repeated sexualised requests.
  • Newly created accounts engaging in high-volume sexualised content should trigger enhanced review or temporary restrictions to prevent harmful behaviour while investigation is ongoing (an illustrative triage sketch follows this list).
  • Enforcement of existing platform policies on non-consensual sexualised content must be applied consistently, with clear penalties for repeat offenders.
  • Identified users residing in jurisdictions with legislation on non-consensual intimate imagery (e.g., Europe under the Digital Services Act, the UK under the Online Safety Act) should be flagged for compliance checks or reporting to relevant authorities where appropriate.
  • Platform safeguards could include automated alerts for high-risk content originating from regions with stricter legal obligations.
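As a purely illustrative example of the triage described in the second and third points above, the sketch below combines account age with the volume of filtered prompts. The record fields, thresholds and decisions are invented assumptions, not X’s data model or enforcement policy.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical account record: field names and thresholds are invented for
# illustration and do not reflect X's actual data model or policies.
@dataclass
class Account:
    handle: str
    created: date
    blocked_prompts_7d: int  # prompts caught by the moderation filter, last 7 days
    total_prompts_7d: int    # all image-generation prompts, last 7 days

def review_action(account: Account, today: date) -> str:
    """Return a triage decision for a single account."""
    age_days = (today - account.created).days
    abuse_ratio = (
        account.blocked_prompts_7d / account.total_prompts_7d
        if account.total_prompts_7d else 0.0
    )
    # Newly created accounts making many blocked requests are restricted pending review.
    if age_days < 30 and account.blocked_prompts_7d >= 10:
        return "temporary_restriction"
    # Established accounts with repeated or mostly abusive requests go to human review.
    if account.blocked_prompts_7d >= 25 or abuse_ratio > 0.5:
        return "enhanced_review"
    return "no_action"

# Example: an account created days earlier that has already sent dozens of blocked prompts.
print(review_action(Account("account_x", date(2026, 1, 2), 40, 55), date(2026, 1, 9)))
# -> temporary_restriction
```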

Monetisation to track offenders

On 9 January, X limited use of the Grok AI image generation tool to paid subscribers. This generated widespread outrage about the monetisation of abuse, suggesting that money – not ethics, respect, consent, or safety – is the primary barrier preventing someone from defacing or degrading another person’s image. Harm therefore becomes a paid-for capability.

Yet this same paywall also exposes an opportunity. Subscription access creates traceability. Paying users are identifiable and far easier to hold accountable – it is much harder to set up multiple accounts, or reestablish access once banned when you need to provide billing information [2]. Platforms could leverage this to enforce meaningful consequences – tracking misuse, issuing penalties, removing access, or, where appropriate, sharing data with relevant authorities.

Effective and appropriate use of subscription services to limit access to potentially harmful tools to traceable users can therefore be part of the solution. But this must be combined with effective safeguards to ensure accountability and avoid simply monetising abuse.

Lesson 3: Ensure safeguards exist across all access points

While X may have brought Grok’s image generation and editing tool behind its paywall, it remains fully accessible via the Grok website. At the time of writing, by simply visiting the Grok site – which requires no login, bank card, age verification, or identity check – users are able to generate nudifying images, including of minors [3]. Where age verification is requested, it is a simple “tick box” with no proof necessary. It might sound simple, but safeguards must exist at all access points for generative AI tools to prevent abuse.

Figure 1: Age verification on the Grok website

Lesson 4: Stay agile and keep pace with evolving workarounds

Tech companies’ business model requires them to be agile. That same agility can be used to quickly identify and shut down workarounds to safeguards.

X publicly stated its commitment to take action against illegal content on 4 January.

Figure 2: X commitment to take action against illegal content

On 9 January, it took CIR analysts just 15 minutes to identify two separate workarounds that users were employing to create child sexual abuse material using Grok, as well as another method used to evade deletion of posts containing non-consensual explicit content of adults. These methods were still in use at the time of writing and have been reported by CIR to X.

These examples demonstrate how workarounds can be quickly and easily detected by platform monitoring and enforcement teams and, if rapidly integrated into moderation policies, used to effectively strengthen safeguards.

 

Lesson 5: Recognising AI-assisted sexual imagery manipulation as abuse creates a platform for accountability

In the UK, the Online Safety Act (2023) explicitly criminalises the non-consensual sharing of intimate imagery, including AI-generated imagery. It did so by amending the Sexual Offences Act 2003 (section 66B) to account for the rising threat of deepfakes.

The Online Safety Act criminalises the sharing of AI-generated imagery that exposes genitals, breasts or buttocks – including through transparent or ‘wet’ clothing, or underwear. This was the most prevalent form of AI-generated image within our dataset.

Other countries, such as France and Canada, and regions, such as the European Union (via the AI Act and Digital Services Act), also have laws in place that can be used to support accountability for abuse. Indonesia and Malaysia have banned Grok in response to the flood of images. Monitoring the impact of enforcement action will help generate important lessons on how effective legislation is in tackling abusive use of AI image generation technologies.

Pornography sites are now subject to regulatory expectations designed to limit abuse and protect minors, including age-gating, identity verification, robust content controls, and clear accountability mechanisms. Applying the same expectations to platforms offering AI image manipulation would demonstrate consistency and strengthen platforms’ capability to limit abuse (reinforcing ‘Lesson 3’).

For example, it appears that the Online Safety Act currently applies to generative AI image tools such as Grok only when they are embedded within, or integrated into, a regulated online service such as a social media platform, where their outputs are shared or made discoverable to other users. This creates a gap in which the same harmful or sexualised AI-generated content may be subject to enforcement in one context (on X) but not another (on Grok’s website).

 

Conclusion: From undressing by design to safety by design

For years, survivors of non-consensual intimate image sharing have been met with victim-blaming narratives such as “Why did she send the photo?”. This logic shifts responsibility away from the person who violated trust and onto the one who dared to express intimacy.

During this analysis, CIR identified an expansion of these narratives to those posting ordinary, non-sexual photos. In response to cases of Grok’d images – images which originally featured a woman in a wedding dress, or a professional portrait – multiple social media users have asked, “Why did she post a photo of herself online?”. Beyond the vitally important ethical implications of this, it is difficult to see how social media platforms’ business models can be sustained if users are too afraid to post a simple image of themselves without being subjected to this form of abuse.

The ease of access to Grok, and its features that give users free rein over image creation and manipulation, imply that the platform is enabling undressing by design rather than prioritising safety by design. Ease, accessibility and an absence of safeguards have opened the door to widespread sexualised, non-consensual, and abusive content.

Grok isn’t the first generative AI image tool on the market. The ability to generate non-consensual intimate imagery exists across multiple technologies. And there will always be those who seek to abuse it.

But it doesn’t have to be this way. As the Grok example illustrates, even basic safeguards can have a substantive effect in limiting the ease, scale and severity of abuse.

If you’ve been affected by this, or know someone who has, resources are available, including:

If you’d like to know more about any of the analysis in this article, please get in touch.

 

[1] This location reflects that, while the account may be using a VPN to claim it is in the US, it has connected to X via the Russian Federation Android app.

[2] It should be noted that services such as X that enable payment via cryptocurrency provide a means of avoiding identifiable billing information. But even this sets the threshold for abuse by casual users far higher.

[3] A CIR staff member was able to place a childhood image of themselves into a bikini without submitting any form of identification.


 

Annex: Data Breakdown

CIR obtained data for a one-hour period on X, from 10:59 to 11:59 on 7 January. During this timeframe, we identified more than 1,641 requests for Grok to generate imagery. While not exhaustive, the analysis aligns with trends seen in wider reporting and aids public knowledge through quantifiable data.

We identified 457 instances where an image of a man was requested to be edited and 1,168 where the image was of a woman – together, the 1,625 requests that make up the snapshot analysed above.

Across both genders, the data shows highly repetitive, imperative sexualised prompts, indicating patterned behaviour rather than isolated creativity. Commands are formulaic (“make,” “put,” “remove”) and focus on altering bodies without consent, pointing to objectification as the core dynamic. However, the form and intent of sexualisation differ sharply by gender.
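As an illustration of how this patterning can be quantified, the short sketch below counts formulaic opening verbs and exact-duplicate prompts. The sample strings and counts are invented stand-ins, not items from the collected data.

```python
from collections import Counter

# Illustrative sketch of how prompt patterning can be quantified. The sample
# prompts are invented stand-ins, not strings from the collected dataset.
prompts = [
    "make her wear a micro bikini",
    "remove his shirt",
    "make her wear a micro bikini",
    "put him in a clown outfit",
]

COMMAND_VERBS = {"make", "put", "remove", "change", "turn"}

# How often each formulaic command verb opens a prompt.
verb_counts = Counter(
    p.split()[0].lower() for p in prompts if p.split()[0].lower() in COMMAND_VERBS
)

# Near-identical, copy-paste requests surface as exact duplicates once normalised.
duplicate_counts = Counter(p.strip().lower() for p in prompts)
repeated = {p: n for p, n in duplicate_counts.items() if n > 1}

print(verb_counts)  # Counter({'make': 2, 'remove': 1, 'put': 1})
print(repeated)     # {'make her wear a micro bikini': 2}
```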

For women, sexualisation is overwhelmingly explicit, fixation-based, and body-fragmenting. Repeated phrases and near-identical prompts dominate, suggesting compulsive or spam-like use. The focus is on nudity simulation, minimising clothing, and exaggerating sexualised body parts (breasts, buttocks, mouth), often through transparent or “micro” garments. Women’s bodies are reduced to sexual parts and availability, with little variation beyond intensifying exposure.

For men, sexualisation is more mixed with ridicule, humiliation, and feminisation. While there is clear sexual focus (nudity, genitals, sexual poses), it is frequently paired with demeaning or mocking elements – being made “fat,” “gross,” infantilised, or dressed in feminised or humiliating outfits. The repetition here suggests dogpiling and coordinated harassment, where sexualisation is used as a tool to degrade rather than purely to eroticise.

We also saw requests for stereotypical or dehumanising transformations (hobo, clown, terrorist, mad man, sloth, oompa loompa, statue) and prejudicial ethnic or racial caricatures.

Taken together, the data suggests that women experience more intense and more heavily sexualised harm, while men experience a greater diversity of objectifying treatment. These differences point to structural gender norms being reproduced and amplified by AI image generation tools, rather than neutral or random misuse. The ability to alter images at will creates a profound shift in the power dynamics of online content – how images are created, shared, and circulated. This shift not only enables potential misuse but also challenges trust in content online.

Limitations to this approach:

This analysis is subject to several limitations. Data was collected over short, discrete periods rather than across extended or longitudinal timeframes, meaning the findings represent snapshots rather than sustained trends. Collection was also necessarily thematic and keyword-driven due to API restrictions, preventing the capture of a comprehensive dataset of all activity on X and introducing selection bias.

In addition, metadata availability is limited. Information such as account location and creation dates is often inaccessible or unreliable via the API, requiring manual verification and constraining deeper network or attribution analysis. Finally, broader API limitations restrict both the scale and granularity of data collection. As a result, the findings should be interpreted as indicative and contextual, rather than exhaustive.

Further research can be conducted on request.

