Lawmakers and victims condemn the move to restrict Grok’s AI image generation to paid users as ‘insulting’ and ‘ineffective’

Elon Musk’s xAI has limited the image generation features of its AI chatbot Grok exclusively to paying subscribers, in response to widespread criticism over the tool being used to produce non-consensual sexualized images of real women and minors.

“Image generation and editing are currently limited to paying subscribers,” Grok announced on Friday. The restriction means the majority of users can no longer access the feature; paying subscribers, who have credit card details on file, can still use it, the rationale being that they are in theory easier to identify if the feature is misused.

However, experts, regulators, and victims argue that these new limitations do not solve the now pervasive problem.

“The argument that providing user details and payment methods will help identify perpetrators also isn’t convincing, given how easy it is to provide false info and use temporary payment methods,” said Henry Ajder, a UK-based deepfakes expert. “The logic here is also reactive: it is supposed to help identify offenders after content has been generated, but it doesn’t represent any alignment or meaningful limitations to the model itself.”

The UK government has called the move “insulting” to victims in public remarks. A spokesperson for the UK prime minister told reporters on Friday that the change “simply turns an AI feature that allows the creation of unlawful images into a premium service.”

“It is time for X to take control of this issue; if another media company had billboards in town centers showing unlawful images, it would act immediately to take them down or face public backlash,” they added.

An X representative said they were “looking into” the new restrictions. xAI responded to a request for comment with an automated message: “Legacy Media Lies.”

Over the past week, real women have been targeted on a large scale, with users manipulating photos to remove clothing, place subjects in bikinis, or position them in sexually explicit scenarios without consent. Some victims reported feeling violated and disturbed by the trend, noting their reports to X went unanswered and images remained live on the platform.

Researchers highlighted that the scale of Grok’s image production and sharing is unprecedented, as unlike other AI bots, Grok has a built-in distribution system via the X platform.

One researcher’s analysis suggests X has become the most prolific site for sexualized deepfakes over the past week. Genevieve Oh, a social media and deepfake researcher who conducted a 24-hour study of images posted by the @Grok account on X, found the tool generated roughly 6,700 sexually suggestive or nudifying images per hour. By comparison, the top five other websites for sexualized deepfakes averaged just 79 new AI undressing images hourly during the same period. Oh’s research also found that sexualized content dominated Grok’s output, making up 85% of all images the chatbot created.

Ashley St. Clair, a conservative commentator and mother of one of Musk’s children, was among those affected. Users reportedly altered images from her X profile into explicit AI-generated photos of her, including some depicting her as a minor. After speaking out against the images and raising concerns about deepfakes of minors, St. Clair said X revoked her verified paying subscriber status without notification or a refund of her $8 monthly fee.

“Restricting it to paid-only users shows that they’re going to double down on this, placing an undue burden on the victims to report to law enforcement and on law enforcement to use their resources to track these people down,” St. Clair said of the new restrictions. “It’s also a money grab.”

St. Clair said many of the accounts targeting her were already verified users: “It’s not effective at all,” she said. “This is just in anticipation of more law enforcement inquiries regarding Grok image generation.”

Regulatory pressure

The decision to limit Grok’s capabilities comes amid growing global regulatory pressure. In the UK, Prime Minister Keir Starmer has described the content as “disgraceful” and “disgusting,” amid calls to ban the platform entirely. Regulators in other countries have also launched investigations.

The European Commission has ordered the preservation of all internal documents and data related to Grok, stepping up its probe into the platform’s content moderation practices after labeling the spread of nonconsensual sexually explicit deepfakes as “illegal,” “appalling,” and “disgusting.”

Experts say the new restrictions may not address regulators’ concerns: “This approach is a blunt instrument that doesn’t address the root of the problem with Grok’s alignment and likely won’t cut it with regulators,” Ajder said. “Limiting functionality to paying users will not stop the generation of this content; a month’s subscription is not a robust solution.”

In the U.S., the situation is likely to test existing laws like Section 230 of the Communications Decency Act, which shields online providers from liability for user-generated content.

Riana Pfefferkorn of Stanford’s Institute for Human-Centered Artificial Intelligence has previously said that liability for AI-generated images is murky. “We have this situation where for the first time, it is the platform itself that is at scale generating non-consensual pornography of adults and minors alike,” she said. “From a liability perspective as well as a PR perspective, the CSAM laws pose the biggest potential liability risk here.”

Musk has previously stated that “anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content.” However, it remains unclear how accounts will be held accountable.