“This material…has been used to harass people across the internet,” said California Attorney General Rob Bonta in a statement. “I urge xAI to take immediate action to ensure this goes no further.”
The AG’s office will investigate whether and how xAI violated the law.
Several laws exist to protect targets of nonconsensual sexual imagery and child sexual abuse material (CSAM). Last year the Take It Down Act was signed into federal law, criminalizing the knowing distribution of nonconsensual intimate images – including deepfakes – and requiring platforms like X to remove such content within 48 hours. California also has its own series of laws, signed by Gov. Gavin Newsom in 2024, that crack down on sexually explicit deepfakes.
Grok began fulfilling user requests on X to produce sexualized images of women and children toward the end of the year. The trend appears to have taken off after certain adult-content creators prompted Grok to generate sexualized imagery of themselves as a form of marketing, leading other users to issue similar prompts. In a number of public cases, including those of well-known figures like “Stranger Things” actress Millie Bobby Brown, Grok responded to prompts asking it to alter real photos of women by changing clothing, body positioning, or physical features in overtly sexual ways.
According to some reports, xAI has begun implementing safeguards to address the issue. Grok now requires a premium subscription before responding to certain image-generation requests, and even then the image may not be generated. April Kozen, VP of marketing at Copyleaks, told TechCrunch that Grok may fulfill a request in a more generic or toned-down way. They added that Grok appears more permissive with adult-content creators.
“Overall, these behaviors suggest X is experimenting with multiple mechanisms to reduce or control problematic image generation, though inconsistencies remain,” Kozen said.
Neither xAI nor Musk has publicly addressed the problem head-on. A few days after the instances began, Musk appeared to make light of the issue by asking Grok to generate an image of himself in a bikini. On January 3, X’s safety account said the company takes “action against illegal content on X, including [CSAM],” without specifically addressing Grok’s apparent lack of safeguards or the creation of sexualized manipulated imagery involving women.
That positioning mirrors what Musk posted today, which emphasizes illegality and user behavior rather than Grok’s own safeguards.
Musk wrote he was “not aware of any naked underage images generated by Grok. Literally zero.” That statement doesn’t deny the existence of bikini pics or sexualized edits more broadly.
Michael Goodyear, an associate professor at New York Law School and former litigator, told TechCrunch that Musk likely narrowly focused on CSAM because the penalties for creating or distributing synthetic sexualized imagery of children are greater.
“For example, in the United States, the distributor or threatened distributor of CSAM can face up to three years imprisonment under the Take It Down Act, compared to two for nonconsensual adult sexual imagery,” Goodyear said.
He added that the “bigger point” is Musk’s attempt to draw attention to problematic user content.
“Obviously, Grok does not spontaneously generate images. It does so only according to user request,” Musk wrote in his post. “When asked to generate images, it will refuse to produce anything illegal, as the operating principle for Grok is to obey the laws of any given country or state. There may be times when adversarial hacking of Grok prompts does something unexpected. If that happens, we fix the bug immediately.”
Taken together, the post characterizes these incidents as uncommon, attributes them to user requests or adversarial prompting, and presents them as technical issues that can be solved through fixes. It stops short of acknowledging any shortcomings in Grok’s underlying safety design.
“Regulators may consider, with attention to free speech protections, requiring proactive measures by AI developers to prevent such content,” Goodyear said.
TechCrunch has reached out to xAI to ask how many instances of nonconsensual sexually manipulated images of women and children it has caught, which guardrails specifically changed, and whether the company notified regulators of the issue. TechCrunch will update this article if the company responds.
The California AG isn’t the only regulator trying to hold xAI accountable for the issue. Indonesia and Malaysia have both temporarily blocked access to Grok; India has demanded that X make immediate technical and procedural changes to Grok; the European Commission ordered xAI to retain all documents related to its Grok chatbot, a precursor to opening a new investigation; and the UK’s online safety watchdog Ofcom opened a formal investigation under the UK’s Online Safety Act.
xAI has come under fire for Grok’s sexualized imagery before. As AG Bonta pointed out in a statement, Grok includes a “spicy mode” for generating explicit content. In October, an update made it even easier to jailbreak what few safety guidelines existed, resulting in many users creating hardcore pornography with Grok, as well as graphic and violent sexual images.
Many of the more pornographic images Grok has produced have been of AI-generated people, something many might still find ethically dubious but perhaps less harmful, since no real individuals appear in the images and videos.
“When AI systems allow the manipulation of real people’s images without clear consent, the impact can be immediate and deeply personal,” Copyleaks co-founder and CEO Alon Yamin said in a statement emailed to TechCrunch. “From Sora to Grok, we are seeing a rapid rise in AI capabilities for manipulated media. To that end, detection and governance are needed now more than ever to help prevent misuse.”
Source: TechCrunch