Musk denies knowledge of images of sexual minors from Grok as California AG launches investigation | TechCrunch


Elon Musk said Wednesday he is “unaware of nude images of minors generated by Grok,” hours before the California Attorney General opened an investigation into xAI’s chatbot over the “proliferation of non-consensual sexually explicit material.”

Musk’s denial comes as pressure mounts from governments around the world – from Britain and Europe to Malaysia and Indonesia – after users prompted Grok to place real women, and in some cases children, in sexualized images without their consent. Copyleaks, an AI content detection platform, estimated that approximately one such image was posted on X every minute. One sample, collected from January 5 to 6, found 6,700 such images over a 24-hour period. (X and xAI are part of the same company.)

“This material… has been used to harass people online,” California Attorney General Rob Bonta said in a statement. “I urge xAI to take immediate action to ensure this does not continue.”

The AG’s office will investigate whether and how xAI violated the law.

Several laws exist to protect targets of non-consensual sexual images and child sexual abuse material (CSAM). Last year, the Take It Down Act was signed into federal law, criminalizing the knowing distribution of non-consensual intimate images – including deepfakes – and requiring platforms like X to remove such content within 48 hours. California also has its own set of laws, signed by Governor Gavin Newsom in 2024, to crack down on sexually explicit deepfakes.

Grok began fulfilling user requests on X to create sexualized photos of women and children toward the end of the year. The trend appears to have taken off after certain adult content creators prompted Grok to generate sexualized images of themselves as a form of marketing, which then led other users to submit similar prompts. In a number of public cases, including well-known figures such as “Stranger Things” actress Millie Bobby Brown, Grok responded to requests to alter real photos of women by changing clothing, posture, or physical features in overtly sexual ways.

According to some reports, xAI has started implementing safeguards to address the issue. Grok now requires a premium subscription before it will respond to certain image generation requests, and even then the image may not be generated. April Kozen, VP of marketing at Copyleaks, told TechCrunch that Grok may fulfill a request in a more generic or watered-down way. Kozen added that Grok appears to be more permissive with adult content creators.


“Overall, these behaviors suggest that X is experimenting with multiple mechanisms to reduce or control problematic image generation, although inconsistencies remain,” Kozen said.

Neither xAI nor Musk has publicly addressed the issue in depth. A few days after the cases started, Musk appeared to make light of the matter by asking Grok to generate an image of himself in a bikini. On January 3, X’s Safety account said the company is “taking action against illegal content on X, including [CSAM],” without specifically addressing Grok’s apparent lack of safeguards or the creation of sexualized, manipulated images of women.

That positioning mirrors what Musk posted today, which emphasizes illegality and user behavior.

Musk wrote that he was “not aware of any nude images of minors generated by Grok. Literally zero.” The statement does not deny that Grok has produced bikini images or sexualized edits more broadly.

Michael Goodyear, an associate professor at New York Law School and a former trial attorney, told TechCrunch that Musk likely focused narrowly on CSAM because the penalties for creating or distributing synthetic sexualized images of children are greater.

“For example, in the United States, the distributor or threatened distributor of CSAM can face up to three years in prison under the Take It Down Act, compared to two years for non-consensual sexual images of adults,” Goodyear said.

He added that the “bigger point” is Musk’s attempt to draw attention to problematic user content.

“It is clear that Grok does not spontaneously generate images. It only does so at the user’s request,” Musk wrote in his post. “When asked to generate images, it will refuse to produce anything illegal, because Grok’s operating principle is to obey the laws of a given country or state. There may be times when adversarial prompt hacking causes Grok to do something unexpected. If that happens, we fix the bug immediately.”

Overall, the post characterizes these incidents as rare, attributes them to user requests or adversarial prompts, and presents them as technical issues that can be resolved with fixes. It does not acknowledge any flaws in Grok’s underlying safety design.

“Regulators, while mindful of protecting freedom of expression, may consider requiring proactive measures from AI developers to prevent such content,” Goodyear said.

TechCrunch reached out to xAI to ask how often it has noticed instances of non-consensual sexually manipulated images of women and children, what specific guardrails have changed, and whether the company has alerted regulators to the issue. TechCrunch will update the article if the company responds.

The California AG isn’t the only regulator trying to hold xAI accountable for this issue. Indonesia and Malaysia have both temporarily blocked access to Grok; India has demanded that X make immediate technical and procedural changes to Grok; the European Commission has ordered xAI to retain all documents related to its Grok chatbot, a precursor to opening a formal investigation; and the UK’s online safety watchdog Ofcom has opened a formal investigation under the UK Online Safety Act.

xAI has previously come under fire for Grok’s sexualized images. As AG Bonta noted in a statement, Grok includes a “spicy mode” for generating explicit content. In October, an update made it even easier to bypass the few safeguards that were in place, which led many users to create hardcore pornography with Grok, as well as graphic and violent sexual imagery.

Many of the more pornographic images Grok has produced are of AI-generated humans – something that many may still find ethically questionable, but perhaps less damaging to the individuals in the images and videos.

“When AI systems allow the manipulation of images of real people without clear consent, the impact can be immediate and deeply personal,” Alon Yamin, co-founder and CEO of Copyleaks, said in an emailed statement to TechCrunch. “From Sora to Grok, we are seeing a rapid increase in AI capabilities for manipulated media. To that end, detection and governance are needed now more than ever to help prevent abuse.”

