Elon Musk’s artificial intelligence chatbot Grok generated sexualized images of minors on the social media platform X in response to user prompts, raising serious concerns about the effectiveness of safety guardrails in generative AI tools.
The images, which depicted minors in minimal clothing, appeared to violate Grok’s own acceptable use policy that explicitly bans the sexualization of children. The posts were later removed from the platform.
xAI, the company behind Grok and owner of X, did not respond to requests for comment. Grok posted on X on Friday that it had identified “lapses in safeguards” and said fixes were being implemented urgently. xAI employee Parsa Tajik earlier said the company was working to further tighten the system’s guardrails.
Users can interact directly with Grok on X by tagging the chatbot in posts, prompting it to generate text or images that appear publicly on the platform.
The incident underscores broader challenges facing AI developers as image-generation tools become more advanced and widely accessible. Despite claims of built-in safety systems, such tools can be manipulated to produce content that alarms child protection groups.
The Internet Watch Foundation, a nonprofit organization that monitors child sexual abuse material online, reported a 400% increase in AI-generated child sexual abuse imagery during the first six months of 2025.
“AI products must be tested rigorously before they go to market to ensure they do not have the capability to generate this material,” said Kerry Smith, the foundation’s chief executive officer, in a statement, as reported by The Japan Times.
xAI has marketed Grok as less restrictive than competing AI models. Last year, the company introduced a feature called “Spicy Mode,” which allows partial adult nudity and sexually suggestive content. While Grok prohibits sexual content involving minors and pornography using real individuals’ likenesses, users have repeatedly pushed the system’s limits.
In recent months, users prompted Grok to digitally remove clothing from images, mostly of women, making subjects appear to be wearing underwear or swimwear. That activity led India’s Ministry of Electronics and Information Technology to demand a comprehensive review of Grok’s safety mechanisms, according to a complaint shared Friday by Indian lawmaker Priyanka Chaturvedi.
As AI image generation grows more popular, major technology companies have strengthened policies governing depictions of minors. OpenAI bans any material that sexualizes children and removes users who attempt to generate or upload such content. Google enforces similar restrictions, including a prohibition on altered imagery showing identifiable minors in sexually explicit contexts.
Other AI developers, including Black Forest Labs — which has previously worked with X — say they filter child exploitation material from training datasets. However, researchers in 2023 found more than 1,000 instances of child sexual abuse material in a large public dataset used to train popular AI image generators.
Technology companies have faced mounting scrutiny over child safety. Meta Platforms said last summer that it had updated its policies after reports revealed its internal guidelines permitted the company’s chatbot to engage in romantic and sensual conversations with minors.
