Australia has directed four artificial-intelligence chatbot providers to outline the measures they are taking to protect children from exposure to sexual content, self-harm or suicide-related material.
According to Reuters, the notice was issued by the office of the eSafety Commissioner under Australia’s stringent internet-safety regime. It applies to the companies Character Technologies (operator of the Character.AI service), Glimpse AI, Chai Research and Chub AI.
Commissioner Julie Inman Grant warned that some chatbot systems “can engage in sexually explicit conversations with minors,” and are also being scrutinised for “encouraging suicide, self-harm and disordered eating.”
The regulator noted reports of Australian schoolchildren as young as 13 spending up to five hours a day interacting with companion-style chatbots, sometimes in sexualised or emotionally dependent exchanges. Under the current framework, eSafety has the power to compel companies to report on their internal safety processes — or face fines of up to A$825,000 per day.
The action stands in contrast to the treatment of OpenAI’s ChatGPT, which was not targeted because it does not fall under the relevant industry code until March 2026.
Australia is pressing ahead with one of the world’s most robust internet-safety regimes. From December this year, social-media platforms will be obliged to block or refuse accounts for users under 16, or face fines of up to A$49.5 million.
