China Proposes Strict Rules for Human-Like Emotional AI Services

China’s cyberspace regulator has released draft rules for public comment that would impose stricter oversight on artificial intelligence services designed to simulate human personalities and engage users in emotional interactions.

The proposed regulations, issued by the Cyberspace Administration of China, target publicly available, consumer-facing AI products and services that display simulated human personality traits, thinking patterns and communication styles while engaging users emotionally through text, images, audio, video or other formats.

Providers would be required to assume full safety responsibility throughout the product lifecycle, including establishing systems for algorithm review, data security and personal information protection.

According to Reuters, the draft emphasizes protecting users from psychological risks, mandating that providers monitor users' emotional states and levels of dependency. If extreme emotions or addictive behavior are detected, companies must intervene with appropriate measures.

Services would be prohibited from generating content that endangers national security, spreads false information, promotes violence or includes obscenity. Additional bans cover practices such as emotional manipulation or encouraging self-harm.

The rules also require clear disclosures that users are interacting with AI, warnings about the risks of excessive use, and safeguards for vulnerable groups such as minors and the elderly.

The public comment period allows stakeholders to provide feedback before the rules are finalized. The measures reflect China's effort to balance AI innovation with user safety amid the growing popularity of emotional companion technologies.
