China’s cyberspace regulator has released draft regulations for public comment that would place stricter controls on artificial intelligence services that mimic human personalities and form emotional connections with users. The move highlights Beijing’s intent to guide the rapid expansion of consumer-oriented AI by reinforcing safety, ethical standards and responsible usage. The proposed framework would cover AI products available to the public in China that display simulated human traits, thinking styles and communication patterns. These services interact with users emotionally through text, images, audio, video or similar formats, raising concerns about psychological impact and long-term dependence.
According to the draft, AI service providers would be required to caution users against excessive use and to step in when signs of addiction emerge. Companies would bear full safety responsibility across the entire product lifecycle and would need to establish systems for algorithm audits, data security and the protection of personal information. The rules also address psychological risks, requiring providers to identify users’ emotional states and levels of reliance; if extreme emotions or addictive behaviour are detected, timely intervention would be mandatory. The proposal further sets strict content boundaries, banning material that threatens national security, spreads rumours, or promotes violence or obscenity.