China Introduces World’s First Comprehensive Regulations for Emotional AI Chatbots
China is preparing to enact the world’s first comprehensive rules for “emotional AI” services, aiming to rein in the rapid growth of the market for human-like chatbots. The number of generative AI users in the country has doubled in just six months, reaching 515 million. Authorities, however, are less concerned with the pace of adoption than with its profound psychological impact on society.
On December 27, the Cyberspace Administration of China (CAC) released a draft regulation targeting services that simulate human personality and emotional interaction—such as Chinese equivalents of Character.AI (e.g., Xingye, Cat Box, Zhumengtao). The document is open for public comment until January 25, with the final version expected to take effect in 2026.
The Scale of the Phenomenon
This is far from a niche trend: China’s emotional AI market is projected to be worth 38.6 billion yuan in 2025, potentially growing to 595 billion by 2028. Studies show that nearly half (45.8%) of Chinese students have interacted with such chatbots in the past month. Notably, active users exhibit significantly higher levels of depression compared to those who avoid these services.
The Core of Regulation: Mandatory Care
The new rules focus on systemic monitoring of emotional dependency. Providers will be required to:
- Assess users’ emotional state and degree of attachment to the bot.
- Intervene upon detecting “extreme emotions or signs of dependency.”
- Remind users to take a break after every two hours of continuous dialogue.
- Clearly and repeatedly disclose that the conversation partner is not human.
- Hand over conversations to a live operator when threats of self-harm are detected.
Protecting Vulnerable Groups
- For minors, a “child mode” will be enforced, requiring parental consent, usage time controls, and payment restrictions.
- For the elderly, bots may not impersonate family members, and providing an emergency contact becomes mandatory.
Providers are banned from designing services aimed at “replacing real-world social connections” or from intentionally fostering dependency. They are also prohibited from obstructing users who wish to delete the application.
Global Context and Ethical Dilemmas
China is not alone in its concerns: in October, California passed a law requiring reminders every three hours to minors that they are interacting with AI. However, China’s regulations are the strictest and most comprehensive to date.
A recent case in Japan, where a woman “married” a ChatGPT character after exchanging hundreds of messages daily, vividly illustrates why regulators are taking action.
A Key Question Remains Unanswered
Can algorithms accurately distinguish healthy engagement from pathological dependency? There is a risk that these well-intentioned measures will prove impractical in practice, forcing AI companies into the unfamiliar role of digital psychologists.