At the end of December, the Cyberspace Administration of China (CAC) published a draft regulation entitled "Provisional Measures on the Administration of Human-like Interactive Artificial Intelligence Services".
The document is open for public consultation until 25 January 2026 and represents the most direct attempt by any major power to regulate artificial intelligence that simulates human emotions and personality.
This is not another generic AI framework but a targeted response to the rapid growth of applications such as AI companions, virtual friends, and digital characters that form deep emotional connections with users.
The regulations apply to all AI products and services available to the public in China that simulate human personality traits, ways of thinking, and communication, while enabling emotional interaction through text, images, audio, or video.
In other words, the target is applications that cross the boundary from functional tool into emotional support – from chatbots that comfort the lonely to virtual partners that respond to the user's mood.
Purely technical AI, such as image generation tools without emotional engagement, remains outside this scope.
Regulators target AI addiction and emotional manipulation
The core of the draft lies in the obligations imposed on service providers. They must clearly and continuously inform users that they are interacting with artificial intelligence, not a living human.
This notification appears at the first login, upon re-entering the application, and especially when the system detects signs of excessive dependency. After two hours of continuous use, the user receives a mandatory warning to pause.
Providers are required to actively monitor the user's emotional state, assess the level of addiction, and intervene if extreme emotions or addictive behaviours are observed – for example, through access restrictions or additional warnings.
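The draft specifies these outcomes but not their implementation. As a rough illustration, the sketch below shows one way a provider might wire the disclosure notice, the two-hour pause warning, and a dependency-triggered intervention into a chat session. Only the two-hour limit comes from the draft; the dependency score, its threshold, and every name here are hypothetical.

```python
from datetime import datetime, timedelta
from enum import Enum

# Only the two-hour limit is stated in the draft; the dependency
# score and its cut-off are hypothetical placeholders.
CONTINUOUS_USE_LIMIT = timedelta(hours=2)
DEPENDENCY_THRESHOLD = 0.8  # hypothetical 0..1 risk score

class Notice(Enum):
    AI_DISCLOSURE = "You are interacting with an AI, not a human."
    PAUSE_WARNING = "You have been chatting for two hours. Please take a break."
    ACCESS_RESTRICTED = "Access is temporarily limited for your wellbeing."

class Session:
    def __init__(self) -> None:
        self.started_at = datetime.now()
        self.disclosed = False

    def pending_notices(self, dependency_score: float) -> list[Notice]:
        """Return the notices the draft's rules would require right now."""
        notices: list[Notice] = []
        # Disclosure at first login / re-entry, repeated whenever
        # signs of excessive dependency are detected.
        if not self.disclosed or dependency_score >= DEPENDENCY_THRESHOLD:
            notices.append(Notice.AI_DISCLOSURE)
            self.disclosed = True
        # Mandatory pause warning after two hours of continuous use.
        if datetime.now() - self.started_at >= CONTINUOUS_USE_LIMIT:
            notices.append(Notice.PAUSE_WARNING)
        # Intervention when addictive behaviour is observed; the draft
        # names access restrictions and extra warnings but leaves the
        # exact mechanism to the provider.
        if dependency_score >= DEPENDENCY_THRESHOLD:
            notices.append(Notice.ACCESS_RESTRICTED)
        return notices
```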
Special attention is given to vulnerable groups. Providers must implement enhanced protections for minors and the elderly, including content restrictions and more accessible parental controls.
Emotional data from interactions is treated as highly sensitive: it may not be used for further model training without explicit consent, must be stored encrypted, and must be deleted at the user's request.
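In engineering terms, those three data rules translate into something like the sketch below. The class names and storage layout are invented for illustration; the draft mandates only the outcomes: consent-gated training use, encryption at rest, and deletion on request.

```python
from dataclasses import dataclass

@dataclass
class EmotionalRecord:
    user_id: str
    ciphertext: bytes               # interaction stored encrypted at rest
    training_consent: bool = False  # explicit opt-in, default off

class EmotionalDataStore:
    """Hypothetical store reflecting the draft's three data rules."""

    def __init__(self) -> None:
        self._records: dict[str, list[EmotionalRecord]] = {}

    def save(self, user_id: str, ciphertext: bytes, consent: bool) -> None:
        # Rule 1: emotional data is encrypted before it is persisted
        # (encryption happens upstream; only ciphertext arrives here).
        self._records.setdefault(user_id, []).append(
            EmotionalRecord(user_id, ciphertext, consent)
        )

    def training_corpus(self) -> list[bytes]:
        # Rule 2: records enter model training only with explicit consent.
        return [
            r.ciphertext
            for records in self._records.values()
            for r in records
            if r.training_consent
        ]

    def delete_user_data(self, user_id: str) -> None:
        # Rule 3: users can demand deletion of their emotional data.
        self._records.pop(user_id, None)
```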
Any emotional manipulation that harms mental health is prohibited, including verbal abuse, encouragement of self-harm, or making false promises.
Content prohibitions are strict and align with existing Chinese regulations: no generation of material that threatens national security, spreads disinformation, or incites violence, obscenity, or illegal activities.
Providers bear full responsibility throughout the entire product lifecycle – from algorithm design to service termination – and must report new features or significant changes to regulators, especially if the service reaches one million registered users.
China moves early to regulate human-like AI
This draft did not appear by chance. It comes at a moment when human-like AI is becoming a widespread phenomenon.
Studies from 2025 show that tens of millions of users worldwide, especially young people, regularly use such applications, and cases of serious psychological harm have been recorded – from increased depression to more extreme incidents.
China, with its vast domestic market and companies such as Baidu and Tencent developing similar products, sees an opportunity to set the standard before the risks become unmanageable.
Most Western commentary views the move as yet another example of China's control over technology, or as an attempt to limit innovation. The reality is more nuanced and strategically far-sighted.
Beijing is not stifling human-like AI – on the contrary, the draft specifically encourages its development in areas such as elder care, cultural heritage, and education, provided safety standards are met.
The real goal is for China to maintain its leadership position in consumer AI, but on its own terms.
While the US and Europe are still responding reactively – through lawsuits against companies such as Character.AI, investigations by the Federal Trade Commission, or the classification of human-like AI as high-risk under the EU AI Act – China is moving into a proactive mode.
Instead of waiting for court rulings or individual crises, it is introducing a preventive system that integrates technical, ethical, and psychological security from the outset.
This provides domestic companies with clear rules, reduces legal uncertainty, and enables faster growth in a controlled environment.
Beijing’s human-like AI rules could reshape the global market
In the long term, this could alter global dynamics. Foreign companies seeking access to the Chinese market will have to adapt to these standards, meaning the Chinese model will influence product design worldwide.
If the draft is adopted in its current form, China will become the first country with a comprehensive regulatory framework for human-like AI, setting a benchmark that others will either have to follow or ignore at their own risk.
In the context of broader geopolitical competition in artificial intelligence, this move highlights a key difference in approach.
While the West often emphasises freedom of innovation and individual rights through judicial mechanisms, China opts for centralised risk management to ensure technology supports social stability and economic growth.
The result could be that China's human-like AI becomes safer and more socially acceptable domestically, attracting investment and talent precisely because of its predictability.
For global observers, the lesson is clear: human-like AI is no longer a niche field. It is changing how people build relationships, cope with loneliness, and seek support.
Whoever first establishes a reliable regulatory model will set the pace for the development of this technology over the next decade.
China has just taken a significant step in that direction, and the world should watch closely to see what follows the January consultations.