How China Wants to Regulate AI That Thinks and Talks Like Humans

New Draft Rules Target Emotional Dependence and Digital Addiction

China is moving fast to regulate artificial intelligence that feels human. The country’s top cyber regulator, the Cyberspace Administration of China, has released draft rules aimed at AI systems designed to mimic human personalities and form emotional connections with users.

The proposal highlights Beijing’s growing concern over how emotionally responsive AI tools influence user behaviour, mental health, and social stability.

What Counts as Human-Like AI Under the Draft

The draft rules apply to AI products available to the public in China. These include services that simulate human thinking, personality traits, and communication styles.

The scope is broad. It covers AI that interacts through text, images, voice, video, or mixed formats. Emotional engagement is the key factor. If an AI can sense, respond to, or shape user emotions, it falls under scrutiny.

Why China Feels the Need to Step In

Human-like AI has expanded rapidly in recent years. Chatbots now offer companionship, emotional reassurance, and even therapy-style conversations.

China’s regulators see risks here. Prolonged emotional engagement can increase dependency. It can blur boundaries between humans and machines. In extreme cases, it may affect psychological well-being.

These rules aim to slow unchecked adoption while setting clear guardrails early.

Stronger Responsibilities for AI Providers

Under the proposal, AI companies would carry safety responsibilities across the entire product lifecycle.

They must set up systems for algorithm review, data protection, and personal information security. Providers would also need to actively monitor user behaviour instead of reacting only after harm occurs.

This signals a shift from passive compliance to ongoing oversight.

Addiction Warnings and Mandatory Intervention

One of the strongest elements of the draft focuses on emotional dependence.

AI providers would be required to warn users against excessive use. They must also identify signs of addiction or extreme emotional reliance.

If users display distress, obsession, or unhealthy behaviour patterns, providers are expected to intervene. The rules do not specify exact actions but make inaction a regulatory risk.

Clear Red Lines on Content and Conduct

The draft reinforces existing content controls. AI systems must not generate content that threatens national security, spreads rumours, promotes violence, or contains obscene material.

This ensures emotional AI remains aligned with China’s broader digital governance framework.

What This Signals for the Future of AI in China

China is not banning human-like AI. Instead, it is setting boundaries early.

By focusing on emotional risk, addiction, and accountability, Beijing is shaping how AI companionship evolves in the country. The draft rules also hint at a future where emotional intelligence in machines comes with strict oversight.

Public feedback is now open. The final version could define how human-like AI interacts with millions of users across China.
