China Proposes New Rules to Regulate Human-Like AI Interactions

BEIJING: China’s top internet regulator on Saturday released a new set of draft rules to strengthen oversight of AI systems that simulate human personality and allow people to interact with them in an emotionally responsive manner.

The proposal, which has been released for public comment, is the latest effort by Beijing to shape the development of AI systems for personal use while addressing rising concerns about safety, ethics, and psychological impact.


The effort is consistent with the Chinese government's broader policy of regulating emerging technologies early, especially those that could directly affect public behavior, information consumption, and mental health.

Focus on AI That Mimics Human Personality and Emotion

The draft rules focus on AI products and services offered to the general public in China that are designed to simulate human behavior. Specifically, this covers systems that replicate human thought patterns, personality traits, conversational styles, and emotional responsiveness.

The rules would extend to AI products and services that interact with users through text, images, audio, video, or other digital media and that engage users' emotions or offer companionship, trust, or personalized interaction.

Such products have become increasingly popular, ranging from chatbots and virtual assistants to AI companions and digital personas embedded in entertainment and social applications.

Read More: Poll Reveals China Trusts AI Far More Than Western Countries

Addressing Psychological Risks and Emotional Dependence

Regulators have expressed particular concern about the psychological effects of emotionally interactive AI. They argue that prolonged use can influence users' emotions in unpredictable ways and raises the risk of emotional dependence or addiction.

Under the proposed framework, AI service providers would be required to:

  • Monitor user behavior for signs of excessive use or emotional dependency

  • Assess user emotional states during interactions

  • Identify patterns that suggest addiction or psychological distress

If users exhibit extreme emotional responses or signs of dependency, providers would be expected to intervene by issuing warnings, limiting functionality, or adjusting interaction mechanisms to reduce harm.

Mandatory User Warnings and Usage Safeguards

The draft rules also call for clear user-facing safeguards. Service providers would be required to remind users about the risks of overuse and encourage balanced engagement with AI-powered services.

These warnings would be part of a broader effort to ensure users understand that they are interacting with artificial systems—not real humans—and to prevent AI products from misleading users into forming unrealistic emotional expectations.

Read More: 5 AI Ethics Myths That Could Put Us All at Risk

Lifecycle Accountability for AI Providers

In a significant expansion of responsibility, the proposal places safety obligations on AI providers across the entire product lifecycle—from development and training to deployment and ongoing operation.

Companies offering these services would need to establish formal systems for:

  • Algorithm review and risk assessment

  • Data security management

  • Personal information and privacy protection

  • Continuous monitoring of system behavior and outcomes

This approach aligns with China’s recent regulatory trend of assigning direct accountability to technology providers rather than relying solely on post-hoc enforcement.

Content and Conduct Red Lines Remain Firm

The draft rules reiterate strict content boundaries for AI-generated output. Services must not produce content that:

  • Endangers national security

  • Spreads false information or rumors

  • Promotes violence, extremism, or obscenity

  • Violates public morals or social order

These restrictions mirror existing requirements under China's broader internet and AI governance framework, reinforcing the government's insistence that generative AI conform to political, cultural, and legal standards.

Part of a Broader AI Governance Push

The rules, which are open to public input for the next 31 days, are the latest in a series of AI-targeted regulations from Chinese officials over the last two years. Beijing has issued guidelines for generative AI, algorithmic recommendations, deepfakes, and data protection.

Together, the moves signal that China wants AI innovation to advance, but within defined ethical, social, and security constraints. Yonkler, who was not involved in the proposal, says that setting limits on when emotion-detecting technology is used is probably a good move:

“There are a lot of concerns about people using technology to do things with your emotions you wouldn’t want them to.” As emotionally interactive AI continues to improve and proliferate, regulators appear eager to head off unintended consequences before they become widespread.

Read More: AI Ethics, Security is the Key Element

Public Consultation Period Underway

The draft rules have been released for public comment, giving companies, researchers, and other stakeholders an opportunity to submit feedback before the measures are finalized. Revisions made during the consultation process could further shape how AI products with human-like interaction are developed and deployed in China’s massive consumer market.


Written by Hajra Naz
