Sam Altman Says Less Than 1% of User-AI Relationships Are “Unhealthy,” But Concerns Remain

Sam Altman, CEO of OpenAI, recently claimed that less than 1% of users form an “unhealthy relationship” with AI tools like ChatGPT. But with billions of people engaging daily with generative AI for therapy, productivity, coding, and companionship, even that small percentage translates into millions of individuals—and it raises big questions about AI’s role in mental health.

AI and Mental Health: A Growing Digital Dependency

Generative AI is no longer just a tool for productivity—it’s becoming a trusted companion. From ChatGPT to Google Gemini, Meta’s LLaMA, and Anthropic’s Claude, millions of users now rely on AI chatbots for stress relief, therapy-like conversations, and emotional support.

The appeal is obvious:

  • 24/7 availability—unlike human therapists, AI doesn’t sleep.

  • Low or no cost—free access compared to expensive therapy sessions.

  • Non-judgmental listening—AI provides a safe space for people hesitant to share with friends or family.

This rise is especially visible among Asian and Indian developer communities: late-night DevOps teams chat with AI assistants to debug code, and some also use them to vent about stress at work.

This blend of AI as both a coding aid and an emotional outlet is growing fast, and it is becoming a real trend in global tech culture.

The “1% or Less” Remark by Sam Altman

At a media dinner in San Francisco (August 14, 2025), Sam Altman suggested that “way under 1%” of AI users show signs of unhealthy attachment to AI.

At first glance, that sounds reassuring. But let’s do the math:

  • ChatGPT alone has ~700 million weekly active users.

  • 1% of that is 7 million people.

  • Even if it’s just 0.5%, that’s still 3.5 million individuals worldwide forming problematic bonds with AI.

If we consider the wider AI ecosystem—over 2 billion global users across platforms—the number of “unhealthy” relationships could climb into the tens of millions.
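To make the scale concrete, here is a minimal back-of-envelope sketch in Python. The user counts are the figures cited above; the prevalence rates are illustrative, since Altman gave no exact number:

```python
# Rough scale of "unhealthy" AI relationships at different prevalence
# rates. User counts are the figures cited in the article; the rates
# themselves are illustrative assumptions.
chatgpt_weekly_users = 700_000_000   # ~700M weekly active users
global_ai_users = 2_000_000_000      # ~2B users across all platforms

for rate in (0.01, 0.005, 0.001):    # 1%, 0.5%, 0.1%
    print(f"{rate:.1%} of ChatGPT weekly users: {chatgpt_weekly_users * rate:>13,.0f}")
    print(f"{rate:.1%} of the wider ecosystem:  {global_ai_users * rate:>13,.0f}")
```

Even at the most optimistic rate, the affected population is measured in millions, not thousands.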

So, is this a small issue? Or is it a growing mental health risk hiding under a deceptively small percentage?

What Does an “Unhealthy AI Relationship” Look Like?

Experts in digital well-being and AI ethics highlight six red flags:

  1. Overdependence on AI—refusing to make decisions without chatbot validation.

  2. Social substitution—preferring AI interactions over human connections.

  3. Emotional over-attachment—treating AI like a best friend or partner.

  4. Compulsive usage—hours-long conversations disrupting sleep and work.

  5. Validation-seeking—relying on AI for self-worth.

  6. Delusional identification—believing AI reciprocates feelings.

These behaviors resemble patterns seen in addiction and social media overuse, which can lead to anxiety, depression, and social withdrawal.
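There is no clinical standard behind these six signals, but a hedged sketch can show how a digital well-being tool might encode them as a simple checklist. Every field name and threshold below is a hypothetical assumption for illustration, not any real product's API:

```python
from dataclasses import dataclass

@dataclass
class UsageSignals:
    decisions_deferred_to_ai: bool   # 1. overdependence
    prefers_ai_to_people: bool       # 2. social substitution
    describes_ai_as_friend: bool     # 3. emotional over-attachment
    daily_hours: float               # 4. compulsive usage
    seeks_self_worth_from_ai: bool   # 5. validation-seeking
    believes_ai_reciprocates: bool   # 6. delusional identification

def red_flag_count(s: UsageSignals) -> int:
    """Count how many of the six red flags a usage pattern trips."""
    return sum([
        s.decisions_deferred_to_ai,
        s.prefers_ai_to_people,
        s.describes_ai_as_friend,
        s.daily_hours > 4,           # illustrative cutoff, not clinical
        s.seeks_self_worth_from_ai,
        s.believes_ai_reciprocates,
    ])
```

A real screen would need clinically validated signals; the point here is only that these red flags are observable behaviors, not vague feelings.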

The Spectrum of AI Relationships

Instead of a simple “healthy vs. unhealthy” divide, experts suggest a spectrum with zones:

  • Green Zone – Balanced, productive AI use.

  • Yellow Zone – Occasional overuse or mild attachment.

  • Orange Zone – Consistent reliance, risk of detachment from reality.

  • Red Zone – Persistent emotional or psychological dependence.

The challenge lies in detecting early drift from green into yellow or orange before it escalates into the red zone.
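Continuing the hypothetical sketch above, drift detection could be as simple as mapping a red-flag count onto the zones and comparing week over week. The cutoffs are illustrative assumptions, not validated thresholds:

```python
def zone(red_flags: int) -> str:
    # Map a red-flag count (0-6) onto the spectrum; the cutoffs are
    # illustrative assumptions, not validated thresholds.
    if red_flags == 0:
        return "Green"
    if red_flags <= 2:
        return "Yellow"
    if red_flags <= 4:
        return "Orange"
    return "Red"

# Drift check: flag a user whose zone worsens from one week to the next.
last_week, this_week = zone(1), zone(3)
print(last_week, "->", this_week)   # Yellow -> Orange
```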

The Bigger Picture: Why This Matters

Generative AI will only become smarter, more human-like, and emotionally fluent. This raises big questions:

  • Will AI-driven mental health therapy apps replace human therapists for some users?

  • Could nation-states weaponize AI relationships for influence campaigns?

  • Should governments or tech companies introduce digital well-being regulations for AI usage, similar to those for social media screen time alerts?

  • And importantly, how will Asian and Indian developers, who are building much of this AI infrastructure, ensure ethical safeguards are baked into the systems?

Conclusion: A Digital Balancing Act

Sam Altman says fewer than 1% of user-AI relationships are “unhealthy.” That may sound reassuring, but the absolute numbers tell a different story: millions of people are already at risk of AI overuse, and as AI becomes more embedded in work, therapy, and daily life, those risks will only grow.

The future of AI and mental health needs balance. We must embrace its value in therapy, coding, and productivity. At the same time, we need ethical guardrails to stop people from falling into digital obsession.

As Carl Jung once said,

“The meeting of two personalities is like the contact of two chemical substances: if there is any reaction, both are transformed.”

In our case, it’s not just human personalities meeting AI—it’s humanity itself being transformed.

The real question: Will AI make us more human, or less?

Written by Hajra Naz
