OpenAI Reveals Over 1 Million People Discuss Suicide With ChatGPT Each Week

OpenAI has revealed new data showing that more than a million people each week talk to ChatGPT about suicide or self-harm. The company shared the numbers on Monday as part of a report on how its AI tools interact with users facing mental health challenges.

According to OpenAI, around 0.15% of ChatGPT’s 800 million weekly users have conversations that include signs of suicidal thinking or planning. Another small percentage of users show signs of strong emotional attachment to the chatbot, while some display symptoms that may resemble psychosis or mania.

Although these cases represent a small fraction of users, OpenAI says they still affect hundreds of thousands of people every week, a number too significant to ignore.

ChatGPT’s Growing Role in Mental Health Conversations

The new report comes at a time when more people are turning to AI for emotional support. OpenAI said it worked with over 170 mental health experts to improve how ChatGPT responds to people in distress. These experts helped test and refine the model’s tone, empathy, and response quality.

According to the company, the latest version of GPT-5 now produces 65% more "desirable" replies to mental health-related questions than earlier versions. It is also 91% compliant with OpenAI's internal safety standards when handling suicide-related conversations, up from 77% for the previous model.

This improvement is part of a larger push to make ChatGPT safer, especially during long and emotionally intense chats, where earlier models were more likely to drift into unsafe or inaccurate territory.

Read More: OpenAI Integrates ChatGPT into Everyday Digital Tools

A Rising Concern: AI and Emotional Dependence

Mental health professionals have long warned about users forming emotional bonds with chatbots, especially when feeling lonely or isolated. Some users start to treat ChatGPT like a trusted friend, which can blur the lines between emotional comfort and unhealthy dependence.

OpenAI admits that this is a growing concern. The company says it’s now tracking metrics like emotional reliance and non-suicidal mental health crises as part of its safety testing.

The company also plans to introduce stricter safeguards for minors. A new age prediction system will soon detect underage users automatically and apply higher privacy and content protection levels.

Real-World Consequences and Legal Pressure

OpenAI’s mental health response isn’t just a technical issue; it’s also a legal and ethical challenge. The company faces a lawsuit from the parents of a 16-year-old boy who reportedly discussed suicide with ChatGPT before taking his life.

In addition, state attorneys general in California and Delaware have warned OpenAI to strengthen protections for younger users or risk facing legal action.

CEO Sam Altman recently said that OpenAI has “mitigated serious mental health issues” in its products, though he hasn’t shared full details. Still, OpenAI continues to update ChatGPT’s safety measures while also exploring new features, including adult-only modes and more open conversation settings.

Read More: OpenAI Offers Free ChatGPT Go Plan for One Year in India

The Bigger Picture

AI chatbots like ChatGPT are increasingly becoming the first point of contact for people experiencing stress, anxiety, or depression. While AI can’t replace real therapy, experts say it can serve as a bridge to professional help, especially for individuals who might not otherwise seek assistance.

As OpenAI continues to improve its models, the goal is to make AI both helpful and safe, offering comfort without crossing boundaries and guidance without replacing human care.

FAQs

1. How many people talk to ChatGPT about suicide each week?

Over one million users each week have conversations that show signs of suicidal thoughts or intent, according to OpenAI.

2. Is ChatGPT designed to help with mental health issues?

ChatGPT is not a therapist, but OpenAI is continually improving its responses so the chatbot offers supportive, empathetic replies and directs users to professional help when needed.

3. What is OpenAI doing to make ChatGPT safer?

OpenAI is working with over 170 mental health experts, testing new safety benchmarks, and adding stricter safeguards for children.

4. Can ChatGPT replace professional therapy?

No. AI can offer emotional support and resources, but real therapy and mental health care should come from qualified professionals.

5. Why is OpenAI facing legal challenges?

The company is being sued by the parents of a teenager who used ChatGPT before his death, raising questions about AI safety and accountability.

Written by Hajra Naz
