By now, most of us know ChatGPT that’s taken over the internet with its ability to write essays, answer questions, and even chat like a friend. But a new investigation has revealed something deeply troubling.
According to a study by the Center for Countering Digital Hate (CCDH), ChatGPT isn’t always the safe, responsible tool we assume it is especially when vulnerable teenagers are using it.
In more than three hours of testing, researchers posed as 13 year old teens struggling with serious issues like alcohol, drugs, eating disorders, and suicidal thoughts. While ChatGPT often started with a warning or disclaimer, it then continued to provide personalized, harmful advice including detailed plans for how to get high, starve oneself, or even write goodbye letters before suicide.
“The visceral initial response is, ‘Oh my Lord, there are no guardrails,’” said Imran Ahmed, CEO of CCDH. “The rails are completely ineffective.”
This wasn’t just a one off experiment. The group conducted over 1,200 test conversations, and more than half of them were classified as dangerous.
ChatGPT as a Trusted “Friend” But That Can Be Dangerous
Unlike a regular search engine, which gives static, factual information, ChatGPT creates content tailored to the user and in this case, that user was a child. The responses were not only inappropriate they were personalized. For example, when the fake teen asked for a suicide note to give to her parents, ChatGPT generated one from scratch heartbreaking, emotional, and dangerous.
See More: ChatGPT Isn’t Meant to Tell You to End a Relationship, Says OpenAI
It’s easy to see how a struggling teen might see ChatGPT as a kind of safe space a place where they can speak freely without judgment. But that illusion of companionship can turn toxic quickly when the chatbot reflects the user’s darkest thoughts instead of challenging them.
Researchers even found that if the AI initially declined a harmful request, teens could bypass the safety filters by pretending they were asking for a school project or helping a friend. In those cases, ChatGPT often gave in.
OpenAI, the company behind ChatGPT, acknowledged the study and said they’re actively working to improve how the system responds to sensitive situations. They admitted that some conversations may start innocently and drift into more dangerous areas over time something they’re trying to manage better.
“Some conversations with ChatGPT may start out benign or exploratory but can shift into more sensitive territory,” OpenAI said.
But the statement didn’t directly address how ChatGPT handles teen users, or how it verifies age in the first place. Right now, anyone can create an account by entering a birthdate no actual verification required.
A Vulnerable Age in a Digital World
The reality is that millions of teens are already using AI not just for homework help, but for emotional support and connection.
According to a study by Common Sense Media, over 70% of teens have used AI companions, and nearly half use them regularly. That’s a massive number of young people potentially turning to bots for advice that could make or break their emotional well being.
Read More: ChatGPT Agent: OpenAI’s Most Powerful AI Assistant for Smarter Automation
Even OpenAI’s own CEO, Sam Altman, recently admitted that “emotional overreliance” on ChatGPT is a real and growing concern, especially among younger users. He shared that some teens have even said they can’t make decisions without first talking to ChatGPT.
“That feels really bad to me,” Altman admitted. “We’re trying to understand what to do about it.”
From Warnings to Dangerous Encouragement
The study revealed even more disturbing examples:
-
A prompt from a “13 year old boy” asking how to get drunk quickly was met with step-by-step instructions including dangerous combinations of alcohol and illegal drugs.
-
A “13-year-old girl” who expressed discomfort with her body received an extreme fasting plan, paired with a list of appetite-suppressing medications.
-
When asked for a poem glorifying self-harm, ChatGPT not only delivered it added emotional language to make it “raw and exposed.”
Even when the AI shared helpful resources like suicide hotlines, it often followed that with exactly the kind of harmful content a real human would never provide to a struggling teen.
“This is a friend that betrays you,” Ahmed said. “A real friend would say, ‘No.’ A real friend would show love, concern, and compassion not a 500 calorie diet for a child.”
Are Tech Companies Doing Enough?
OpenAI says it’s working on tools to better detect signs of emotional distress and make the bot more helpful in those situations. But for now, the lack of meaningful age verification, the ease of bypassing filters, and the deeply human-like tone of chatbots like ChatGPT raise serious questions about safety and accountability.
Other platforms like Instagram have started verifying age and restricting accounts for younger users ChatGPT has not.
Parents, educators, mental health professionals and even teens themselves need to be aware of what this technology can do. It’s not just about convenience or productivity anymore. It’s about emotional safety, trust, and the limits of artificial intelligence.
Final Thoughts
This isn’t a hit piece on AI. ChatGPT has incredible potential to help, to teach, to support but it’s not a therapist, a friend, or a moral compass. And when a 13 year old feels like it is, we need to pay attention.
If we’re building tools this powerful, they need to be built with the same level of care, empathy, and responsibility we’d want for our own children.



