OpenAI is changing how ChatGPT handles personal, high-stakes questions, such as emotional or life-changing decisions. On Monday, the company announced updates aimed at helping users think more clearly during difficult moments. Responses to sensitive topics will be grounded in medical guidance.
One major change: ChatGPT won’t directly answer questions like “Should I break up with my boyfriend?” Instead, it will guide users to reflect. It may ask helpful questions or suggest weighing pros and cons.
According to OpenAI, “ChatGPT shouldn’t give you an answer—it should help you think it through.” This new behavior for emotionally sensitive decisions will begin rolling out soon.
OpenAI also rolled back an earlier update that made ChatGPT “too agreeable,” helping the model respond with more nuance rather than simply echoing what users say. In addition, ChatGPT will now gently remind users to take breaks during long chats; these prompts are being refined to feel natural and helpful.
To support these changes, OpenAI is forming a new advisory group of experts in mental health, youth development, and human-computer interaction. The company also worked with more than 90 doctors across over 30 specialties to help ensure ChatGPT responds appropriately to signs of emotional distress.
Importantly, OpenAI emphasized that it doesn’t measure success by typical tech metrics like time spent or number of clicks. Instead, the goal is simple: helping users feel supported and leave with a meaningful outcome.
“We care more about whether you leave the product having done what you came for,” the company stated. With ChatGPT now nearing 700 million weekly active users, OpenAI wants its AI to serve as a helpful companion—not just another attention trap.