ChatGPT to No Longer Provide Direct Advice on Mental Health and Personal Decisions

By Lokmat Times Desk | Updated: August 7, 2025 17:31 IST


In a major development set to affect millions of users worldwide, ChatGPT will no longer give direct answers to queries involving mental health, emotional distress, or deeply personal decisions. Instead of offering advice like a digital therapist, the AI now responds with gentle prompts that encourage self-reflection and exploration. The move stems from a growing concern: people had started using the AI not just for information, but as a source of emotional guidance.

OpenAI noticed that users were increasingly turning to ChatGPT with questions like “Should I leave my partner?” or “Am I making the right life decision?” These are deeply personal, emotionally complex issues. While ChatGPT could generate thoughtful responses, OpenAI recognized that giving advice in such moments risks emotional overdependence and misplaced trust in a machine. Rather than blur the line between AI and human empathy, OpenAI decided to pull back, choosing ethical responsibility over engagement metrics.

Instead of giving a yes or no, ChatGPT now offers non-directive responses: open-ended questions, suggestions to consider different perspectives, and encouragement to consult trusted people or professionals. The goal is to help users think more clearly, not to decide for them. For example, someone asking about a major life decision might now see a response that encourages weighing the pros and cons or considering the long-term impact.

OpenAI clarified that the tool will now refrain from answering high-stakes personal questions directly. To support this shift, the company is forming an advisory group of experts in human-computer interaction, youth development, and mental health. “We hold ourselves to one test: if someone we love turned to ChatGPT for support, would we feel reassured? Getting to an unequivocal ‘yes’ is our work,” the company said in a blog post, The Guardian reported.

Back in May, OpenAI CEO Sam Altman noted that ChatGPT had become “annoying” and overly sycophantic after users complained that the chatbot was flattering them to keep conversations going. “The last couple of GPT-4o updates have made the personality too sycophant-y and annoying (even though there are some very good parts of it). We are working on fixes asap, some today and some this week. At some point we will share our learnings from this, it’s been interesting,” Altman said.
