
Families sue OpenAI over alleged suicides, psychological harm linked to ChatGPT: Report

By IANS | Updated: November 8, 2025 10:05 IST


New Delhi, Nov 8: ChatGPT maker OpenAI is facing more lawsuits from families who claim that the AI company’s GPT-4o model was released prematurely, which allegedly contributed to suicides and psychological harm, according to reports.

US-based OpenAI released the GPT-4o model in May 2024, when it became the default model for all users.

In August, OpenAI launched GPT-5 as the successor to GPT-4o, but “these lawsuits particularly concern the 4o model, which had known issues with being overly sycophantic or excessively agreeable, even when users expressed harmful intentions,” according to a report in TechCrunch.

The report said that while four of the lawsuits address ChatGPT’s alleged role in family members’ suicides, three claim that ChatGPT reinforced harmful delusions that in some cases resulted in inpatient psychiatric care.

According to the report, the lawsuits also claim that OpenAI rushed safety testing to beat Google’s Gemini to market.

OpenAI had yet to comment on the report.

Recent legal filings allege that ChatGPT can encourage suicidal people to act on their plans and inspire dangerous delusions.

“OpenAI recently released data stating that over one million people talk to ChatGPT about suicide weekly,” the report mentioned.

In a recent blog post, OpenAI said it had worked with more than 170 mental health experts to help ChatGPT more reliably recognise signs of distress, respond with care, and guide people toward real-world support, reducing responses that fall short of its desired behaviour by 65-80 per cent.

“We believe ChatGPT can provide a supportive space for people to process what they’re feeling, and guide them to reach out to friends, family, or a mental health professional when appropriate,” it noted.

“Going forward, in addition to our longstanding baseline safety metrics for suicide and self-harm, we are adding emotional reliance and non-suicidal mental health emergencies to our standard set of baseline safety testing for future model releases,” OpenAI added.

Disclaimer: This post has been auto-published from an agency feed without any modifications to the text and has not been reviewed by an editor.
