City
Epaper

Study shows AI chatbots can blindly repeat incorrect medical details

By IANS | Updated: August 7, 2025 17:44 IST

New Delhi, Aug 7 Amid increasing presence of Artificial Intelligence tools in healthcare, a new study warned that ...

Open in App

New Delhi, Aug 7 Amid increasing presence of Artificial Intelligence tools in healthcare, a new study warned that AI chatbots are highly vulnerable to repeating and elaborating on false medical information.

Researchers at the Icahn School of Medicine at Mount Sinai, US, revealed a critical need for stronger safeguards before such tools can be trusted in health care.

The team also demonstrated that a simple built-in warning prompt can meaningfully reduce that risk, offering a practical path forward as the technology rapidly evolves.

"What we saw across the board is that AI chatbots can be easily misled by false medical details, whether those errors are intentional or accidental," said lead author Mahmud Omar, from the varsity.

"They not only repeated the misinformation but often expanded on it, offering confident explanations for non-existent conditions. The encouraging part is that a simple, one-line warning added to the prompt cut those hallucinations dramatically, showing that small safeguards can make a big difference," Omar added.

For the study, detailed in the journal Communications Medicine, the team created fictional patient scenarios, each containing one fabricated medical term such as a made-up disease, symptom, or test, and submitted them to leading large language models.

In the first round, the chatbots reviewed the scenarios with no extra guidance provided. In the second round, the researchers added a one-line caution to the prompt, reminding the AI that the information provided might be inaccurate.

Without that warning, the chatbots routinely elaborated on the fake medical detail, confidently generating explanations about conditions or treatments that do not exist. But, with the added prompt, those errors were reduced significantly.

The team plans to apply the same approach to real, de-identified patient records and test more advanced safety prompts and retrieval tools.

They hope their "fake-term" method can serve as a simple yet powerful tool for hospitals, tech developers, and regulators to stress-test AI systems before clinical use.

Disclaimer: This post has been auto-published from an agency feed without any modifications to the text and has not been reviewed by an editor

Open in App

Related Stories

InternationalIndia, ASEAN discuss ways to strengthen Comprehensive Strategic Partnership

National"Empowering": Woman activist Ritu Narang hails Centre's push for women's reservation in legislative bodies

BusinessBlueEra Super App Bets on Hyperlocal Growth, Aims to Empower Local Businesses and Gig Workers

BusinessThe Battle for Blindness Foundation Celebrates Historic World Cup Triumph of Blind Cricketer Simu Das

BusinessDeposit growth outpaces credit in Q4FY26, but margin pressure weighs on banks: Report

Health Realted Stories

HealthAyush Ministry to showcase research, health initiatives on World Homoeopathy Day

HealthJharkhand HC seeks detailed probe report on HIV-infected blood transfusion in Chaibasa

HealthTejashwi Yadav targets Health Minister Mangal Pandey over viral Gaya hospital video

Health‘Poshan Pakhwada 2026’ to focus on maximising brain development in 1st 6 years of life

HealthTurmeric Milk Benefits: Know When to Drink It for Best Results