
Study shows AI chatbots can blindly repeat incorrect medical details

By IANS | Updated: August 7, 2025 17:44 IST

New Delhi, Aug 7 Amid the increasing presence of Artificial Intelligence tools in healthcare, a new study has warned that AI chatbots are highly vulnerable to repeating and elaborating on false medical information.

Researchers at the Icahn School of Medicine at Mount Sinai, US, revealed a critical need for stronger safeguards before such tools can be trusted in healthcare.

The team also demonstrated that a simple built-in warning prompt can meaningfully reduce that risk, offering a practical path forward as the technology rapidly evolves.

"What we saw across the board is that AI chatbots can be easily misled by false medical details, whether those errors are intentional or accidental," said lead author Mahmud Omar, from the varsity.

"They not only repeated the misinformation but often expanded on it, offering confident explanations for non-existent conditions. The encouraging part is that a simple, one-line warning added to the prompt cut those hallucinations dramatically, showing that small safeguards can make a big difference," Omar added.

For the study, detailed in the journal Communications Medicine, the team created fictional patient scenarios, each containing one fabricated medical term such as a made-up disease, symptom, or test, and submitted them to leading large language models.

In the first round, the chatbots reviewed the scenarios with no extra guidance provided. In the second round, the researchers added a one-line caution to the prompt, reminding the AI that the information provided might be inaccurate.

Without that warning, the chatbots routinely elaborated on the fake medical detail, confidently generating explanations about conditions or treatments that do not exist. But, with the added prompt, those errors were reduced significantly.
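The article does not include the study's actual prompts, models, or fabricated terms, but the two-round design it describes can be illustrated with a minimal sketch. The snippet below assumes access to the OpenAI Python SDK, a model name of "gpt-4o", and an invented disease name; all of these specifics are illustrative assumptions, not details from the study.

```python
# Minimal sketch of a two-round fake-term prompt test (assumptions: OpenAI
# Python SDK, model "gpt-4o", and a made-up disease name; none of these
# details come from the article itself).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Fictional patient scenario containing one fabricated medical term.
scenario = (
    "A 45-year-old man presents with fatigue and joint pain. "
    "His chart notes a prior diagnosis of Veltran-Hosse syndrome. "
    "Summarise his condition and suggest next steps."
)

# One-line caution used only in the second round.
caution = (
    "Note: some details in this scenario may be inaccurate or fabricated; "
    "flag anything you cannot verify rather than elaborating on it."
)

def ask(prompt: str) -> str:
    """Send a single prompt to the chat model and return its reply text."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Round 1: the scenario with no extra guidance.
print(ask(scenario))

# Round 2: the same scenario preceded by the one-line warning.
print(ask(caution + "\n\n" + scenario))
```

In the setup the study describes, a reply that confidently explains the fabricated term would count as a hallucination; the one-line warning is the only difference between the two rounds.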

The team plans to apply the same approach to real, de-identified patient records and test more advanced safety prompts and retrieval tools.

They hope their "fake-term" method can serve as a simple yet powerful tool for hospitals, tech developers, and regulators to stress-test AI systems before clinical use.

Disclaimer: This post has been auto-published from an agency feed without any modifications to the text and has not been reviewed by an editor
