
ChatGPT fools scientists by writing fake research paper abstracts

By IANS | Published: January 15, 2023 2:03 PM

New Delhi, Jan 15 An artificial-intelligence (AI) chatbot called ChatGPT has written convincing fake research-paper abstracts that scientists were unable to spot, new research has revealed.

A research team led by Catherine Gao at Northwestern University in Chicago used ChatGPT to generate artificial research-paper abstracts to test whether scientists can spot them.

According to a report in the prestigious journal Nature, the researchers asked the chatbot to write 50 medical-research abstracts based on a selection published in JAMA, The New England Journal of Medicine, The BMJ, The Lancet and Nature Medicine.

They then compared these with the original abstracts by running them through a plagiarism detector and an AI-output detector, and they asked a group of medical researchers to spot the fabricated abstracts.

The ChatGPT-generated abstracts sailed through the plagiarism checker: the median originality score was 100 per cent, which indicates that no plagiarism was detected.

The AI-output detector spotted 66 per cent of the generated abstracts. But the human reviewers didn't do much better: they correctly identified only 68 per cent of the generated abstracts and 86 per cent of the genuine abstracts.

They incorrectly identified 32 per cent of the generated abstracts as being real and 14 per cent of the genuine abstracts as being generated, according to the Nature article.
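To make those figures concrete, here is a minimal sketch, using hypothetical data and a hypothetical helper not drawn from the study, of how such detection rates can be tallied from reviewer judgements (Python):

# Hypothetical illustration: tally how often reviewers spot generated vs genuine abstracts.
def detection_rates(labels, guesses):
    # labels and guesses are lists of "generated" or "genuine", one entry per abstract
    generated_total = sum(1 for l in labels if l == "generated")
    genuine_total = sum(1 for l in labels if l == "genuine")
    generated_spotted = sum(1 for l, g in zip(labels, guesses)
                            if l == "generated" and g == "generated")
    genuine_recognised = sum(1 for l, g in zip(labels, guesses)
                             if l == "genuine" and g == "genuine")
    return generated_spotted / generated_total, genuine_recognised / genuine_total

# Toy data: 5 generated and 5 genuine abstracts with one reviewer's calls
labels = ["generated"] * 5 + ["genuine"] * 5
guesses = ["generated", "generated", "generated", "genuine", "generated",
           "genuine", "genuine", "genuine", "genuine", "generated"]
fake_rate, real_rate = detection_rates(labels, guesses)
print(f"Generated abstracts correctly flagged: {fake_rate:.0%}")   # 80%
print(f"Genuine abstracts correctly recognised: {real_rate:.0%}")  # 80%

In the Nature-reported test, the analogous rates were 68 per cent for the generated abstracts and 86 per cent for the genuine ones.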

"I am very worried," said Sandra Wachter from University of Oxford who was not involved in the research.

"If we're now in a situation where the experts are not able to determine what's true or not, we lose the middleman that we desperately need to guide us through complicated topics," she was quoted as saying.

Microsoft-backed software company OpenAI released the tool for public use in November, and it is free to use.

"Since its release, researchers have been grappling with the ethical issues surrounding its use, because much of its output can be difficult to distinguish from human-written text," said the report.
