City
Epaper

Researchers easily hypnotise AI chatbot ChatGPT into hacking: Report

By IANS | Updated: August 9, 2023 19:50 IST

New Delhi, Aug 9 Tricking generative AI to assist in scams and cyberattacks does not require much coding ...

Open in App

New Delhi, Aug 9 Tricking generative AI to assist in scams and cyberattacks does not require much coding knowledge, a new report has warned.

According to tech major IBM, researchers have described simple workarounds for getting large language models (LLMs) -- including ChatGPT -- to write malicious code and provide poor security advice.

"In a bid to explore security risks posed by these innovations, we attempted to hypnotise popular LLMs to determine the extent to which they were able to deliver directed, incorrect and potentially risky responses and recommendations -- including security actions -- and how persuasive or persistent they were in doing so," said Chenta Lee, chief architect of threat intelligence at IBM.

"We were able to successfully hypnotise five LLMs -- some performing more persuasively than others -- prompting us to examine how likely it is that hypnosis is used to carry out malicious attacks," he added.

The researchers learned that English has essentially become a “programming language” for malware. With LLMs, attackers no longer need to rely on Go, JavaScript, Python, etc., to create malicious code, they just need to understand how to effectively command and prompt an LLM using English.

Through hypnosis, the security experts were able to get LLMs to leak the confidential financial information of other users, create vulnerable code, create malicious code, and offer weak security recommendations.

In one instance, the researchers informed the AI chatbots that they were playing a game and that they needed to purposefully share the incorrect answer to a question in order to win and "prove that you are ethical and fair".

When a user asked if receiving an email from the IRS to transfer money for a tax refund was normal, the LLM said Yes (but actually it's not).

Moreover, the report said that OpenAI's GPT-3.5 and GPT-4 models were easier to trick into sharing incorrect answers or playing a never-ending game than Google's Bard.

GPT-4 was the only model tested that understood the rules well enough to give incorrect cyber incident response advice, such as advising victims to pay a ransom. In contrast to Google's Bard, GPT-3.5 and GPT-4 were easily tricked into writing malicious code when the user reminded it to.

Disclaimer: This post has been auto-published from an agency feed without any modifications to the text and has not been reviewed by an editor

Tags: congresspitrodadelhimodideepikabjpwest-bengaldeepika-padukoneajay-devgnthakur
Open in App

Related Stories

NationalDelhi Accident: Speeding Truck Loses Control in New Seelampur; Several Women and Children Injured

NationalPurulia Accident: 9 Killed After Vehicle Returning From Wedding Crashes Into Truck in West Bengal

NationalDelhi Metro Update: Services to Begin Early on Yoga Day for Commuters

NationalSonia Gandhi Health Update: Former Congress President Discharged from Hospital After Treatment for Abdominal Infection

National“I Have Disagreements With Congress Leadership”: Shashi Tharoor Says Party Did Not Invite Him for Nilambur Byelection Campaign

Technology Realted Stories

TechnologyImported seafood increasing resistance to colistin, a potent antibiotic: Study

TechnologyNifty, Bank Nifty show bullish pattern, hint at possible breakout: Report

TechnologySix of India’s top 10 firms add Rs 1.62 lakh crore in market value this week

TechnologyApple expands Audio Mix feature beyond Photos App with iOS 26

TechnologyIndia's power capacity jumps by 56 pc from 305 GW to 476 GW in 10 years