Hackers create ChatGPT-driven Telegram bots that can write malware

By IANS | Published: February 12, 2023, 11:24 AM IST | Updated: 11:35 AM IST

New Delhi, Feb 12 Cyber-criminals are using OpenAI's Microsoft-backed ChatGPT to create Telegram bots that can write malware and steal your data, new research has revealed.

Currently, if you ask ChatGPT to write a phishing email impersonating a bank or to create malware, it will refuse to do so.

However, hackers are working their way around ChatGPT's restrictions, and there is active chatter on underground forums disclosing how to use the OpenAI API to bypass ChatGPT's barriers and limitations.

"This is done mostly by creating Telegram bots that use the API. These bots are advertised in hacking forums to increase their exposure," according to CheckPoint Research (CPR).

The cyber-security company had earlier discovered that cybercriminals were using ChatGPT to improve the code of a basic infostealer malware from 2019.

There have been many discussions and research on how cybercriminals are leveraging the OpenAI platform, specifically ChatGPT, to generate malicious content such as phishing emails and malware.

The current version of OpenAI's API is used by external applications and has very few anti-abuse measures in place.

As a result, it allows malicious content creation, such as phishing emails and malware code, without the limitations or barriers that ChatGPT has set on its user interface.

In an underground forum, CPR found a cybercriminal advertising a newly created service: a Telegram bot that uses the OpenAI API without any limitations or restrictions.

"A cybercriminal created a basic script that uses OpenAI API to bypass anti-abuse restrictions," the researchers noted.

The cyber-security company has also witnessed attempts by Russian cybercriminals to bypass OpenAI's restrictions, in order to use ChatGPT for malicious purposes.

Cybercriminals are growing increasingly interested in ChatGPT, because the AI technology behind it can make their operations more cost-efficient.

Disclaimer: This post has been auto-published from an agency feed without any modifications to the text and has not been reviewed by an editor
