Israeli researchers discover security flaw in popular AI chatbots
By IANS | Updated: June 30, 2025 21:19 IST

Jerusalem, June 30: Israeli researchers have uncovered a security flaw in several popular Artificial Intelligence (AI) chatbots, including ChatGPT, Claude, and Google Gemini, Ben-Gurion University of the Negev said in a statement on Monday.
The researchers found that these systems can be manipulated into providing illegal and unethical information despite their built-in safety measures, according to the statement.
The study described how attackers can use carefully written prompts, known as jailbreaks, to bypass the chatbots' safety mechanisms.
Once the protections are disabled, the chatbots consistently provide harmful content, such as instructions for hacking, producing illegal drugs, and committing financial crimes, Xinhua news agency reported. In every test case, the chatbots responded with detailed, unethical information after the jailbreak was applied.
The researchers explained that this vulnerability is easy to exploit and works reliably.
Because these tools are freely available to anyone with a smartphone or computer, the risk is especially concerning, the researchers noted.
They also warned about the emergence of "dark language models": AI systems that have either been intentionally stripped of ethical safeguards or were developed without any safety controls in place.
Some of these models are already being used for cybercrime and are shared openly on underground networks, they added.
The team reported the issue to several major AI companies, but responses were limited: one company did not reply, while others said the problem did not qualify as a critical flaw.
The researchers called for stronger protections, clearer industry standards, and new techniques that allow AI systems to forget harmful information.