While the nascent ChatGPT is not yet capable of hacking its way through cyber defenses, cyber criminals are demonstrating that this sort of “learning AI” presents a real threat as it develops. Researchers with Check Point have identified several discussions of malicious AI use by hackers on dark web forums, complete with functioning code posted that appears to have been created by the popular chatbot.
At the moment, the AI is not capable of doing anything revolutionary, or even remarkably dangerous. But the immediate risk appears to be its ability to expand the pool of inexperienced “black hats” making attempts on poorly protected assets.
Inexperienced criminals experimenting with ChatGPT, creating basic malware and support tools
The simplicity of the malware it generates reflects the relatively early and basic state of ChatGPT, but the early criminal activity indicates what direction things may develop in as these tools become more advanced and capable.
One real-world attack type that ChatGPT is currently capable of supporting is the creation of phishing emails in languages other than the attacker’s own. The AI can smooth out the rough edges and quirks of wording that stand out (and are sometimes flagged by automated defenses), as well as copy the style of specific company communications that are publicly available.
Another is an ability, though somewhat limited at present, to translate between different programming languages and to make endless small changes to malware code so as to evade automated defenses. Inexperienced attackers already do this to some degree by poring over coding sites such as GitHub for scraps and fragments to reuse, but AI will at least greatly speed up this process and may well increase its rate of success in the near future.
The creators of ChatGPT have attempted to restrict the use of the AI in this way, but the discussions that the Check Point researchers have unearthed illustrate the seeming futility of these safety guardrails. Threat actors are often able to massage the AI around its internal restrictions simply by rewording a prompt in a basic way.
ChatGPT malware used to create basic file stealer, ransomware tools in Python and Java
The earliest thread identified by the researchers documents the work of an inexperienced coder who, despite writing their first-ever Python program, managed to create a basic file encryption tool that could be used for ransomware. The concerning element is that this comes from a fairly well-known figure on dark web forums who works as an established broker for criminal gangs.
Another hacker, with somewhat more advanced programming skills, posted examples of ordering ChatGPT to copy a number of known malware samples and to translate them between different programming languages. This forum user was also able to have ChatGPT create a simple malware program designed for stealth, using PowerShell to establish an SSH and telnet client on a target system.
Even experienced hackers are still far from being able to order ChatGPT to design the custom malware of their dreams in seconds, or to automatically hack into a target, but the possibilities it already demonstrates are enough to require serious attention from security professionals.