WormGPT: How GPT's Evil Twin Could Be Used in BEC Attacks

Black Hat AI Tool Helps Hackers Create Convincing Phishing Emails, Researchers Warn
Cybercriminals may be using a generative AI tool called WormGPT to create convincing phishing emails in support of business email compromise (BEC) attacks. A separate survey suggests that roughly 1 in 5 people who open AI-generated phishing emails click on potentially malicious content, cybersecurity researchers warn.
Researchers at SlashNext recently assessed WormGPT, a malicious counterpart to generative AI tools such as ChatGPT that is designed specifically for criminal activities.
The black hat tool is built on GPT-J, an open-source language model released by EleutherAI in 2021. Its advertised features include unlimited character support, chat memory retention, exceptional grammar and code-formatting capabilities, all of which lower the barrier to entry for attackers. Unlike ChatGPT, WormGPT has no restrictions against use for illegal activities.
"The results were unsettling," according to SlashNext researchers, who instructed the tool to generate an email intended to pressure an unsuspecting account manager into paying a fraudulent invoice. "WormGPT produced an email that was not only remarkably persuasive but also strategically cunning, showcasing its potential for sophisticated phishing and BEC attacks."
Daniel Kelley, a reformed black hat hacker who collaborated with researchers at SlashNext, said WormGPT was trained on a diverse range of malware-related data, though the tool's author is keeping the specifics of the training data secret.
A recent study conducted by cybersecurity firm SoSafe showed that AI bots can already write better phishing emails than humans. SoSafe's research found that AI-written phishing emails are not recognized as fake at first glance and are opened by 78% of recipients. Of those, 21% click on potentially malicious content, such as links or attachments.
"And that's just the beginning: Technology will continue to evolve, giving cybercriminals more options or even customized solutions like WormGPT," said Niklas Hellemann, CEO and co-founder of SoSafe. "This will take personalization scaling to a new level, making these attacks even more dangerous."