AI Tool WormGPT Enables Convincing Fake Emails For BEC Attacks


A generative AI tool, WormGPT, has emerged as a powerful weapon in the hands of cyber-criminals, specifically for launching business email compromise (BEC) attacks, according to new findings shared by security firm SlashNext.

“We’re now seeing an unsettling trend among cyber-criminals on forums, evident in discussion threads offering ‘jailbreaks’ for interfaces like ChatGPT,” wrote security expert Daniel Kelley, who worked with the SlashNext team on the research. 

From a technical standpoint, these ‘jailbreaks’ are specialized prompts that Kelley said are becoming increasingly common.

“They refer to carefully crafted inputs designed to manipulate interfaces like ChatGPT into generating output that might involve disclosing sensitive information, producing inappropriate content or even executing harmful code,” the security researcher said.

“The proliferation of such practices underscores the rising challenges in maintaining AI security in the face of determined cyber-criminals.”

Kelley also highlighted the advantages such tools offer for BEC attacks, such as impeccable grammar in emails that reduces suspicion. The lowered entry threshold enables cyber-criminals with limited skills to execute sophisticated attacks, democratizing the use of the technology.

Read more on AI-based attacks: ChatGPT Creates Polymorphic Malware

“Not only are the emails more convincing with correct grammar, but the ability to also create them almost effortlessly has lowered the barrier to entry for any would-be criminal,” commented Timothy Morris, chief security advisor at Tanium. “Not to mention the ability to increase the pool of potential victims since language is no longer an obstacle.”

To safeguard against AI-driven BEC attacks, experts believe organizations must implement strong preventative measures. 

These include developing extensive training programs to educate employees about AI-enhanced BEC threats, implementing stringent email verification processes and utilizing systems to flag potentially malicious emails.
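As a minimal sketch of what such a flagging system might check, the snippet below inspects two signals commonly used in email verification: a mismatch between the visible `From:` domain and the envelope `Return-Path:` domain, and authentication failures (SPF/DKIM/DMARC) recorded upstream in the `Authentication-Results` header. The function name and flag logic are illustrative assumptions, not a description of any specific product mentioned in this article; real mail gateways apply far more sophisticated analysis.

```python
from email import message_from_string
from email.utils import parseaddr


def flag_suspicious(raw_email: str) -> list[str]:
    """Return a list of reasons an email looks suspicious (empty = no flags)."""
    msg = message_from_string(raw_email)
    reasons = []

    # A visible From: domain that differs from the Return-Path (envelope)
    # domain is a common spoofing indicator in BEC attempts.
    _, from_addr = parseaddr(msg.get("From", ""))
    _, return_addr = parseaddr(msg.get("Return-Path", ""))
    from_domain = from_addr.rpartition("@")[2].lower()
    return_domain = return_addr.rpartition("@")[2].lower()
    if from_domain and return_domain and from_domain != return_domain:
        reasons.append(
            f"From/Return-Path domain mismatch: {from_domain} vs {return_domain}"
        )

    # Check authentication results (SPF/DKIM/DMARC) recorded by the
    # receiving server, if present.
    auth = msg.get("Authentication-Results", "").lower()
    for check in ("spf", "dkim", "dmarc"):
        if f"{check}=fail" in auth:
            reasons.append(f"{check.upper()} failure reported upstream")

    return reasons
```

Checks like these catch crude spoofing, but note the article's core point: AI-generated BEC emails may sail through grammar-based heuristics, which is why experts pair technical filtering with employee training.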

“Effective, existing security awareness and behavior change programs protect against AI-augmented phishing attacks,” explained Mika Aalto, co-founder and CEO at Hoxhunt.

“Within your holistic cybersecurity strategy, be sure to focus on your people and their email behavior because that is what our adversaries are doing with their new AI tools.”

The SlashNext findings come days after Kaspersky shed light on a new malicious campaign relying on email attacks to target cryptocurrency wallets.
