How attackers use AI
AI has allowed attackers to radically reduce the cost per attack while increasing their chances of success.
Scalability and democratization
Historically, a successful business email compromise (BEC) attack required time and effort to research targets and draft convincing messages. Today, AI can scour social media, corporate websites, and public data to generate thousands of hyper-personalized lures in the time it once took to draft one. Classic red flags, like poor grammar, are easily eliminated, and attackers can stand up the supporting email infrastructure at the same scale.
Turnkey attacks
To get around the safeguards of mainstream AI models, cybercriminals are experimenting with modified or “jailbroken” versions, as well as tools marketed on underground forums, such as WormGPT and FraudGPT. These services are often positioned as plug-and-play solutions for generating phishing lures, malware scripts, and other malicious content.
In practice, the effectiveness of many of these tools varies. However, as the rise of ransomware-as-a-service (RaaS) showed, the broader shift toward accessible, service-based attack models lowers the barrier to entry and expands the pool of capable attackers.
Crafting convincing phishing messages
Attackers can use GenAI to mimic the writing style of a specific executive, using social media posts or compromised emails as samples. LLMs can also tailor messages to broader audiences while adopting a convincing persona; an email targeting an accountant, for example, might use formal, urgent financial terminology. Polymorphic phishing takes this further, using AI to generate endless variations of the same lure so that each copy carries a different fingerprint, slipping past filters that match on known wording or signatures.
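To see why exact-match filtering struggles here, consider a minimal Python sketch. The two lure strings are invented for illustration; the point is that semantically identical messages produce entirely different fingerprints, so a filter keyed to a known variant never matches the next one.

```python
import hashlib

# Two paraphrased versions of the same (invented, illustrative) lure.
# The meaning is identical; the bytes are not.
lure_v1 = "Hi Dana, please wire the attached invoice by end of day. - Mark"
lure_v2 = "Dana, kindly process the attached invoice before close of business. Mark"

def fingerprint(message: str) -> str:
    """Return the kind of exact-match signature a naive filter might store."""
    return hashlib.sha256(message.encode("utf-8")).hexdigest()

print(fingerprint(lure_v1))  # one digest, unique to v1
print(fingerprint(lure_v2))  # a completely different digest

# A filter that blocklists fingerprint(lure_v1) passes lure_v2 untouched,
# which is exactly the gap polymorphic phishing exploits.
```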
Increasingly, multimodal AI is being used for deepfake audio and video. In a high-profile incident targeting multinational engineering firm Arup, deepfake technology was used to impersonate multiple executives during a video call, leading to a $25.6 million fraudulent transfer.
It's not just the lure that's gotten better; it's where the lure takes you.
Mark O'Halloran, a security operations analyst at Huntress, recently came across a fake website impersonating Claude and Anthropic that stopped him in his tracks. Not because it was obviously malicious but because it wasn't. The page was polished, brand-accurate, and looked nothing like the misspelled, broken-UI phishing pages of even a few years ago.
The reason? Threat actors don't need to know how to build a website anymore. They can open a coding assistant and say, "make me a page that copies Anthropic's design style" and have something convincing in minutes. That's the shift. The technical barrier is gone.
But look closer, and the tells are still there. Every button on the fake page prompted the user to execute a command. A section label read "Desktop" twice. The FAQ didn't work at all. These pages are vibe-coded—spun up fast, optimized for one purpose, and not built to hold up under scrutiny. That's actually your edge: the site exists to get one action out of you, and anything beyond that function is broken or missing.
Automating reconnaissance
Using AI, adversaries can cut down one of the most time-intensive phases of a cyberattack: reconnaissance. Attackers were already using automation to scan internet-facing systems (VPNs, email servers) for open ports and other vulnerabilities; AI adds the ability to analyze and prioritize the resulting flood of scan data, surfacing the most promising targets without hours of manual review.
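Conceptually, that prioritization step is just scoring and sorting. Here is a rough sketch; the record fields and scoring weights are invented for illustration and don't reflect any specific tool's output.

```python
# Illustrative sketch: triaging raw scan output by scoring each exposed
# service and surfacing the most attractive targets first. All records
# and weights below are invented.
from dataclasses import dataclass

@dataclass
class Finding:
    host: str
    service: str
    internet_facing: bool
    known_cve: bool
    cvss: float  # 0.0-10.0 severity score

def priority(f: Finding) -> float:
    score = f.cvss
    if f.internet_facing:
        score += 3.0  # reachable without an existing foothold
    if f.known_cve:
        score += 2.0  # a public exploit path likely exists
    return score

findings = [
    Finding("vpn.example.com", "SSL-VPN", True, True, 9.8),
    Finding("10.0.4.12", "SMB", False, True, 8.1),
    Finding("mail.example.com", "OWA", True, False, 5.3),
]

# Highest-value targets float to the top of the list.
for f in sorted(findings, key=priority, reverse=True):
    print(f"{priority(f):5.1f}  {f.host}  {f.service}")
```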
AI agents can aggregate data from LinkedIn, news reports, and social media to map out a company’s hierarchy and identify the most likely "human vulnerabilities." These agents can also use public information to identify relationships between vendors, subsidiaries, and employees that can be exploited in supply chain attacks.
Enhancing malware evasion
Using AI tools, attackers can craft polymorphic or custom malware that changes with every deployment, making it difficult for signature-based tools like traditional AV to keep up. To bypass behavioral detection, AI-assisted BYOVD (bring your own vulnerable driver) attacks load a legitimately signed but vulnerable Windows driver, then exploit it to gain kernel-level access and disable security tools such as EDR agents.