How attackers use AI
AI has allowed attackers to radically reduce the cost per attack while increasing their chances of success.
Scalability and democratization
Historically, a successful business email compromise (BEC) attack required significant time and effort to research targets and draft convincing messages. Today, AI can scour social media, corporate websites, and public data to generate thousands of hyper-personalized lures in the time it once took to draft one. Typical red flags, such as poor grammar and awkward phrasing, are easily eliminated.
Turnkey attacks
To get around the safeguards of mainstream AI models, cybercriminals are experimenting with modified or “jailbroken” versions, as well as tools marketed on underground forums, such as WormGPT and FraudGPT. These services are often positioned as plug-and-play solutions for generating phishing lures, malware scripts, and other malicious content.
In practice, the effectiveness of many of these tools varies. However, as seen with the rise of ransomware-as-a-service (RaaS), the broader shift toward accessible, service-based attack models is already lowering the barrier to entry, putting capabilities once reserved for skilled operators in the hands of low-skill criminals.
Crafting convincing phishing messages
Attackers can use GenAI to match the style of a targeted executive, using social media or compromised emails as examples. LLMs can also tailor messages to broader targets while mimicking specific personas—for example, an email targeting an accountant might use formal, urgent financial terminology. Polymorphic phishing uses AI to generate variations on the same phishing lure to bypass email filters.
Increasingly, multimodal AI is being used for deepfake audio and video. In a high-profile incident targeting multinational engineering firm Arup, deepfake technology was used to impersonate multiple executives during a video call, leading to a $25.6 million fraudulent transfer.
Automating reconnaissance
With AI, adversaries can compress one of the most time-intensive phases of a cyberattack: reconnaissance. Attackers were already using automation to scan internet-facing systems (VPNs, email servers) for open ports and other vulnerabilities; AI adds the ability to analyze and prioritize large volumes of scan data, surfacing the most promising targets.
AI agents can aggregate data from LinkedIn, news reports, and social media to map out a company’s hierarchy and identify the most likely “human vulnerabilities.” These agents can also use public information to identify relationships between vendors, subsidiaries, and employees that can be exploited in supply chain attacks.