The Offensive: AI as a Force Multiplier for Adversaries
Adversaries have never been shy about abusing new technology. While ethical developers spent 2024 and 2025 building guardrails, threat actors spent that same time figuring out how to route around them. The democratization of powerful LLMs has lowered the barrier to entry for complex attacks and turned script kiddies into far more capable operators.
1. Malicious LLMs: From WormGPT to DarkBART-Style Models
The emergence of models like WormGPT and its successors marked a clear shift. These are effectively “jailbroken” LLMs trained on malware code, exploit writeups, and phishing templates—without the ethical constraints of mainstream tools.
Instead of simply helping an attacker write a generic phishing email, malicious LLMs now generate hyper‑personalized, context‑aware lures tuned to a target’s role, region, and internal jargon. They strip out classic red flags (poor grammar, odd phrasing), making it far easier for these messages to bypass traditional secure email gateways (SEGs) and trick even savvy users.
2. AI-Amplified Social Engineering and Deepfakes
We’ve entered the era of technology‑enhanced social engineering. The now‑well‑known Hong Kong “deepfake CFO” case, in which an employee was tricked into authorizing a large wire transfer during a video call populated by AI‑generated “colleagues,” is no longer an outlier. It’s a preview.
Today’s social engineering blends:
Real-time voice cloning to impersonate executives or vendors on calls.
Video synthesis and face swapping to fake presence in meetings.
LLM‑written scripts and emails that sustain long‑running scams with consistent tone and believable detail.
These multi‑modal attacks directly weaponize human trust. For a deeper dive into how deepfakes and GenAI are changing social engineering tradecraft, and how verification processes are being inverted, see our Social Engineering Guide and our Tradecraft Tuesday recap on AI: Friend or Faux in Cybersecurity?.
3. Polymorphic Malware and Automated Vulnerability Research
Attackers are also using GenAI to automate some of the most labor‑intensive parts of the kill chain.
Polymorphic malware:
As we’ve covered in our guide to polymorphic viruses, polymorphic malware mutates its code or appearance on each execution while keeping its malicious behavior intact. GenAI can now dynamically:
Rewrite code structure and encryption routines
Rotate keys and obfuscation strategies
Generate many slightly different variants on demand
This makes simple, signature‑based defenses effectively useless. Detection has to pivot to behavior, telemetry, and human‑guided analysis rather than static hashes.
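A minimal sketch of why static signatures break down (illustrative data only, no real malware): two variants carry the same logical payload, but one wraps it in a trivial XOR layer as a stand‑in for the rotated encryption and obfuscation routines described above. Their hashes diverge even though their effective behavior is identical.

```python
import hashlib

# Two functionally identical "payloads" (hypothetical, benign data):
# variant_b is variant_a passed through a toy XOR obfuscation layer,
# standing in for per-build rewritten encryption routines.
variant_a = b"hello"
variant_b = bytes(c ^ 0x2A for c in b"hello")

def deobfuscate(blob: bytes) -> bytes:
    """Undo the toy XOR layer (XOR with the same key is its own inverse)."""
    return bytes(c ^ 0x2A for c in blob)

# Static, signature-based view: the two variants look unrelated.
sig_a = hashlib.sha256(variant_a).hexdigest()
sig_b = hashlib.sha256(variant_b).hexdigest()
print(sig_a == sig_b)  # False: every mutation yields a fresh hash

# Behavioral view: after unpacking, the effective payload is the same.
print(deobfuscate(variant_b) == variant_a)  # True
```

Even this one‑byte‑key toy defeats hash matching; a model generating structurally distinct variants on demand makes the signature treadmill unwinnable, which is why detection shifts to runtime behavior.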
Automated vulnerability research (“vibe coding” for exploits):
Adversaries are feeding large proprietary codebases into LLMs to identify potential weaknesses at a scale humans can’t match. In some cases, models can propose exploit patterns or proof‑of‑concepts directly, dramatically shortening the time from bug discovery to weaponization. That’s “vibe coding” applied to offensive security.
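To give a feel for the scale argument, here is a deliberately simple sketch of automated weakness triage over source text. The regex rules are a crude stand‑in for what an LLM pass would do far more flexibly; the pattern labels and sample input are invented for illustration.

```python
import re

# Hypothetical triage rules: each maps a human-readable finding label
# to a pattern for a call site that often warrants a closer look.
RISKY_PATTERNS = {
    "eval() on dynamic input": re.compile(r"\beval\s*\("),
    "os.system shell call":    re.compile(r"\bos\.system\s*\("),
    "pickle deserialization":  re.compile(r"\bpickle\.loads?\s*\("),
}

def triage(source: str) -> list[tuple[int, str]]:
    """Return (line_number, finding) pairs for flagged call sites."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for label, pattern in RISKY_PATTERNS.items():
            if pattern.search(line):
                hits.append((lineno, label))
    return hits

sample = "import os\nos.system(user_cmd)\n"
print(triage(sample))  # [(2, 'os.system shell call')]
```

The point is throughput: a loop like this never tires across millions of lines, and swapping the regexes for a model that also proposes exploit patterns is exactly the shift from bug discovery to rapid weaponization described above.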