What is WormGPT?
WormGPT was a malicious generative AI tool launched in early 2023, designed to facilitate Business Email Compromise (BEC) and phishing attacks. By removing the ethical "guardrails" found in mainstream AI, it allowed threat actors to automate the creation of sophisticated, multilingual social engineering lures. Though the original project was shuttered in August 2023, it served as a proof-of-concept for the "jailbroken" AI era.
Key Takeaways
Purpose-Built for Crime: WormGPT was one of the first widely recognized commercialized "Dark LLMs," created to bypass the safety mechanisms found in mainstream tools like ChatGPT and Google Gemini.
The "Flawless" Phishing Threat: Its primary impact was the elimination of common red flags in phishing (such as poor grammar), allowing non-native speakers to produce professional-grade, persuasive BEC attacks.
Lowered Barrier to Entry: The tool democratized sophisticated cybercrime, enabling novice attackers with limited technical or linguistic skills to conduct high-level social engineering.
Project Shutdown (August 2023): Following heavy media scrutiny and security research reporting, the original developer (alias "Last") officially shut down the project. However, the brand name has since been co-opted by numerous copycat services and variants.
Defense Strategy: Protecting against AI-augmented attacks requires a shift toward behavioral email security, "out-of-band" verification for financial requests, and updated employee training that de-emphasizes grammar as a marker of trust.
What is WormGPT and where did it come from?
WormGPT surfaced in March 2023 on popular cybercrime forums and was widely reported on by security researchers by July of that year. The tool was developed by a threat actor using the alias "Last" who marketed it as "the blackhat alternative to GPT models." It was offered as a subscription-based service — reportedly around €60–€100 per month or €550 per year — giving paying cybercriminals access through a private web interface.
The tool is believed to be built on GPT-J, an open-source large language model (LLM) released by EleutherAI in 2021. GPT-J itself is a legitimate, publicly available model, but the WormGPT developer reportedly fine-tuned it on datasets related to malware development, phishing techniques, and other attack methodologies. The result was a generative AI chatbot with no content moderation, no ethical boundaries, and no restrictions on the type of output it would produce.
By late summer 2023, the original developer reportedly shut down the public-facing version of WormGPT, citing concerns about the media attention and potential legal scrutiny. However, cybersecurity researchers have noted that variants, copycats, and successor tools (such as FraudGPT, DarkBARD, and Evil-GPT) quickly filled the void on dark web marketplaces. The underlying concept — weaponizing generative AI for cybercrime — is very much alive.
The FBI has publicly warned about the increasing use of AI in cybercrime schemes, and CISA has issued guidance emphasizing the need for organizations to adapt their defenses as AI-powered threats evolve.
How WormGPT works
At its core, WormGPT functions much like any other generative AI chatbot. A user types in a prompt — a question or instruction — and the model generates a text-based response. The critical difference is what it's willing to generate.
Mainstream AI tools like ChatGPT, Google Gemini, and Microsoft Copilot have extensive content policies and safety filters built in. Ask ChatGPT to write a phishing email, and it will refuse. Ask it to generate ransomware code, and it will decline. These guardrails exist because the companies behind those tools have invested heavily in responsible AI development.
WormGPT was built to have none of those restrictions. Its training data was deliberately curated to include:
Malware source code and development techniques
Phishing email templates and social engineering scripts
Exploit documentation and vulnerability information
Data related to business email compromise tactics and fraud schemes
When a cybercriminal gives WormGPT a prompt like "Write a convincing email from a CEO to a finance manager requesting an urgent wire transfer," the tool generates polished, persuasive, contextually appropriate text — complete with the professional tone, urgency cues, and formatting that make BEC attacks so effective.
The model also supports multiple languages, which is significant. Historically, one of the telltale signs of a phishing email was poor grammar or awkward phrasing — often because the attacker was not a native speaker of the target's language. WormGPT eliminates that indicator almost entirely, producing fluent content in English, Spanish, French, German, and other languages.
How threat actors use WormGPT
WormGPT's capabilities make it a versatile tool for a range of cyberattacks. Here are the most common use cases security researchers have identified:
1. Business Email Compromise (BEC)
BEC is one of the most financially damaging forms of cybercrime. According to the FBI's Internet Crime Complaint Center (IC3), BEC attacks accounted for over $2.9 billion in reported losses in 2023 alone.
WormGPT excels at generating the type of carefully worded, authoritative emails that power these scams. It can impersonate executives, vendors, attorneys, or other trusted figures — crafting messages that pressure victims into transferring funds, sharing credentials, or divulging sensitive information.
2. Phishing and Spear Phishing
Beyond BEC, WormGPT can generate phishing emails at scale. Attackers can use it to:
Create highly personalized spear phishing emails targeting specific individuals based on publicly available information (from LinkedIn profiles, company websites, etc.)
Produce mass phishing campaigns with varied language so that no two emails are identical, making them more difficult for email security tools to catch using signature-based detection
Write convincing pretexting scenarios for vishing or smishing (SMS phishing) scripts
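The point about signature-based detection can be illustrated with a toy sketch. The `signature` function below is invented for illustration (real email gateways use far more sophisticated fingerprinting), but it shows why exact-match signatures break down: even a modest AI-generated rewording of a known-bad template produces an entirely different hash.

```python
import hashlib

def signature(text: str) -> str:
    """Toy exact-match signature: a hash of the normalized message body."""
    normalized = " ".join(text.lower().split())
    return hashlib.sha256(normalized.encode()).hexdigest()

# A known-bad phishing template previously logged by a security tool.
known_bad = signature("Please process the attached invoice today. This is urgent.")

# An AI-generated variant of the same lure, reworded just enough to differ.
variant = "Kindly handle the invoice attached below today; it's time-sensitive."

print(signature(variant) == known_bad)  # False: the reworded lure evades the signature
```

Because generative AI can emit hundreds of such variants on demand, defenses that key on repeated content rather than intent lose much of their value.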
3. Malware and Exploit Development
While phishing is its primary use case, WormGPT can also assist threat actors in writing malicious code. Researchers have demonstrated the tool generating:
Python-based malware scripts
Code for data exfiltration tools
Scripts designed to exploit known vulnerabilities
This capability is particularly concerning because it allows less technically skilled attackers to produce functional malicious tools they might otherwise lack the expertise to develop from scratch.
4. Social Engineering at Scale
WormGPT enables attackers to rapidly produce variations of social engineering content, including fake customer service scripts, fraudulent website copy, and deceptive social media messages. The speed and quality of output mean a single attacker can run campaigns that would previously have required a team.
WormGPT vs. ChatGPT: What's the difference?
Understanding the distinction between WormGPT and legitimate AI tools is important for cybersecurity professionals educating their organizations.
| Feature | ChatGPT (and similar tools) | WormGPT |
| --- | --- | --- |
| Developer | OpenAI (legitimate company) | Anonymous threat actor ("Last") |
| Ethical guardrails | Extensive content policies and safety filters | None — no restrictions on output |
| Training data | Broad internet text, curated and filtered | Supplemented with malware, phishing, and exploit data |
| Intended use | Productivity, education, creative work | Cybercrime — phishing, BEC, malware generation |
| Availability | Public (free and paid tiers) | Dark web forums, subscription-based |
| Content moderation | Active — refuses harmful requests | None — fulfills any request |
| Language support | Multilingual | Multilingual |
| Legal status | Legal | Illegal (facilitates criminal activity) |
It is worth noting that threat actors also attempt to "jailbreak" legitimate AI tools — using carefully crafted prompts to trick ChatGPT or similar platforms into bypassing their safety filters. While companies like OpenAI continuously patch these jailbreak methods, it remains an ongoing cat-and-mouse game. WormGPT sidesteps this entirely by simply having no filters to bypass.
Real-world risks and impact
The emergence of WormGPT and the wave of malicious AI tools that followed represents a meaningful shift in the cybersecurity threat landscape. Here's why security professionals should take it seriously:
The Democratization of Cybercrime
Before tools like WormGPT, executing a convincing BEC attack or writing functional malware required a certain level of skill. The attacker needed to understand social engineering psychology, write persuasively in the target's language, or possess coding expertise. WormGPT collapses those barriers. An attacker with minimal skills and a subscription fee can now produce enterprise-grade social engineering content in seconds.
Volume and Speed
Generative AI doesn't just improve attack quality; it dramatically increases the speed and volume at which attacks can be produced. A single threat actor using WormGPT can generate hundreds of unique, polished phishing emails in the time it previously took to write one. This scalability challenges traditional email security defenses that rely on identifying repeated patterns or known malicious templates.
Erosion of Traditional Red Flags
For years, cybersecurity awareness training has taught employees to look for common phishing indicators: misspellings, awkward grammar, generic greetings, and implausible scenarios. AI-generated phishing content can eliminate most of these telltale signs, making it significantly harder for humans to distinguish malicious emails from legitimate ones by visual inspection alone.
The Ecosystem Effect
WormGPT was just the beginning. In the months following its appearance, security researchers documented a growing ecosystem of malicious AI tools:
FraudGPT — marketed for creating phishing pages, scam emails, and cracking tools
DarkBARD — positioned as a dark web version of Google's Bard
Evil-GPT — another ChatGPT alternative without restrictions
PoisonGPT — designed to spread disinformation
This proliferation suggests that malicious AI is now a permanent fixture of the cybercrime landscape, not a passing trend.
Defending against WormGPT-powered attacks
For cybersecurity professionals, defending against AI-generated threats requires evolving both technology and training. Here are practical strategies:
1. Deploy AI-Powered Email Security
Traditional email gateways that rely on known signatures and blocklists are increasingly insufficient against AI-generated content. Modern email security solutions use machine learning and natural language processing (NLP) to analyze the intent, tone, and context of messages, detecting anomalies even when the content is grammatically perfect and visually convincing.
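As a toy illustration of intent-based analysis (commercial products use trained NLP models, not keyword lists), a scorer can flag urgency, financial, and authority cues regardless of grammatical polish. The cue lists and scoring here are invented for the sketch:

```python
# Toy illustration only: real email security uses trained NLP models,
# not hand-written keyword lists. All cue lists here are hypothetical.
URGENCY_CUES = ["urgent", "immediately", "before end of day", "asap", "right away"]
FINANCIAL_CUES = ["wire transfer", "invoice", "payment", "bank details", "gift cards"]
AUTHORITY_CUES = ["ceo", "cfo", "president", "confidential"]

def risk_score(email_body: str) -> int:
    """Count intent signals in the message; grammar quality is deliberately ignored."""
    text = email_body.lower()
    score = 0
    for cues in (URGENCY_CUES, FINANCIAL_CUES, AUTHORITY_CUES):
        score += sum(1 for cue in cues if cue in text)
    return score

msg = ("This is confidential. I need you to process a wire transfer "
       "immediately, before end of day. -- CEO")
print(risk_score(msg))  # high score despite flawless grammar
```

The key design point: the score depends on what the message asks for, not how well it is written, which is exactly the shift AI-generated phishing forces on defenders.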
2. Upgrade Security Awareness Training
Employee security awareness training programs need to be updated to reflect the new reality of AI-powered phishing. Key updates include:
Teaching employees that perfect grammar and professional tone are no longer reliable indicators of legitimacy
Emphasizing procedural verification — always confirming unusual requests (especially financial ones) through a separate, trusted communication channel
Running simulated phishing exercises that include AI-generated content to give employees realistic practice
3. Implement Strong Verification Processes
For BEC defense specifically, process is your best friend:
Require multi-person approval for wire transfers and changes to payment information
Establish verbal or in-person confirmation protocols for high-value requests
Use code words or challenge phrases for sensitive transactions
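Rules like these can also be enforced in software rather than left to memory. Below is a minimal sketch of a payment-approval check; the `WireRequest` type, the dollar threshold, and the approver count are all hypothetical, invented for illustration:

```python
from dataclasses import dataclass, field

APPROVAL_THRESHOLD = 10_000   # hypothetical policy limit, in dollars
REQUIRED_APPROVERS = 2        # hypothetical multi-person approval rule

@dataclass
class WireRequest:
    amount: float
    payee: str
    approvers: set = field(default_factory=set)
    verified_out_of_band: bool = False  # e.g. confirmed via a known phone number

def may_execute(req: WireRequest) -> bool:
    """Large transfers need multiple approvers AND out-of-band verification."""
    if req.amount < APPROVAL_THRESHOLD:
        return len(req.approvers) >= 1
    return len(req.approvers) >= REQUIRED_APPROVERS and req.verified_out_of_band

req = WireRequest(amount=250_000, payee="Acme Vendor Ltd")
req.approvers.add("finance_manager")
print(may_execute(req))  # False: missing a second approver and phone verification
```

The design choice worth noting: no matter how convincing the requesting email is, the policy gate cannot be satisfied by email alone.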
4. Adopt a Layered Security Posture
No single tool or technique will stop AI-powered attacks. A defense-in-depth approach should include:
Email authentication protocols (SPF, DKIM, and DMARC) to reduce email spoofing
Multi-factor authentication (MFA) on all accounts to limit the damage from stolen credentials
Endpoint detection and response (EDR) to catch malware that may arrive via AI-crafted phishing
Managed security services to provide 24/7 monitoring and rapid response
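For reference, SPF, DKIM, and DMARC are all published as DNS TXT records. The records below use placeholder values (the domain, selector, provider, and key are not real):

```text
; SPF: only the listed provider may send mail for example.com
example.com.                  TXT "v=spf1 include:_spf.mailprovider.example -all"

; DKIM: public key for verifying message signatures (selector "s1")
s1._domainkey.example.com.    TXT "v=DKIM1; k=rsa; p=<public key>"

; DMARC: quarantine failures and send aggregate reports
_dmarc.example.com.           TXT "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com"
```

A `p=quarantine` (or stricter `p=reject`) DMARC policy makes it much harder for attackers to spoof your domain in the first place, regardless of how convincing the message body is.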
5. Stay Current on Threat Intelligence
The malicious AI landscape evolves quickly. Cybersecurity teams should actively monitor threat intelligence feeds, participate in information sharing organizations like ISACs, and stay current on advisories from organizations like CISA and the FBI.
Bigger picture: AI and the future of cybercrime
WormGPT is a symptom of a larger trend: the weaponization of artificial intelligence. As AI models become more powerful and more accessible, and as open-source models make it easier for anyone to fine-tune a language model on custom data, the barrier to creating tools like WormGPT will continue to drop.
This doesn't mean the situation is hopeless. The same AI capabilities that power offensive tools are also being deployed on the defensive side. AI-driven threat detection, automated incident response, behavioral analytics, and intelligent email filtering are all advancing rapidly.
The National Institute of Standards and Technology (NIST) is actively developing frameworks for AI risk management, and the White House Executive Order on Safe, Secure, and Trustworthy AI (October 2023) underscores the federal government's recognition that AI safety — including its implications for cybersecurity — is a national priority.
For cybersecurity professionals, the takeaway is clear: AI is now a tool in the adversary's toolkit, and your defenses need to account for that reality. Understanding tools like WormGPT isn't about fearmongering; it's about being informed so you can protect your organization with the right combination of technology, training, and process.
Frequently Asked Questions