WormGPT

What is WormGPT?

Written by: Lizzie Danielson

Published: 4/24/2026


WormGPT was a malicious generative AI tool launched in early 2023, designed to facilitate Business Email Compromise (BEC) and phishing attacks. By removing the ethical "guardrails" found in mainstream AI, it allowed threat actors to automate the creation of sophisticated, multilingual social engineering lures. Though the original project was shuttered in August 2023, it served as a proof-of-concept for the "jailbroken" AI era.

Key Takeaways

  • Purpose-Built for Crime: WormGPT was one of the first widely recognized commercialized "Dark LLMs," created to bypass the safety mechanisms found in mainstream tools like ChatGPT and Google Gemini.

  • The "Flawless" Phishing Threat: Its primary impact was the elimination of common red flags in phishing (such as poor grammar), allowing non-native speakers to produce professional-grade, persuasive BEC attacks.

  • Lowered Barrier to Entry: The tool democratized sophisticated cybercrime, enabling novice attackers with limited technical or linguistic skills to conduct high-level social engineering.

  • Project Shutdown (August 2023): Following heavy media scrutiny and security research reporting, the original developer (alias "Last") officially shut down the project. However, the brand name has since been co-opted by numerous copycat services and variants.

  • Defense Strategy: Protecting against AI-augmented attacks requires a shift toward behavioral email security, "out-of-band" verification for financial requests, and updated employee training that de-emphasizes grammar as a marker of trust.

What is WormGPT and where did it come from?

WormGPT surfaced in March 2023 on popular cybercrime forums and was widely reported on by security researchers by July of that year. The tool was developed by a threat actor using the alias "Last" who marketed it as "the blackhat alternative to GPT models." It was offered as a subscription-based service — reportedly around €60–€100 per month or €550 per year — giving paying cybercriminals access through a private web interface.

The tool is believed to be built on GPT-J, an open-source large language model (LLM) originally developed by EleutherAI in 2021. GPT-J is a legitimate, publicly available model; but the WormGPT developer fine-tuned and retrained it on datasets related to malware development, phishing techniques, and other attack methodologies. The result was a generative AI chatbot that had no content moderation, no ethical boundaries, and no restrictions on the type of output it would produce.

By late summer 2023, the original developer reportedly shut down the public-facing version of WormGPT, citing concerns about the media attention and potential legal scrutiny. However, cybersecurity researchers have noted that variants, copycats, and successor tools (such as FraudGPT, DarkBARD, and Evil-GPT) quickly filled the void on dark web marketplaces. The underlying concept — weaponizing generative AI for cybercrime — is very much alive.

The FBI has publicly warned about the increasing use of AI in cybercrime schemes, and CISA has issued guidance emphasizing the need for organizations to adapt their defenses as AI-powered threats evolve.

How WormGPT works

At its core, WormGPT functions much like any other generative AI chatbot. A user types in a prompt — a question or instruction — and the model generates a text-based response. The critical difference is what it's willing to generate.

Mainstream AI tools like ChatGPT, Google Gemini, and Microsoft Copilot have extensive content policies and safety filters built in. Ask ChatGPT to write a phishing email, and it will refuse. Ask it to generate ransomware code, and it will decline. These guardrails exist because the companies behind those tools have invested heavily in responsible AI development.

WormGPT was built to have none of those restrictions. Its training data was deliberately curated to include:

  • Malware source code and development techniques

  • Phishing email templates and social engineering scripts

  • Exploit documentation and vulnerability information

  • Data related to business email compromise tactics and fraud schemes

When a cybercriminal gives WormGPT a prompt like "Write a convincing email from a CEO to a finance manager requesting an urgent wire transfer," the tool generates polished, persuasive, contextually appropriate text — complete with the professional tone, urgency cues, and formatting that make BEC attacks so effective.

The model also supports multiple languages, which is significant. Historically, one of the telltale signs of a phishing email was poor grammar or awkward phrasing — often because the attacker was not a native speaker of the target's language. WormGPT eliminates that indicator almost entirely, producing fluent content in English, Spanish, French, German, and other languages.

How bad threat actors use WormGPT

WormGPT's capabilities make it a versatile tool for a range of cyberattacks. Here are the most common use cases security researchers have identified:

1. Business Email Compromise (BEC)

BEC is one of the most financially damaging forms of cybercrime. According to the FBI's Internet Crime Complaint Center (IC3), BEC attacks accounted for over $2.9 billion in reported losses in 2023 alone.

WormGPT excels at generating the type of carefully worded, authoritative emails that power these scams. It can impersonate executives, vendors, attorneys, or other trusted figures — crafting messages that pressure victims into transferring funds, sharing credentials, or divulging sensitive information.

2. Phishing and Spear Phishing

Beyond BEC, WormGPT can generate phishing emails at scale. Attackers can use it to:

  • Create highly personalized spear phishing emails targeting specific individuals based on publicly available information (from LinkedIn profiles, company websites, etc.)

  • Produce mass phishing campaigns with varied language so that no two emails are identical, making them more difficult for email security tools to catch using signature-based detection

  • Write convincing pretexting scenarios for vishing or smishing (SMS phishing) scripts
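The second bullet can be made concrete: signature-based filters fingerprint a known-bad message, so every AI-reworded variant gets a new fingerprint. A minimal sketch in Python (the normalization scheme here is a simplification for illustration, not any vendor's actual detection logic):

```python
import hashlib

def email_signature(body: str) -> str:
    # Naive signature-based detection: hash the normalized body text.
    normalized = " ".join(body.lower().split())
    return hashlib.sha256(normalized.encode()).hexdigest()

variant_a = "Please process the attached invoice today. The CFO has approved it."
variant_b = "Kindly handle the attached invoice today; it already has CFO approval."

# Same lure, same intent, but the hashes never match, so a blocklist of
# known-bad signatures only catches exact repeats.
print(email_signature(variant_a) == email_signature(variant_b))  # False
```

Because generative AI produces a fresh wording on every run, a blocklist of message hashes effectively never fires, which is why defenses have shifted toward analyzing intent rather than exact content.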

3. Malware and Exploit Development

While phishing is its primary use case, WormGPT can also assist threat actors in writing malicious code. Researchers have demonstrated the tool generating:

  • Python-based malware scripts

  • Code for data exfiltration tools

  • Scripts designed to exploit known vulnerabilities

This capability is particularly concerning because it allows less technically skilled attackers to produce functional malicious tools they might otherwise lack the expertise to develop from scratch.

4. Social Engineering at Scale

WormGPT enables attackers to rapidly produce variations of social engineering content, including fake customer service scripts, fraudulent website copy, and deceptive social media messages. The speed and quality of output mean a single attacker can run campaigns that would previously have required a team.

WormGPT vs. ChatGPT: What's the difference?

Understanding the distinction between WormGPT and legitimate AI tools is important for cybersecurity professionals educating their organizations.

Feature              ChatGPT (and similar tools)                      WormGPT
Developer            OpenAI (legitimate company)                      Anonymous threat actor ("Last")
Ethical guardrails   Extensive content policies and safety filters    None — no restrictions on output
Training data        Broad internet text, curated and filtered        Supplemented with malware, phishing, and exploit data
Intended use         Productivity, education, creative work           Cybercrime — phishing, BEC, malware generation
Availability         Public (free and paid tiers)                     Dark web forums, subscription-based
Content moderation   Active — refuses harmful requests                None — fulfills any request
Language support     Multilingual                                     Multilingual
Legal status         Legal                                            Illegal (facilitates criminal activity)

It is worth noting that threat actors also attempt to "jailbreak" legitimate AI tools — using carefully crafted prompts to trick ChatGPT or similar platforms into bypassing their safety filters. While companies like OpenAI continuously patch these jailbreak methods, it remains an ongoing cat-and-mouse game. WormGPT sidesteps this entirely by simply having no filters to bypass.

Real-world risks and impact

The emergence of WormGPT and the wave of malicious AI tools that followed represents a meaningful shift in the cybersecurity threat landscape. Here's why security professionals should take it seriously:

The Democratization of Cybercrime

Before tools like WormGPT, executing a convincing BEC attack or writing functional malware required a certain level of skill. The attacker needed to understand social engineering psychology, write persuasively in the target's language, or possess coding expertise. WormGPT collapses those barriers. An attacker with minimal skills and a subscription fee can now produce enterprise-grade social engineering content in seconds.

Volume and Speed

Generative AI doesn't just improve attack quality; it dramatically increases the speed and volume at which attacks can be produced. A single threat actor using WormGPT can generate hundreds of unique, polished phishing emails in the time it previously took to write one. This scalability challenges traditional email security defenses that rely on identifying repeated patterns or known malicious templates.

Erosion of Traditional Red Flags

For years, cybersecurity awareness training has taught employees to look for common phishing indicators: misspellings, awkward grammar, generic greetings, and implausible scenarios. AI-generated phishing content can eliminate most of these telltale signs, making it significantly harder for humans to distinguish malicious emails from legitimate ones by visual inspection alone.

The Ecosystem Effect

WormGPT was just the beginning. In the months following its appearance, security researchers documented a growing ecosystem of malicious AI tools:

  • FraudGPT — marketed for creating phishing pages, scam emails, and cracking tools

  • DarkBARD — positioned as a dark web version of Google's Bard

  • Evil-GPT — another ChatGPT alternative without restrictions

  • PoisonGPT — designed to spread disinformation

This proliferation suggests that malicious AI is now a permanent fixture of the cybercrime landscape, not a passing trend.

Defending against WormGPT-powered attacks

For cybersecurity professionals, defending against AI-generated threats requires evolving both technology and training. Here are practical strategies:

1. Deploy AI-Powered Email Security

Traditional email gateways that rely on known signatures and blocklists are increasingly insufficient against AI-generated content. Modern email security solutions use machine learning and natural language processing (NLP) to analyze the intent, tone, and context of messages, detecting anomalies even when the content is grammatically perfect and visually convincing.
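As a toy illustration of intent-based analysis (keyword heuristics standing in for a trained NLP model; the cue lists are invented for this sketch, not any product's ruleset), even a grammatically flawless BEC lure can be flagged when it combines financial, urgency, and secrecy cues:

```python
import re

# Hypothetical cue families. A real system would use a trained classifier,
# but the principle is the same: score intent, not spelling mistakes.
CUES = {
    "financial": r"\b(wire|transfer|invoice|payment|bank details)\b",
    "urgency":   r"\b(urgent|immediately|today|asap|right away)\b",
    "secrecy":   r"\b(confidential|discreet|do not discuss|between us)\b",
}

def risk_score(body: str) -> int:
    # Count how many distinct cue families appear in the message.
    text = body.lower()
    return sum(1 for pattern in CUES.values() if re.search(pattern, text))

email = ("I need you to process an urgent wire transfer today. "
         "Keep this confidential until the deal closes.")
print(risk_score(email))  # 3 -- all three cue families present
```

Note that nothing in the scored email is misspelled or ungrammatical; the signal comes entirely from what the message is asking for and how.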

2. Upgrade Security Awareness Training

Employee security awareness training programs need to be updated to reflect the new reality of AI-powered phishing. Key updates include:

  • Teaching employees that perfect grammar and professional tone are no longer reliable indicators of legitimacy

  • Emphasizing procedural verification — always confirming unusual requests (especially financial ones) through a separate, trusted communication channel

  • Running simulated phishing exercises that include AI-generated content to give employees realistic practice

3. Implement Strong Verification Processes

For BEC defense specifically, process is your best friend:

  • Require multi-person approval for wire transfers and changes to payment information

  • Establish verbal or in-person confirmation protocols for high-value requests

  • Use code words or challenge phrases for sensitive transactions
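These controls are procedural, but the multi-person approval rule can also be enforced in software. A hypothetical sketch (class name, threshold, and approver names are illustrative):

```python
class WireTransferRequest:
    # A transfer executes only after two distinct approvers sign off,
    # and the requester can never approve their own transaction.
    REQUIRED_APPROVALS = 2

    def __init__(self, requester: str, amount: float):
        self.requester = requester
        self.amount = amount
        self.approvals: set[str] = set()

    def approve(self, approver: str) -> None:
        if approver == self.requester:
            raise PermissionError("Requesters cannot approve their own transfers")
        self.approvals.add(approver)  # a set ignores duplicate approvals

    @property
    def authorized(self) -> bool:
        return len(self.approvals) >= self.REQUIRED_APPROVALS

req = WireTransferRequest("alice", 250_000.0)
req.approve("bob")
print(req.authorized)  # False -- still needs a second approver
req.approve("carol")
print(req.authorized)  # True
```

The point of encoding the rule is that it holds even when the request itself is a perfect AI-generated impersonation: no single convinced employee can move the money alone.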

4. Adopt a Layered Security Posture

No single tool or technique will stop AI-powered attacks. A defense-in-depth approach should include:

  • Email authentication protocols (SPF, DKIM, and DMARC) to reduce email spoofing

  • Multi-factor authentication (MFA) on all accounts to limit the damage from stolen credentials

  • Endpoint detection and response (EDR) to catch malware that may arrive via AI-crafted phishing

  • Managed security services to provide 24/7 monitoring and rapid response
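The first bullet translates into DNS TXT records. A hedged illustration for a hypothetical domain `example.com` (the mailer hostname, DKIM selector, and reporting address are placeholders):

```
example.com.                       IN TXT "v=spf1 include:_spf.example-mailer.com -all"
selector1._domainkey.example.com.  IN TXT "v=DKIM1; k=rsa; p=<public-key>"
_dmarc.example.com.                IN TXT "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com"
```

A DMARC policy of `p=reject` asks receiving servers to refuse mail that fails SPF/DKIM alignment, while `rua` directs aggregate reports to the listed mailbox so spoofing attempts against your domain become visible.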

5. Stay Current on Threat Intelligence

The malicious AI landscape evolves quickly. Cybersecurity teams should actively monitor threat intelligence feeds, participate in information sharing organizations like ISACs, and stay current on advisories from organizations like CISA and the FBI.

Bigger picture: AI and the future of cybercrime

WormGPT is a symptom of a larger trend: the weaponization of artificial intelligence. As AI models become more powerful and more accessible, and as open-source models make it easier for anyone to fine-tune a language model on custom data, the barrier to creating tools like WormGPT will continue to drop.

This doesn't mean the situation is hopeless. The same AI capabilities that power offensive tools are also being deployed on the defensive side. AI-driven threat detection, automated incident response, behavioral analytics, and intelligent email filtering are all advancing rapidly.

The National Institute of Standards and Technology (NIST) is actively developing frameworks for AI risk management, and the White House Executive Order on Safe, Secure, and Trustworthy AI (October 2023) underscores the federal government's recognition that AI safety — including its implications for cybersecurity — is a national priority.

For cybersecurity professionals, the takeaway is clear: AI is now a tool in the adversary's toolkit, and your defenses need to account for that reality. Understanding tools like WormGPT isn't about fearmongering; it's about being informed so you can protect your organization with the right combination of technology, training, and process.

Frequently Asked Questions

Is WormGPT still active?

The original version of WormGPT was reportedly shut down by its developer in August 2023 due to media attention. However, variants, clones, and successor tools (such as FraudGPT and Evil-GPT) continue to circulate on dark web forums. The threat model WormGPT represents — unrestricted AI used for cybercrime — remains very much active.

Where is WormGPT sold?

WormGPT and similar tools are typically sold on dark web marketplaces and underground hacking forums as subscription services. They are not available through legitimate app stores or websites. Attempting to access, purchase, or use WormGPT is illegal and constitutes participation in cybercriminal activity.

Can mainstream AI tools like ChatGPT be misused the same way?

ChatGPT and other mainstream AI tools have robust safety filters designed to prevent misuse. While threat actors occasionally find temporary "jailbreak" techniques to bypass these filters, the companies behind these tools actively patch such exploits. WormGPT was purpose-built without any such restrictions, making it fundamentally different in its intended use.

What makes WormGPT-generated phishing emails so hard to spot?

WormGPT produces emails that are grammatically flawless, contextually relevant, professionally toned, and highly personalized. This eliminates many of the traditional red flags (typos, awkward phrasing, generic content) that employees are trained to spot, making the emails significantly harder to identify as fraudulent.

How can organizations defend against WormGPT-powered attacks?

The most effective approach combines AI-powered email security tools that can detect sophisticated social engineering with updated employee awareness training that reflects the new threat landscape. Additionally, implementing strong verification procedures for financial transactions, enforcing multi-factor authentication, and maintaining a layered security posture with endpoint detection and managed monitoring services are all critical defenses.
