What is AI phishing?
Phishing has been around for decades. The basic playbook has always been the same: send someone a fake message, make it look real enough to fool them, and wait for them to hand over credentials, click a bad link, or wire funds to the wrong account.
For a long time, phishing attacks were easy to spot if you knew what to look for. Awkward grammar, misspelled domain names, and generic greetings like "Dear Customer" were tell-tale signs. Security training taught people to pause before clicking, and most organizations felt reasonably confident their employees could sniff out a scam.
Then generative AI changed everything.
AI phishing, sometimes called generative AI phishing or AI-powered phishing, is what happens when attackers take the same old playbook and supercharge it with large language models (LLMs), voice cloning tools, and AI-generated imagery. The result? Messages that read like they came from a trusted colleague. Voicemails that sound exactly like your CEO. Emails that reference your name, your company, your recent projects, and your actual relationships — because AI scraped that information from LinkedIn, your company website, or a data breach.
See how easy it was for the Huntress team to create a deepfake of CEO Kyle Hanslovan during a Tradecraft Tuesday session.
The volume has exploded, too. Generative AI allows cybercriminals to produce thousands of highly customized phishing messages in the time it used to take to write one. That means more attacks, targeting more people, with far less effort on the attacker's end.