AI poisoning isn't one technique. It's a category with three distinct forms that operate at different points in how AI systems work.
Training data poisoning
This happens before most users ever interact with an AI. Machine learning models learn from large datasets, and if an attacker can influence that dataset, they can teach the model to behave incorrectly in specific, targeted situations.
Inject false medical records into a healthcare AI's training data, and it misclassifies diagnoses. Feed manipulated network traffic logs to a security AI, and it learns to misinterpret certain potential attacks. The model believes it's working correctly. It's been guided to have blind spots.
This can be a slower, more sophisticated attack—but the damage compounds with every decision the compromised model makes.
AI search result poisoning
This is the type most everyday users and employees are running into right now—and it requires no technical sophistication at all.
AI assistants like Google's AI Overviews, ChatGPT, and Copilot generate answers by pulling from web content they can find and index. Attackers exploit this by seeding the web with misleading pages: fake customer support numbers that connect to scam call centers, malicious download links disguised as legitimate software, and fabricated how-to instructions that execute an attack.
The user doesn't click a suspicious link. The AI just tells them the wrong thing—confidently, in plain language, with no URL to second-guess.
Security researcher Bruce Schneier demonstrated this in under 24 hours: by publishing a single fabricated article on a personal website, he had both Google AI Overviews and ChatGPT repeating the invented "facts" as truth to anyone who asked.
When an AI agent operates autonomously by browsing the web, summarizing documents, executing workflows—attackers can embed hidden instructions inside content the agent processes. Those instructions redirect the agent's behavior: exfiltrate data, bypass security rules, and take actions the user never authorized.
Memory poisoning extends this further. AI agents that retain context across sessions can have false information injected into that memory, corrupting every decision they make going forward.