Key Highlights:
- Threat intelligence shows over 80% of phishing emails use AI-generated content.
- Polymorphic attacks change constantly, breaking signature-based security systems.
- Experts say behavioural detection and zero trust models are now critical.
Phishing attacks have entered a new phase. A cybersecurity expert warns that AI has made modern phishing attacks nearly impossible to detect using traditional tools. Threat intelligence shows most phishing emails now rely on AI-generated language, forcing organisations to rethink how they defend email systems.
The warning comes as attackers increasingly use large language models to create realistic, localised, and constantly changing phishing messages. These emails look legitimate, sound professional, and arrive at massive scale. According to recent analysis, this shift has already broken many existing security controls.
At the centre of this assessment is Danny Mitchell, a cybersecurity writer at Heimdal Security. Mitchell has been tracking how AI-driven phishing has evolved and why awareness training and signature-based detection no longer work on their own.
What makes AI-driven phishing attacks different
A decade ago, phishing emails were easy to spot. They often contained spelling errors, strange formatting, or generic greetings. That visual noise acted as an early warning system for users and filters alike. That safety net is gone.
Today’s phishing attacks are clean, well-written, and context-aware. AI tools can mimic the tone of a real colleague, match a company’s writing style, and reference current projects or events. The result is an email that feels routine instead of suspicious.
Mitchell says this shift happened almost overnight.
The rapid rise of AI-generated language
Large language models became widely available in late 2022 and early 2023. Within months, threat actors began using them as offensive tools.
“What we’re seeing now is radically different from anything we’ve dealt with before,” Mitchell explains. “AI helps attackers write better emails and allows them to produce thousands of variations simultaneously, each one tailored to specific targets, industries, or even individuals.”
This capability changed phishing at a structural level. Instead of recycling templates, attackers now generate unique emails on demand. Each message looks authentic and avoids repetition.
AI gives attackers three critical advantages: linguistic realism, instant localisation across languages and regions, and continuous variation that defeats pattern-based detection.
Why scale now favours attackers
Before AI, crafting phishing emails required time and effort. A human attacker could only produce a limited number of messages per day. That constraint no longer exists.
AI systems can generate thousands of emails per hour. Each version can differ slightly in wording, structure, and intent. This creates a flood of unique messages that overwhelm filters.
“An AI model can generate a convincing email in flawless English, German, Japanese, or any other language,” says Mitchell. “It can adopt the writing style of a specific company, reference current events, and include contextual details that make the message feel legitimate.”
This scale makes phishing attacks cheaper, faster, and more efficient than ever before.
What are polymorphic phishing attacks
Polymorphic phishing refers to attacks that constantly change form. No two emails are exactly the same. Subject lines vary. Language shifts. Links and attachments rotate.
This directly undermines how most security systems work.
“Traditional security systems look for known threat signatures,” Mitchell notes. “But if every phishing email is unique, there’s no signature to detect. The attack adapts faster than defences can respond.”
Threat intelligence supports this claim. More than 90% of polymorphic phishing campaigns now leverage large language models. These attacks mutate faster than blocklists can update.
Attackers also use AI to scrape public data. Social media posts, company websites, and leaked databases provide personal details. That information fuels hyper-personalised phishing emails that feel familiar and trustworthy.
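To see why polymorphism defeats signature matching, consider a minimal sketch in which a filter blocklists emails by content hash (the email text and addresses here are invented for illustration). Changing a single word produces an entirely different hash, so a variant of an already-blocked lure passes straight through:

```python
import hashlib

def signature(email_body: str) -> str:
    """Hash-based signature, as used by simple blocklist filters."""
    return hashlib.sha256(email_body.encode("utf-8")).hexdigest()

# Two polymorphic variants of the same lure: only one word differs.
variant_a = "Hi Sam, please review the attached invoice before Friday."
variant_b = "Hi Sam, kindly review the attached invoice before Friday."

# Variant A was identified earlier and its signature blocklisted.
blocklist = {signature(variant_a)}

print(signature(variant_a) in blocklist)  # True:  variant A is blocked
print(signature(variant_b) in blocklist)  # False: variant B sails through
```

Real filters use richer signatures than a raw hash, but the underlying problem is the same: a signature describes one message, and a language model can emit thousands that each need their own.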
Why traditional defences are failing against phishing attacks
Most organisations still rely on three core defences: spam filters, signature-based detection, and employee awareness training. These tools worked when phishing was crude. They struggle in the AI era.
Signature-based detection cannot keep up
Signature-based systems depend on known patterns. Once a phishing email is identified, its characteristics are added to a blocklist. Future emails matching that pattern get blocked.
AI breaks this cycle.
“We’re fighting a reactive battle,” Mitchell says. “By the time a new phishing template is identified and added to blocklists, attackers have already moved on to hundreds of new variations. The system can’t keep up.”
AI-generated emails often resemble legitimate communication so closely that even advanced filters let them through.
Human error remains a major risk
Training teaches employees to look for red flags: misspellings, generic greetings, and urgent language. Those lessons still matter, but they no longer cover the full threat.
“The problem is that modern phishing emails don’t contain those obvious mistakes anymore,” Mitchell explains. “They’re grammatically perfect, appropriately formatted, and contextually relevant. Even trained security professionals can be fooled, especially when they’re busy or distracted.”
This makes phishing attacks harder to spot during normal work routines.
Trust in email is being exploited
Email remains a trusted business tool. People expect messages from colleagues, partners, and vendors. That expectation creates an opening attackers exploit.
“Once someone believes an email is legitimate, they’re far less likely to question it,” says Mitchell. “AI-generated phishing leverages that psychology. The emails feel right, they match expectations, and that’s precisely what makes them dangerous.”
How organisations should adapt their security strategy
Experts agree that no single defence can stop AI-driven phishing. Organisations must shift toward adaptive, layered security models.
Behavioural detection over content scanning
Behavioural detection focuses on how emails behave, not just what they say. Systems analyse sender patterns, timing, and workflow consistency.
“Behavioural detection focuses on context rather than content,” Mitchell notes. “If your finance director suddenly emails asking for an urgent wire transfer at 11 p.m., the system flags it because the behaviour is inconsistent with established patterns.”
This approach catches anomalies even when the email text looks legitimate.
Why layered security is essential
Layered security adds redundancy. Email filtering works alongside endpoint protection, network monitoring, and identity controls.
“If one layer fails, others can catch the threat,” Mitchell explains. “It’s about creating redundancy so that no single point of failure compromises your entire security posture.”
This reduces the impact of inevitable breaches.
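The redundancy Mitchell describes can be sketched as independent checks where any single layer firing is enough to flag a message. The layers and the sample email below are deliberately simplified stand-ins for real products (content scanning, link reputation, behavioural analysis):

```python
def content_filter(email: dict) -> bool:
    """Layer 1: naive keyword scan - misses clean AI-written text."""
    return any(w in email["body"].lower() for w in ("lottery", "prince"))

def link_reputation(email: dict) -> bool:
    """Layer 2: hypothetical blocklist of known-bad link domains."""
    return email.get("link_domain") in {"evil.example"}

def behaviour_check(email: dict) -> bool:
    """Layer 3: flag out-of-hours requests for money."""
    return email["requests_money"] and not 8 <= email["hour_sent"] < 19

LAYERS = [content_filter, link_reputation, behaviour_check]

def is_suspicious(email: dict) -> bool:
    # Any one layer firing is enough: one miss is not a total failure.
    return any(layer(email) for layer in LAYERS)

# A clean AI-written phish slips past the content filter (layer 1),
# but layers 2 and 3 still catch it.
ai_phish = {"body": "Hi, please process the attached invoice today.",
            "link_domain": "evil.example",
            "requests_money": True, "hour_sent": 23}
print(is_suspicious(ai_phish))  # True
```

The design choice worth noting is that the layers examine different properties of the message, so an attacker who optimises against one (flawless prose) gains nothing against the others.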
Zero trust is no longer optional
Zero trust architecture assumes no user or device is automatically safe. Every access request must be verified.
“Every access request should be verified,” Mitchell says. “Authentication should be multi-factor. Permissions should be limited to what’s necessary for each role.”
This limits how far attackers can move even after a successful phishing attempt.
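A minimal sketch of the verify-everything principle: every request is checked against multi-factor authentication and a least-privilege role map, so even a fully phished session cannot act beyond its role. The roles and permissions here are invented for illustration:

```python
# Hypothetical role-to-permission map: least privilege per role.
ROLE_PERMISSIONS = {
    "accounts_clerk":    {"view_invoices"},
    "finance_director":  {"view_invoices", "approve_wire_transfer"},
}

def authorise(role: str, action: str, mfa_passed: bool) -> bool:
    """Zero trust: verify every request - MFA plus an explicit role check."""
    if not mfa_passed:          # never trust the session alone
        return False
    return action in ROLE_PERMISSIONS.get(role, set())

# A phished clerk's session cannot approve a transfer, even with valid MFA,
# because the permission was never granted to that role.
print(authorise("accounts_clerk", "approve_wire_transfer", mfa_passed=True))    # False
print(authorise("finance_director", "approve_wire_transfer", mfa_passed=True))  # True
print(authorise("finance_director", "approve_wire_transfer", mfa_passed=False)) # False
```

This is the sense in which zero trust contains a breach: the attacker inherits only the compromised role's permissions, not the whole environment.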
What individuals can do right now
Individuals also play a role in defence. Verification remains one of the most effective countermeasures.
If an email asks for sensitive information or urgent action, verify it through another channel. Call the sender. Use a known messaging platform. Do not reply directly.
“Phishing attacks rely on urgency and trust,” Mitchell explains. “Taking a moment to verify breaks that momentum. It’s a simple step, but it’s remarkably effective.”
What comes next for AI-driven phishing
Mitchell warns that phishing will continue to evolve.
“AI-driven phishing is only going to get more sophisticated. We’re already seeing attackers experiment with voice cloning and video deepfakes to supplement email campaigns.
“The next frontier will likely involve real-time interaction, such as AI chatbots that can respond to questions and adapt their approach mid-conversation. We need to be prepared. Organisations that invest in adaptive security measures now will be far better positioned to handle whatever comes next.
“The key is accepting that perfect prevention is impossible. Instead, focus on detection, rapid response, and building a security culture where verification is automatic, not an afterthought.”