
AI-Powered Phishing: How LLMs Help Attackers Write Better Lures


A phishing email arrives in your inbox. It references a project you’re working on, names your manager correctly, mimics the writing style of your IT department, and asks you to verify your credentials after a “suspicious login from São Paulo.” No typos. No awkward phrasing. No generic “Dear Customer” greeting. It reads exactly like a legitimate message from your company.

Two years ago, writing this email required a human attacker who spent hours researching your organization, your role, and your communication patterns. Today, an LLM produces it in seconds. Feed it a few LinkedIn profiles and a sample company email, and it generates dozens of personalized variants, each tailored to a different target, in any language.

This is why traditional phishing detection advice about spotting grammatical errors and suspicious formatting is becoming unreliable. The signals employees were trained to look for are disappearing.

AI-powered phishing is the use of large language models and generative AI tools to create, personalize, and scale phishing attacks. Attackers use LLMs to draft convincing email copy, clone writing styles, generate pretexts tailored to specific targets, and translate lures into any language without the errors that previously served as detection signals.

According to the 2025 Verizon Data Breach Investigations Report, phishing remained the initial attack vector in 36% of breaches. SlashNext’s 2025 State of Phishing report found a 4,151% increase in AI-generated phishing messages since the public release of ChatGPT, with AI-crafted emails showing click-through rates 14 times higher than traditional mass-produced phishing. The quality improvement isn’t incremental. It’s a structural shift in how phishing operations work, reducing the skill and time required to produce attacks that pass both human scrutiny and automated email filters.

How do attackers use LLMs to craft phishing emails?


The most immediate impact is quality. Before LLMs, phishing campaigns divided into two tiers. High-effort spear phishing targeted specific individuals with researched, well-written lures. Mass phishing blasted generic templates to thousands of addresses, relying on volume over quality. LLMs collapsed this divide.

An attacker with access to any commercially available LLM can now produce spear-phishing-quality emails at mass-phishing scale. The workflow looks like this:

Reconnaissance. The attacker scrapes the target organization’s website, LinkedIn profiles, press releases, and job postings. This gives them names, roles, projects, terminology, and organizational structure.

Prompt construction. They feed this context to an LLM with instructions like: “Write an email from the IT security team at [Company] to [Employee Name], referencing the [Project Name] migration, requesting credential verification. Match corporate communication style. Include urgency but not pressure.”

Variant generation. The same prompt generates unique emails for every employee in a department. Each email references the recipient’s actual role and projects. No two emails are identical, which defeats signature-based email filters that look for duplicate content across messages.

Language adaptation. For multinational targets, the attacker generates localized versions. The German office gets native German. The Tokyo branch gets natural Japanese. No awkward machine translation artifacts.

Iteration. If initial emails don’t generate clicks, the attacker rephrases the prompt and generates new variants in minutes. A/B testing a phishing campaign becomes trivial.

This workflow doesn’t require custom models or technical sophistication. It works with off-the-shelf LLMs, many of which have weak enough safety filters to produce convincing pretexts when prompted indirectly.

Why are AI phishing emails harder to detect?


Employees have been trained for years to look for specific indicators: spelling mistakes, grammatical errors, generic greetings, awkward phrasing, mismatched sender domains. These signals worked when most phishing emails were written by non-native speakers using templates.

LLM-generated phishing eliminates most of these signals:

No language errors. LLMs produce grammatically correct text in any language. The “Nigerian prince” era of broken English is over for any attacker with access to an AI model.

Contextual accuracy. When fed reconnaissance data, LLMs reference real projects, real people, and real company events. The email doesn’t feel like it came from outside the organization.

Style matching. LLMs can mimic formal corporate communication, casual Slack-style messages, or technical IT notifications. When the attacker provides sample communications, the model matches tone, vocabulary, and structure closely enough to pass casual inspection.

Unique content. Each generated email is linguistically unique. Email security tools that rely on pattern matching across messages won’t flag them because there’s no pattern to match. The content resembles legitimate business communication rather than a mass campaign.

Emotional calibration. LLMs can tune the urgency level precisely. Not “YOUR ACCOUNT WILL BE DELETED” all-caps panic, but “we noticed some unusual activity and wanted to confirm it was you.” Professional, measured, and more believable.

This doesn’t mean detection is impossible. It means that the detection methods employees have relied on for a decade need updating. The Phishing Detection guide still provides useful frameworks, but the emphasis has shifted from spotting errors to verifying requests through independent channels.
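The “unique content” problem is easy to make concrete. The sketch below uses a plain SHA-256 hash as a deliberately simplified stand-in for a signature-based email filter (real products use fuzzy hashing and ML classifiers, not exact hashes), and the sample messages are invented for illustration:

```python
import hashlib

def fingerprint(body: str) -> str:
    """Normalize whitespace and case, then hash the body -- a toy
    version of how a signature-based filter fingerprints content."""
    normalized = " ".join(body.lower().split())
    return hashlib.sha256(normalized.encode()).hexdigest()

# A template campaign: every copy hashes identically, so one
# signature blocks the entire blast.
template = "Dear Customer, verify your account immediately."
template_hashes = {fingerprint(template) for _ in range(1000)}
print(len(template_hashes))  # 1 -- a single signature covers all copies

# LLM-generated variants: each body is linguistically unique,
# so every message produces a distinct fingerprint.
variants = [
    "Hi Dana, we noticed a sign-in from São Paulo tied to the Atlas migration.",
    "Hello Priya, could you confirm the login we flagged on your workspace?",
    "Morning Tom, IT spotted unusual activity on your staging credentials.",
]
variant_hashes = {fingerprint(v) for v in variants}
print(len(variant_hashes))  # 3 -- no shared signature to match on
```

The template blast collapses to one signature; the AI-generated variants share no fingerprint at all, which is why content matching alone no longer catches a campaign.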

How does AI phishing overlap with business email compromise?


Business email compromise (BEC) was already the costliest form of email fraud before AI tools entered the picture. The FBI’s Internet Crime Complaint Center reported $2.9 billion in BEC losses in 2023. LLMs make BEC attacks easier to execute and harder to stop.

Traditional BEC requires an attacker to compromise or spoof an executive’s email account and then write a convincing message to the finance team. The writing step was the bottleneck. Impersonating a CEO’s communication style convincingly enough to trigger a wire transfer required studying how the executive writes.

LLMs remove that bottleneck. Feed the model a few samples of the CEO’s emails (available from past compromises, public statements, or social media posts) and it produces messages that match the executive’s voice. Short, direct emails for CEOs known for brevity. Detailed, structured messages for executives who write long-form.

The combination becomes more dangerous when paired with deepfake voice cloning. The AI-written email creates the initial pretext. A follow-up phone call using the executive’s cloned voice confirms the request. The finance team sees a written request and hears verbal confirmation from what sounds like their boss.

For a hands-on look at this attack chain, walk through the Business Email Compromise exercise and the OneNote Email Attack case study to see how BEC unfolds in real scenarios.

What role does personalization at scale play?


The defining advantage of AI phishing isn’t quality or speed alone. It’s the ability to personalize at scale.

Before LLMs, personalization required manual effort. An attacker could write a personalized email to ten targets per day if they were fast. Scaling required sacrificing personalization, which is why mass phishing campaigns used generic templates.

Now an attacker generates 10,000 personalized emails in an afternoon. Each one references the recipient’s role, department, recent company news, and relevant projects. The attacker doesn’t even need to read the reconnaissance data manually. They feed the raw data to the LLM and let it extract relevant personalization details automatically.

This creates a problem for security teams. Phishing simulations and training programs typically teach employees to distrust generic messages. But when every phishing email is personalized, “Is this message generic?” stops being a useful filter.

What still works as a detection signal:

Unusual requests. The content may be perfectly written, but the request itself is abnormal. A “CEO” asking for gift cards. An “IT team” requesting passwords via email. A “vendor” changing bank details. The behavioral red flags survive even when linguistic red flags disappear.

Urgency pressure. AI-generated or not, phishing emails still rely on creating time pressure to prevent verification. “Please process this before end of day.” “This needs immediate attention.” The urgency is a feature of the attack, not a flaw the attacker will optimize away.

Out-of-band verification. When in doubt, contact the sender through a separate channel. Call them on a known number. Walk to their desk. Message them on a different platform. This single habit defeats the entire AI-personalization advantage.
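These behavioral signals are also easier to automate than linguistic ones. The sketch below is illustrative only (the patterns, helper name, and sample email are invented for this example, and production tools use trained classifiers rather than regex lists), but it shows the shift from scoring the writing to scoring the request:

```python
import re

# Hypothetical high-risk request patterns. The point is that these flag
# *what is being asked*, not how well the email is written.
RISK_PATTERNS = {
    "credential request": r"(verify|confirm|send).{0,40}(password|credentials)",
    "gift card purchase": r"gift\s*cards?",
    "payment redirect": r"(new|updated|changed).{0,30}(bank|account|wire)",
}

def flag_request(body: str) -> list[str]:
    """Return the behavioral red flags an otherwise flawless email trips."""
    text = body.lower()
    return [name for name, pat in RISK_PATTERNS.items() if re.search(pat, text)]

# Perfect grammar, plausible context -- but the request itself is abnormal.
email = (
    "Hi, this is IT security. After the suspicious login we discussed, "
    "please verify your password using the portal link below."
)
print(flag_request(email))  # ['credential request']
```

An email can pass every linguistic check and still trip a behavioral one; that asymmetry is exactly what out-of-band verification exploits.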

Our phishing simulation training guide covers how organizations can build exercises that test for these behavioral signals rather than relying on employees to spot linguistic errors.

How are attackers using AI for multi-channel phishing?


Phishing is no longer an email-only threat. LLMs enable attackers to run coordinated campaigns across multiple channels.

Email plus SMS. The attacker sends a professional phishing email, then follows up with a smishing message that references the email: “Did you see the security alert from IT? Here’s the direct link to verify your account.” The SMS reinforces the email’s legitimacy.

Email plus voice. After the phishing email lands, a vishing call follows. The caller (potentially using a cloned voice) references the email and adds verbal pressure. Callback phishing (telephone-oriented attack delivery, or TOAD) combines email and phone inherently, with the email directing the target to call a fake support number.

LinkedIn plus email. An attacker creates a fake LinkedIn profile using AI-generated content and images, connects with targets at the organization, then sends phishing emails that reference the LinkedIn connection. The target checks LinkedIn, sees a plausible profile, and trusts the email.

Slack and Teams. In organizations with compromised credentials, attackers use AI to generate internal messages that match the company’s communication culture. A well-crafted message in a #general Slack channel from a “new hire” can distribute malicious links to hundreds of employees simultaneously.

Each channel reinforces the others. When the email, the text, and the phone call all tell the same story, most people stop questioning it.

What makes executive targeting with AI phishing different?


Whaling attacks (phishing that specifically targets executives) benefit disproportionately from AI tools. Executives have large public footprints: conference talks, press interviews, social media posts, SEC filings, board memberships. All of this feeds the LLM’s personalization engine.

An AI-crafted whaling email to a CFO might reference a recent earnings call, mention a specific acquisition target that appeared in trade press, and request a “confidential” wire transfer to a “new counsel” for the deal. The email uses the board chair’s name, references their last meeting, and matches the communication style the CFO expects from that person.

The Barrel Phishing technique is particularly effective against executives when combined with AI. The first email is benign (an introduction, a scheduling request), establishing the sender as legitimate. The second email contains the payload. LLMs make generating this two-step sequence trivial, and each email reads as professionally as any real executive communication.

How should organizations adapt their training?


If your security awareness training program still focuses primarily on “spot the typo” exercises, it’s training employees for yesterday’s phishing landscape.

Effective training against AI phishing emphasizes behavior, not inspection:

Verify before acting. Teach employees to verify unusual requests through a separate communication channel. Every time. Even when the email looks perfect. Especially when the email looks perfect.

Question the request, not the writing. Shift training from “Does this email look suspicious?” to “Is this request something I should fulfill without independent confirmation?” A perfect email asking for credentials is still suspicious if you wouldn’t normally receive that request by email.

Simulate realistic attacks. Phishing simulations using template-based lures don’t prepare employees for AI-generated attacks. Simulations need to match the quality and personalization employees will face in real attacks.

Train for multi-channel. Employees need to recognize that a phishing campaign might touch their email, phone, SMS, and social media. Receiving the “same” request across multiple channels doesn’t make it more legitimate. It might mean a coordinated attack.

Update frequently. AI phishing techniques evolve faster than annual training cycles. Monthly training keeps teams aware of current tactics rather than outdated patterns.

The AI-Powered Phishing exercise lets employees interact with realistic AI-generated phishing scenarios where the traditional red flags have been deliberately removed. It builds the habit of verifying requests rather than inspecting grammar.


Explore our Security Awareness training catalogue for phishing exercises, or visit the AI Security catalogue for hands-on training on LLM-specific risks including prompt injection and AI chatbot manipulation.