phishing

3 posts with the tag “phishing”

Social Engineering Attacks: How Hackers Exploit Human Psychology

Social engineering attacks - puppet strings representing psychological manipulation

A hacker doesn’t need to crack your encryption. They just need to convince one employee to help them.

Social engineering attacks exploit human psychology instead of technical vulnerabilities. While your security team patches software and monitors networks, attackers study your organization chart, LinkedIn profiles, and even your company’s Glassdoor reviews, looking for ways to manipulate the humans behind your defenses.

These attacks work because they target something no firewall can protect: the natural human tendencies to trust, help, and comply with authority.

Traditional hacking targets systems. Social engineering targets people.

| Technical Attack | Social Engineering Attack |
| --- | --- |
| Exploits software vulnerability | Exploits human trust |
| Blocked by security tools | Bypasses security tools |
| Requires technical skill | Requires psychological skill |
| Can be patched | Can't be "patched" |
| Detected by automated systems | Often undetected |

The most sophisticated security infrastructure becomes worthless when an employee willingly provides credentials, disables controls, or transfers funds because a convincing attacker asked them to.

Social engineers don’t use mind control. They leverage well-documented cognitive biases that affect everyone:

Authority: People comply with perceived authority figures. An email appearing to come from the CEO requesting an urgent wire transfer works because employees are conditioned to follow executive directives without question.

Urgency: Time pressure short-circuits rational analysis. "Your account will be locked in 30 minutes" or "This deal closes today" creates panic that overrides caution.

Reciprocity: When someone does something for us, we feel obligated to return the favor. An attacker who "helps" with a fake IT issue may ask for credentials in return.

Social proof: We assume actions are correct if others are doing them. "Everyone in your department has already updated their credentials" makes compliance feel normal.

Liking: We're more likely to comply with requests from people we like. Attackers build rapport, find common interests, and mirror communication styles to create artificial trust.

Phishing is the most common attack vector: fraudulent emails impersonate trusted entities (banks, vendors, colleagues) to steal credentials or deploy malware.

How it works:

  1. Attacker researches target organization
  2. Creates convincing email mimicking trusted sender
  3. Includes malicious link or attachment
  4. Victim clicks, providing credentials or installing malware

Real example: In 2020, Twitter employees received calls from attackers posing as internal IT support. The callers directed employees to a phishing site that captured their credentials, leading to the compromise of high-profile accounts including Barack Obama and Elon Musk.

Spear phishing is targeted phishing focused on specific individuals, using personal information to increase credibility.

Key differences from generic phishing:

  • References specific projects, colleagues, or recent activities
  • Appears to come from known contacts
  • Contains accurate organizational details
  • Tailored to victim’s role and responsibilities

Whaling is spear phishing aimed at executives ("whales") with access to significant funds or sensitive decisions.

Real example: In 2016, FACC, an Austrian aerospace company, lost €50 million when attackers convinced finance staff that the CEO had authorized emergency wire transfers for a confidential acquisition. Both the CEO and CFO were fired.

Vishing (voice phishing) covers phone-based attacks in which callers impersonate IT support, executives, government officials, or other trusted entities.

Common pretexts:

  • “IT helpdesk calling about a security issue”
  • “This is HR verifying your benefits information”
  • “Your bank’s fraud department has detected suspicious activity”

Smishing uses text messages, leveraging the immediacy and perceived legitimacy of SMS.

Why it’s effective:

  • People trust text messages more than email
  • Mobile screens hide suspicious URL details
  • SMS feels more personal and urgent
  • Links can appear as shortened URLs

Pretexting means creating a fabricated scenario to establish trust before making the actual request.

Example scenario: An attacker calls reception claiming to be from the IT department. They explain they’re troubleshooting an issue affecting several departments and need to verify some information. After building rapport over several calls about “resolving” the fake issue, they request credentials to “complete the fix.”

Baiting uses physical or digital "bait" to deliver malware or capture credentials.

Physical baiting: Leaving infected USB drives in parking lots, lobbies, or conference rooms labeled “Payroll” or “Confidential”

Digital baiting: Offering free software, games, or media that contains malware

Tailgating means gaining physical access by following authorized personnel through secured doors.

How it works: An attacker carrying boxes approaches a badge-protected door just as an employee exits. Social convention makes it awkward to demand credentials from someone who appears to belong, so the employee holds the door.

In the 2011 RSA breach, attackers sent phishing emails to small groups of employees with the subject line "2011 Recruitment Plan" and a malicious Excel attachment. One employee retrieved the email from their junk folder and opened it.

Result: Attackers gained access to RSA’s SecurID authentication system, ultimately affecting defense contractors and government agencies using RSA tokens.

Lesson: Technical controls (spam filtering) worked, but human curiosity defeated them.

In the 2014 Sony Pictures breach, attackers used spear phishing emails targeting executives with messages that appeared to come from Apple about ID verification.

Result: Massive data breach exposing unreleased films, employee data, executive emails, and confidential business information. Estimated cost: $100+ million.

Lesson: Even tech-savvy organizations are vulnerable to well-crafted social engineering.

In 2015, attackers impersonated Ubiquiti Networks executives in emails requesting wire transfers to overseas accounts for a supposed acquisition.

Result: $46.7 million stolen. Some funds recovered, but significant losses remained.

Lesson: Email-based wire transfer requests require out-of-band verification regardless of apparent sender.

Warning Signs of Social Engineering Attempts

Train employees to recognize these red flags:

  • Sender address doesn’t match claimed identity
  • Unusual urgency or time pressure
  • Requests for sensitive information or unusual actions
  • Grammar and formatting inconsistent with sender’s normal style
  • Links that don’t match expected destinations (hover to check)
  • Unsolicited contact requesting sensitive information
  • Pressure to act immediately
  • Resistance to callback verification
  • Requests to bypass normal procedures
  • Information requests that seem excessive for stated purpose
  • Unfamiliar person requesting access or information
  • Claimed authority that can’t be verified
  • Emotional manipulation (urgency, flattery, intimidation)
  • Requests to circumvent security procedures

Technology can’t stop social engineering, but it can reduce attack surface:

Email security:

  • Advanced threat detection for phishing
  • DMARC, DKIM, SPF for sender verification
  • Warning banners for external emails
  • Link rewriting and sandboxing

Access controls:

  • Multi-factor authentication everywhere
  • Principle of least privilege
  • Separate credentials for sensitive systems
  • Physical access controls and visitor management

Policies that create friction for attackers:

Verification requirements:

  • Out-of-band confirmation for wire transfers
  • Callback procedures for sensitive requests
  • Identity verification for help desk calls
  • Visitor check-in and escort policies

Escalation paths:

  • Clear procedures for reporting suspicious contacts
  • No-retaliation policy for false positives
  • Security team contact information readily available

The most critical defense layer:

Effective training includes:

  • Recognition of attack techniques
  • Psychological awareness (understanding why we’re vulnerable)
  • Practical exercises (simulated phishing)
  • Clear reporting procedures
  • Regular reinforcement (not annual checkbox training)

Measure effectiveness through:

  • Phishing simulation click rates
  • Suspicious activity reporting rates
  • Time to report potential incidents
  • Post-incident analysis of successful attacks

Policies and training matter, but culture determines outcomes.

Executives must visibly follow security procedures. When the CEO ignores policies, employees conclude security isn’t actually important.

Celebrate employees who report suspicious activity, even false positives. The employee who reports 10 suspicious emails (including 9 that were legitimate) is protecting the organization. The employee who never reports anything is probably missing real threats.

Employees who fall for attacks should receive support and additional training, not punishment. Fear of blame drives concealment, which extends attacker access and increases damage.

Security awareness isn’t a training event. It’s an ongoing conversation. Regular updates about current threats, recent incidents (anonymized), and emerging techniques keep security top-of-mind.

When attacks succeed (and eventually they will):

  1. Contain: Isolate affected systems and accounts
  2. Preserve: Don't delete evidence (logs, emails, files)
  3. Report: Notify security team immediately
  4. Document: Record timeline and actions taken

Then investigate:

  • Determine attack scope and affected systems
  • Identify how the attacker gained initial access
  • Assess what information was accessed or stolen
  • Document findings for potential legal proceedings

Then recover and follow up:

  • Reset affected credentials
  • Remediate compromised systems
  • Address procedural gaps that enabled the attack
  • Update training based on lessons learned
  • Consider notification obligations (legal, regulatory)

Social engineering attacks succeed because they target human nature, not technology. The same traits that make us good colleagues, like trust, helpfulness, and respect for authority, become vulnerabilities when exploited by skilled attackers.

Defense requires layered approaches: technical controls to reduce attack surface, procedures to verify sensitive requests, training to build recognition skills, and culture to encourage vigilance without creating paranoia.

Your employees will always be your greatest vulnerability. With proper training and culture, they can also become your strongest defense.


Want to experience social engineering attack simulations firsthand? Try our free interactive security exercises and practice identifying threats in realistic scenarios.

Email Security Training: Protecting Your Organization from Email-Based Threats

Email security training - protected envelope with shield representing secure email practices

Email remains the primary attack vector. Despite decades of security investment, 91% of cyber attacks still begin with an email. Your employees receive these attacks daily, and a single click can compromise your entire organization.

Email security training transforms employees from potential victims into active defenders. When your workforce recognizes phishing attempts, verifies suspicious requests, and reports threats quickly, email-based attacks fail regardless of their sophistication.

Technical email security has improved. Spam filters catch obvious threats. Secure email gateways block known malicious domains. AI-powered solutions detect anomalies. Yet attacks keep succeeding.

The reason is simple: attackers adapt faster than technology. When filters block one tactic, attackers develop another. When detection catches patterns, attackers change patterns. The arms race between attackers and technology never ends.

Trained employees provide a different kind of defense. They apply judgment, recognize context, and identify threats that evade technical controls. A well-crafted spear phishing email might bypass every filter, but an employee who knows to verify unexpected requests stops the attack anyway.

| Attack Type | Average Cost | Frequency | Primary Target |
| --- | --- | --- | --- |
| Business Email Compromise | $125,000+ | Daily attempts | Finance, Executive |
| Ransomware (via email) | $1.85 million | Growing rapidly | All employees |
| Credential Theft | $4.5 million (breach) | Constant | IT, Administrative |
| Data Exfiltration | Varies widely | Regular attempts | Data handlers |

These costs don’t include reputation damage, customer loss, or regulatory penalties. A single successful email attack often causes cascading harm far beyond the initial compromise.

Mass phishing casts a wide net, hoping some percentage of recipients click. These attacks mimic:

  • Account alerts (“Your password expires today”)
  • Shipping notifications (“Your package couldn’t be delivered”)
  • Financial warnings (“Unusual activity detected”)
  • IT requests (“Verify your credentials”)

While less sophisticated than targeted attacks, volume ensures success. If 1% of employees click and you have 1,000 employees, that’s 10 compromised accounts from a single campaign.

Targeted phishing uses research to create convincing messages for specific individuals. Attackers study LinkedIn profiles, company announcements, and social media to craft relevant lures.

A spear phishing email might reference:

  • Recent company news or projects
  • Specific colleagues by name
  • Actual vendors or partners
  • Real business processes

This personalization dramatically increases success rates compared to mass phishing.

BEC attacks impersonate trusted parties to manipulate employees into taking harmful actions, typically involving money or data.

Common BEC scenarios:

  • CEO fraud: Attacker poses as executive requesting urgent wire transfer
  • Vendor impersonation: Fake invoice with changed payment details
  • Attorney impersonation: Pressure for immediate action on “confidential” matter
  • Data theft: Request for employee records or financial information

BEC attacks cost organizations billions annually and often bypass technical controls entirely because they contain no malware or malicious links.

Credential theft attacks harvest login credentials through:

  • Fake login pages mimicking real services
  • “Password reset” requests that capture current credentials
  • “Account verification” forms requesting sensitive data

Stolen credentials enable further attacks, from email account takeover to network compromise.

Email delivers malware through:

  • Malicious attachments (documents, archives, executables)
  • Links to drive-by download sites
  • Embedded content that exploits vulnerabilities

Once malware executes, attackers gain a foothold for ransomware deployment, data theft, or persistent access.

Train employees to examine emails critically:

Sender verification

  • Check actual email address, not just display name
  • Verify domain spelling (paypa1.com vs paypal.com)
  • Question unexpected emails from known contacts

Content red flags

  • Urgency demanding immediate action
  • Threats of negative consequences
  • Requests for credentials or sensitive data
  • Generic greetings instead of personal address
  • Grammar and spelling errors (though sophisticated attacks avoid these)

Link safety

  • Hover to preview destination before clicking
  • Verify URLs match expected destinations
  • Watch for misleading link text
  • Never enter credentials after clicking email links

Attachment caution

  • Question unexpected attachments
  • Be wary of uncommon file types
  • Enable protected view for Office documents
  • Report suspicious attachments before opening

Help employees understand (at a basic level) how email authentication works:

  • SPF, DKIM, DMARC: Technical standards that verify sender legitimacy
  • Why spoofing still works: Attackers use lookalike domains that pass authentication
  • What employees should do: Verify through independent channels, not email alone
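Those authentication records live in public DNS, so they are easy to inspect directly. Below is a minimal sketch using the third-party dnspython package (an assumption on my part, not something the training itself requires); it simply prints whatever SPF and DMARC policies a domain publishes.

```python
import dns.resolver  # third-party "dnspython" package (assumed installed)

def txt_records(name: str) -> list[str]:
    """Return the TXT records published at a DNS name, or an empty list."""
    try:
        answers = dns.resolver.resolve(name, "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return []
    return [b"".join(record.strings).decode() for record in answers]

def auth_records(domain: str) -> dict:
    """Collect the SPF record (on the domain) and the DMARC policy (on _dmarc.<domain>)."""
    spf = [r for r in txt_records(domain) if r.lower().startswith("v=spf1")]
    dmarc = [r for r in txt_records(f"_dmarc.{domain}") if r.lower().startswith("v=dmarc1")]
    return {"spf": spf, "dmarc": dmarc}

# Example (requires network access):
# print(auth_records("paypal.com"))
```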

Establish clear guidelines:

Never:

  • Send passwords or credentials via email
  • Click links in unexpected security alerts
  • Open attachments from unknown senders
  • Trust caller ID or sender names alone
  • Bypass verification procedures due to urgency

Always:

  • Verify unexpected requests through separate channels
  • Report suspicious emails even if uncertain
  • Use bookmarks or type URLs directly for sensitive sites
  • Confirm wire transfer or payment changes by phone
  • Check with IT security about questionable emails

Establish specific verification procedures:

Wire transfer requests:

  1. Call requester using known number (not from email)
  2. Verify authorization through documented approval chain
  3. Confirm account details independently
  4. Document verification steps

Vendor payment changes:

  1. Contact vendor using existing relationship contact
  2. Verify through multiple methods before implementing
  3. Implement waiting period for payment changes
  4. Flag and review all payment detail modifications

Credential requests:

  1. Never provide passwords via email regardless of sender
  2. Report all credential requests to IT security
  3. Navigate to sites directly rather than through email links
  4. Contact IT through known channels to verify legitimacy

Regular phishing simulations test employee recognition in realistic scenarios. Effective simulation programs:

  • Use varied attack types (different lures, tactics, sophistication levels)
  • Test all employees, including executives
  • Provide immediate feedback when employees click
  • Track progress over time
  • Focus on education, not punishment

Simulations build practical recognition skills that passive training cannot develop.

Hands-on exercises where employees practice:

  • Identifying phishing versus legitimate emails
  • Analyzing headers and sender information
  • Making decisions under realistic conditions
  • Reporting suspicious messages

Interactive training creates stronger learning than videos or documents alone.

Examine actual attacks to understand:

  • How sophisticated attacks unfold
  • Why victims fell for schemes
  • What warning signs existed
  • How similar attacks can be prevented

Real examples make abstract threats concrete and memorable.

Deliver training at relevant moments:

  • Education immediately after clicking simulation
  • Reminders during high-risk periods
  • Updates when new threats emerge
  • Reinforcement tied to actual email activity

Timely training maximizes relevance and retention.

Building an Email Security Training Program

Establish baseline through:

  • Initial phishing simulation to measure click rates
  • Survey to assess current knowledge
  • Review of past email security incidents
  • Identification of highest-risk roles

Deploy core email security education:

  • Email threat landscape overview
  • Recognition skills for common attacks
  • Reporting procedures and resources
  • Verification process training

All employees complete baseline training before advanced modules.

Launch regular phishing simulations:

  • Monthly simulations for all employees
  • Varied difficulty and attack types
  • Immediate feedback and education
  • Progress tracking and reporting

Simulations should feel like real attacks, not obvious tests.

Provide deeper training for specific needs:

  • Role-specific threat training (finance, executive, IT)
  • Emerging threat updates
  • Scenario-based exercises
  • Refresher training for struggling employees

Embed email security into organizational culture:

  • Recognition for reporting
  • Regular security communications
  • Leadership participation and messaging
  • Continuous improvement based on metrics

Measuring Email Security Training Effectiveness

| Metric | Baseline | Target | Excellent |
| --- | --- | --- | --- |
| Phishing click rate | 20-35% | Under 10% | Under 5% |
| Reporting rate | 10-20% | Over 50% | Over 70% |
| Time to report | Days | Hours | Under 1 hour |
| Repeat clickers | Common | Rare | Very rare |

Beyond these core numbers, also track:

  • Training completion rates
  • Assessment scores
  • Employee confidence levels
  • Incident reduction
  • Near-miss reports

Track improvement over time:

  • Click rate changes across simulations
  • Reporting rate growth
  • Response time improvements
  • Risk reduction across the organization
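If your simulation tool can export per-campaign counts, tracking these trends is simple arithmetic. The sketch below is illustrative only: the campaign names and numbers are made up, and real figures would come from your own exports.

```python
from dataclasses import dataclass

@dataclass
class Campaign:
    name: str
    recipients: int
    clicked: int
    reported: int

def summarize(campaigns: list[Campaign]) -> None:
    """Print click and reporting rates per campaign so trends are easy to see."""
    for c in campaigns:
        click_rate = 100 * c.clicked / c.recipients
        report_rate = 100 * c.reported / c.recipients
        print(f"{c.name}: click {click_rate:.1f}%, report {report_rate:.1f}%")

# Hypothetical quarterly results moving from baseline toward the targets above.
summarize([
    Campaign("Q1", recipients=1000, clicked=280, reported=150),
    Campaign("Q2", recipients=1000, clicked=170, reported=340),
    Campaign("Q3", recipients=1000, clicked=80, reported=560),
])
```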

Finance teams face the highest-value email attacks:

Focus areas:

  • BEC and CEO fraud recognition
  • Invoice fraud detection
  • Payment change verification
  • Wire transfer security procedures

Simulations should include:

  • Fake executive requests
  • Vendor impersonation attempts
  • Urgency-based payment demands
  • Account detail change requests

Executives are prime targets for whaling attacks:

Focus areas:

  • High-value target awareness
  • Sophisticated attack recognition
  • Verification importance (even for “urgent” requests)
  • Leading by example

Simulations should include:

  • Board member impersonation
  • Legal urgency scenarios
  • Confidential matter requests
  • Time-sensitive authorization demands

IT employees face targeted attacks seeking system access:

Focus areas:

  • Credential theft recognition
  • System access request verification
  • Vendor and support impersonation
  • Insider threat awareness

Simulations should include:

  • Fake support requests
  • Credential reset attempts
  • System access demands
  • Technical support impersonation

Universal email security skills everyone needs:

  • Basic phishing recognition
  • Link and attachment safety
  • Reporting procedures
  • Password protection

Training works best alongside technical controls:

  • Email authentication (SPF, DKIM, DMARC)
  • Advanced threat protection
  • Link scanning and sandboxing
  • Attachment filtering
  • Impersonation detection

Process controls add friction for attackers:

  • Multi-person approval for significant transactions
  • Out-of-band verification requirements
  • Payment change waiting periods
  • Documented authorization procedures

Reporting infrastructure closes the loop:

  • Easy reporting mechanisms (button in email client)
  • Clear escalation procedures
  • Feedback loops for reporters
  • Integration with security operations

Problem: Simulations designed to trick employees rather than train them. Impossible-to-detect tests create resentment without building skills.

Solution: Design simulations that challenge but are detectable with proper attention. The goal is education, not embarrassment.

Problem: Employees who click face public shaming, job consequences, or repeated remediation. This drives behavior underground rather than improving it.

Solution: Treat clicks as learning opportunities. Focus on improvement, provide support, and celebrate progress rather than punishing failure.

Problem: Annual training creates brief awareness that fades within weeks. Employees forget lessons before they encounter real attacks.

Solution: Maintain continuous touchpoints through monthly simulations, regular tips, and ongoing reinforcement.

Problem: Training uses examples irrelevant to employees’ actual work. Accountants need different scenarios than engineers.

Solution: Customize simulations and training to reflect real threats facing specific roles and your industry.

Problem: Training emphasizes recognition but neglects reporting. Employees identify threats but don’t escalate them appropriately.

Solution: Make reporting easy, celebrate reporters, and track reporting metrics alongside click rates.

Email remains the primary path attackers use to reach your employees. Technical controls block many threats but cannot stop sophisticated attacks that exploit human judgment. Email security training fills this gap.

Effective programs combine knowledge (understanding threats), practice (realistic simulations), and culture (encouraging reporting). They treat employees as partners in security rather than problems to be managed.

The investment pays returns beyond security metrics. Organizations with strong email security training experience fewer incidents, faster detection when attacks occur, reduced breach impact, and employees who feel empowered rather than victimized.

Your employees will receive malicious emails. With proper training, they’ll recognize and report them instead of clicking.


Build practical email security skills through hands-on practice. Try our free phishing simulation exercises and experience interactive training that develops real threat recognition abilities.

How to Spot Phishing: The Visual and Technical Signs That Reveal Fraud

Phishing detection - magnifying glass over email revealing fraud

You know what phishing looks like. Misspelled words, suspicious links, Nigerian princes. You’ve done the training. You’ve passed the tests.

And yet.

Somewhere, right now, someone who knows all of this is clicking a link they shouldn’t. Not because they’re careless or stupid, but because they’re busy, distracted, and the email looked just legitimate enough.

Phishing detection isn’t about knowledge. It’s about habits that kick in automatically, even when you’re not thinking clearly.

Most phishing fails a quick sanity check. The problem is we don’t do the check. We see an email, we react, we click. The trick is building a pause into that reaction:

  1. Was this expected? Unexpected requests for credentials, payments, or sensitive data are suspicious by default.

  2. Does the context make sense? An “account locked” email for a service you don’t use is obviously fake. But even for services you do use, did you do anything that would trigger this?

  3. Who sent this? Look at the actual email address, not just the display name. “PayPal Security” from security-paypal@mail-verify.net is not PayPal.

Most phishing attempts fail this 3-second test. The ones that pass deserve closer scrutiny.

URLs are the hardest thing for attackers to fake. Learn to read them.

https://account.paypal.com/login breaks down as:

  • https:// - Protocol (should be HTTPS for any login)
  • account.paypal.com - Domain (this is what matters)
  • /login - Path (less important for legitimacy)

The domain is everything between :// and the next /. Within that domain, read right to left:

  • paypal.com - This is the actual domain (owned by PayPal)
  • account. - This is a subdomain (controlled by whoever owns paypal.com)
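To make the right-to-left rule concrete, here is a minimal Python sketch (standard library only) that pulls the hostname out of a URL and compares a naive guess at its registrable domain against the brand you expect. The two-label heuristic is an assumption that breaks on country-code suffixes like .co.uk; real tooling would consult the Public Suffix List.

```python
from urllib.parse import urlparse

def registrable_domain(url: str) -> str:
    """Naive guess at the registrable domain: the last two labels of the hostname.

    Assumption: illustrative only; production code should use the Public
    Suffix List (for example via the tldextract package).
    """
    host = (urlparse(url).hostname or "").lower()
    labels = host.split(".")
    return ".".join(labels[-2:]) if len(labels) >= 2 else host

def matches_brand(url: str, expected: str) -> bool:
    """True only when the URL's registrable domain is the expected one."""
    return registrable_domain(url) == expected.lower()

print(matches_brand("https://account.paypal.com/login", "paypal.com"))         # True
print(matches_brand("https://paypal.account-verify.com/login", "paypal.com"))  # False
```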

Attackers use several tricks:

Subdomain deception:

  • paypal.account-verify.com - The domain is account-verify.com, not PayPal
  • secure-paypal.com.malicious.net - The domain is malicious.net

Typosquatting:

  • paypai.com (the letter i in place of the lowercase l)
  • paypa1.com (number 1 instead of lowercase l)
  • paypal-secure.com (adding words to legitimate brand)

Homograph attacks:

  • Using characters from different alphabets that look identical
  • pаypal.com using Cyrillic ‘а’ instead of Latin ‘a’
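A rough sketch of how these lookalikes can be flagged automatically, assuming a small allow-list of brands you care about: it reports hostnames containing non-ASCII characters (possible homographs) or sitting one edit away from a protected name (possible typosquats). Real tools use Unicode confusables tables and the Public Suffix List; this is only an illustration.

```python
import unicodedata

PROTECTED = {"paypal.com"}  # hypothetical allow-list; extend with your own brands

def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via the classic dynamic-programming table."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def lookalike_warnings(host: str) -> list[str]:
    """Reasons a hostname resembles a protected brand, if any."""
    warnings = []
    if not host.isascii():
        odd = sorted(unicodedata.name(c, "UNKNOWN") for c in host if not c.isascii())
        warnings.append(f"non-ASCII characters (possible homograph): {odd}")
    for brand in PROTECTED:
        if host != brand and edit_distance(host, brand) <= 1:
            warnings.append(f"one edit away from {brand} (possible typosquat)")
    return warnings

print(lookalike_warnings("paypa1.com"))   # typosquat warning
print(lookalike_warnings("pаypal.com"))   # homograph (plus typosquat) warning: Cyrillic 'а'
print(lookalike_warnings("paypal.com"))   # []
```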

On desktop, hover over links to see their destination before clicking. On mobile, long-press links to preview URLs.

If the displayed text says “www.paypal.com” but the link goes elsewhere, that’s phishing.
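The same hover check can be scripted against an HTML email body. This is a simplified, standard-library-only sketch that flags anchor tags whose visible text names one host while the href points somewhere else; it assumes reasonably well-formed HTML and ignores edge cases a production scanner would handle.

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

def host_of(text: str) -> str:
    """Extract a hostname from a URL-ish string such as 'www.paypal.com'."""
    text = text.strip()
    if "://" not in text:
        text = "http://" + text
    return (urlparse(text).hostname or "").removeprefix("www.")

class LinkAuditor(HTMLParser):
    """Collect anchors whose link text and href point at different hosts."""
    def __init__(self):
        super().__init__()
        self._href = None
        self._text = []
        self.mismatches = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag != "a" or self._href is None:
            return
        shown = "".join(self._text).strip()
        if "." in shown and host_of(shown) and host_of(shown) != host_of(self._href):
            self.mismatches.append((shown, self._href))
        self._href = None

auditor = LinkAuditor()
auditor.feed('<a href="https://malicious.net/login">www.paypal.com</a>')
print(auditor.mismatches)  # [('www.paypal.com', 'https://malicious.net/login')]
```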

Email display names can be anything. The actual address matters.

Legitimate:

  • service@paypal.com
  • noreply@email.chase.com

Suspicious:

  • paypal-service@gmail.com
  • support@paypal.security-verify.com
  • alert@paypal.com.suspicious-domain.net
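Here is a tiny sketch of that habit in code, using Python's standard email.utils to separate the display name from the real address and then applying the same naive last-two-labels domain check described earlier (again an assumption, not a full public-suffix lookup).

```python
from email.utils import parseaddr

def sender_matches(from_header: str, expected_domain: str) -> bool:
    """True when the address behind the display name belongs to the expected domain."""
    display_name, address = parseaddr(from_header)
    domain = address.rsplit("@", 1)[-1].lower()
    registrable = ".".join(domain.split(".")[-2:])  # naive heuristic; see earlier caveat
    return registrable == expected_domain.lower()

print(sender_matches('"PayPal Security" <service@paypal.com>', "paypal.com"))               # True
print(sender_matches('"PayPal Security" <security-paypal@mail-verify.net>', "paypal.com"))  # False
print(sender_matches('"Chase Alerts" <noreply@email.chase.com>', "chase.com"))              # True
```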

Urgency without specificity:

  • “Your account will be suspended in 24 hours” - What account? Why?
  • Legitimate services provide specific details about issues

Generic greetings:

  • “Dear Customer” or “Dear User” when legitimate emails would use your name

Grammar and formatting:

  • Legitimate companies have professional copywriters and QA processes
  • Errors suggest rushed, non-professional origin

Mismatched branding:

  • Wrong logo colors, fonts, or layouts
  • Images that look stretched or pixelated
  • Footer information that doesn’t match the claimed sender

Be especially cautious of:

  • Unexpected attachments from anyone
  • File types that can execute code (.exe, .js, .html, .zip with executables)
  • “Invoice” or “Document” attachments you didn’t expect
  • Password-protected files (attackers use this to bypass security scanners)
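For illustration, a short sketch of an automated version of this caution, using only the standard library: it flags the file types called out above and peeks inside ZIP archives for executables. The extension list is deliberately minimal and would need tuning in practice.

```python
import zipfile
from pathlib import Path

RISKY_EXTENSIONS = {".exe", ".js", ".html"}  # from the list above; extend as needed

def attachment_warnings(path: str) -> list[str]:
    """Reasons an attachment deserves extra scrutiny before opening."""
    warnings = []
    suffix = Path(path).suffix.lower()
    if suffix in RISKY_EXTENSIONS:
        warnings.append(f"file type can execute code: {suffix}")
    if suffix == ".zip" and zipfile.is_zipfile(path):
        with zipfile.ZipFile(path) as archive:
            inner = [name for name in archive.namelist()
                     if Path(name).suffix.lower() in RISKY_EXTENSIONS]
            if inner:
                warnings.append(f"archive contains executable content: {inner}")
    return warnings

# Example usage with a hypothetical downloaded attachment:
# print(attachment_warnings("Invoice_2024.zip"))
```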

When you reach a website (whether through email link or direct navigation), verify legitimacy before entering credentials.

HTTPS with a valid certificate is necessary but not sufficient. Attackers get SSL certificates too.

What to check:

  • Click the padlock icon → View certificate details
  • Verify the certificate is issued to the expected organization
  • Check the certificate isn’t expired

What certificates DON’T tell you:

  • That the site is legitimate
  • That your data is safe
  • That you should trust the organization

A phishing site can have a perfectly valid SSL certificate.
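If you want to see those details outside the browser, the standard-library sketch below connects to a host over TLS and prints who the certificate was issued to, who issued it, and when it expires. As the list above stresses, a clean result proves only that the connection is encrypted, not that the site is legitimate.

```python
import socket
import ssl

def certificate_summary(host: str, port: int = 443) -> dict:
    """Fetch the server certificate and return its subject, issuer, and expiry."""
    context = ssl.create_default_context()  # verifies the chain and hostname
    with socket.create_connection((host, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    return {
        "subject": dict(pair[0] for pair in cert.get("subject", ())),
        "issuer": dict(pair[0] for pair in cert.get("issuer", ())),
        "expires": cert.get("notAfter"),
    }

# Example (requires network access); a phishing site can pass this check too.
# print(certificate_summary("paypal.com"))
```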

Compare against your memory of the legitimate site:

  • Are colors exactly right?
  • Is the logo correct?
  • Is the layout what you expect?
  • Do fonts look professional?

When in doubt, navigate directly to the site by typing the URL or using a bookmark. Don’t trust links.

Phishing sites often only implement the pages needed for credential theft.

Signs of a fake:

  • Footer links that go nowhere or to unrelated pages
  • “Forgot password” or “Create account” links that don’t work
  • Missing functionality that the real site would have
  • Error messages that don’t make sense

Check when a domain was registered:

  • Legitimate company domains are typically years old
  • Phishing domains are often registered days or weeks before attacks

Use the whois command or an online lookup service to check domain age.
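Scripted, that check might look like the sketch below. It assumes a whois binary is on your PATH and that the registry labels the field something like "Creation Date" (wording varies widely between registries), so treat it as a starting point rather than a reliable parser.

```python
import re
import subprocess

def creation_date(domain: str) -> str | None:
    """Shell out to the system `whois` command and pull out the creation date."""
    record = subprocess.run(["whois", domain], capture_output=True, text=True).stdout
    # Registries label this field differently; the pattern below covers common cases.
    match = re.search(r"(?:Creation Date|created|Registered on):\s*(\S+)",
                      record, re.IGNORECASE)
    return match.group(1) if match else None

# A registration date from last week on a "bank" domain is a strong warning sign.
# print(creation_date("paypal.com"))
```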

Search certificate transparency logs for the domain to see:

  • When certificates were issued
  • How many certificates exist for the domain
  • Whether the certificate history matches expectations
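One way to script this is to query a public certificate transparency search service. The sketch below assumes the crt.sh JSON endpoint and the third-party requests library; both are external dependencies this post doesn't vouch for, so check their current documentation before relying on them.

```python
import requests  # third-party library (assumed installed)

def certificate_history(domain: str) -> list[dict]:
    """Query crt.sh (assumed endpoint) for certificates logged for a domain."""
    response = requests.get(
        "https://crt.sh/",
        params={"q": domain, "output": "json"},
        timeout=10,
    )
    response.raise_for_status()
    return [
        {"issued": entry.get("not_before"), "issuer": entry.get("issuer_name")}
        for entry in response.json()
    ]

# A domain whose first certificate appeared last week is probably not your bank.
# for cert in certificate_history("example.com")[:5]:
#     print(cert)
```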

For technical users:

  • Inspect network requests to see where data is actually sent
  • Check for suspicious JavaScript
  • Look at form action URLs

If You Spot a Phishing Email

  1. Don't click anything in the suspicious message
  2. Report it - Forward to your IT security team or use the report phishing button
  3. Delete it - Remove from inbox to avoid accidental future clicks

If You Clicked But Didn’t Enter Information

  1. Close the tab immediately
  2. Clear your browser cache
  3. Run a malware scan
  4. Monitor for unusual activity

If You Entered Credentials

  1. Change password immediately on the legitimate site
  2. Enable 2FA if not already active
  3. Check for unauthorized activity in the affected account
  4. Report the incident to IT security
  5. Monitor related accounts - if you reuse passwords, change those too

Make verification automatic, not exceptional:

  • Always check sender addresses
  • Always hover over links before clicking
  • Always navigate directly for sensitive actions

Assume unexpected requests are suspicious until verified:

  • Banks don’t email asking for credentials
  • Tech support doesn’t call unsolicited
  • Legitimate urgency comes with verifiable specifics

If a request might be legitimate:

  • Call the company using a number from their official website (not from the email)
  • Navigate directly to the service and check your account
  • Contact the purported sender through a known-good method

For organizations building phishing detection capabilities:

Regular simulated phishing campaigns:

  • Establish baseline click rates
  • Provide immediate education when employees click
  • Track improvement over time
  • Adjust difficulty as skills improve

Make reporting easy:

  • One-click phishing report buttons in email clients
  • No penalties for reporting false positives
  • Feedback on reported items to reinforce good behavior

Ongoing touchpoints:

  • Brief reminders about current phishing trends
  • Examples of real attacks targeting your industry
  • Recognition for employees who catch and report attempts

Here’s what I’ve learned watching thousands of people go through phishing simulations: the ones who catch attacks aren’t the most security-aware. They’re the ones who’ve built checking into their workflow.

They hover over every link. Not because they’re suspicious of that specific email, but because that’s just what they do. They verify sender addresses the way they check their mirrors before changing lanes. Automatic.

The goal isn’t to become paranoid. It’s to make verification so routine that you don’t have to think about it.

Most phishing attempts are obvious once you look. The trick is remembering to look when you’re tired, rushed, or just trying to get through your inbox before lunch.


Build detection habits through practice, not just training. Try our interactive security exercises with phishing scenarios designed to test your reflexes, not just your knowledge.