
Callback Phishing (TOAD): No Links, All Danger

Callback phishing attack flow showing a fake invoice email leading to a phone call and remote access compromise

You get an email from “Norton LifeLock” confirming your annual renewal at $499.99. You did not buy Norton LifeLock. There is no link to click, no attachment to open. Just a phone number to call if “this charge was made in error.”

So you call it. The person who answers sounds professional, patient, and genuinely helpful. They ask you to visit a website and download a “cancellation tool” so they can process your refund. What you are actually downloading is remote access software. Within minutes, the person on the other end controls your machine.

No malicious link was clicked. No attachment was opened. Your email security caught nothing because there was nothing to catch.

This is callback phishing, and it is one of the fastest-growing attack types in corporate environments.

Credential Stuffing: How Leaked Passwords Work

Credential stuffing attack visualization showing a breached database, an automated bot, and multiple login forms being tested

In January 2024, a security team at a mid-size SaaS company noticed something odd. Over a single weekend, their authentication logs showed 340,000 failed login attempts across employee and customer-facing portals. The attempts came from thousands of IP addresses, rotating every few requests. Buried in the noise: 47 successful logins.

None of those 47 accounts had been brute-forced. The attackers already had the correct passwords. They had purchased a batch of stolen credentials from a 2023 breach of an unrelated service, and 47 employees had used the same email and password combination for both.

This is credential stuffing. Not a sophisticated exploit. Not a zero-day. Just a bet that people reuse passwords, and that bet pays off roughly 0.1% to 2% of the time. At scale, that is enough.
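The attack leaves a distinctive fingerprint in authentication logs: a large, rotating pool of source IPs, each making only a handful of attempts so it stays under per-IP lockout thresholds, spread across many distinct accounts. As a rough illustration, here is a minimal detection heuristic in Python; the function name, event format, and thresholds are all assumptions for the sketch, not a production rule set:

```python
from collections import defaultdict

def flag_credential_stuffing(events, ip_threshold=1000, max_attempts_per_ip=5):
    """Flag an auth-log window as likely credential stuffing.

    events: iterable of (ip, username, success) tuples.
    Heuristic (illustrative thresholds; tune for your own traffic):
    many distinct source IPs, almost all of them low-volume,
    targeting many distinct usernames.
    """
    attempts_by_ip = defaultdict(int)
    usernames = set()
    for ip, username, success in events:
        attempts_by_ip[ip] += 1
        usernames.add(username)

    distinct_ips = len(attempts_by_ip)
    if distinct_ips == 0:
        return False
    low_volume_ips = sum(1 for n in attempts_by_ip.values()
                         if n <= max_attempts_per_ip)
    # Stuffing signature: a huge IP pool where nearly every IP stays
    # below lockout limits, and the username spread is wide.
    return (distinct_ips >= ip_threshold
            and low_volume_ips / distinct_ips > 0.9
            and len(usernames) > distinct_ips * 0.5)
```

A single brute-force source hammering one account fails every branch of this check, which is exactly why attackers rotate IPs in the first place.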

Insider Threat Awareness Training for Employees

Insider threat visualization showing an authorized employee with access badge alongside a data exfiltration timeline

A systems administrator at a defense contractor copies classified schematics to a personal USB drive over the course of three months. His badge still works. His credentials are valid. He passes the same security checks as everyone else. Nothing in the firewall logs, intrusion detection system, or email gateway catches a thing.

When the breach is finally discovered, it is not because a tool flagged it. A coworker noticed he was accessing project folders he had no business being in and mentioned it to their manager. That conversation, uncomfortable as it was, prevented months of additional exfiltration.

External attackers need to break in. Insiders are already inside.

Ransomware Awareness Training for Employees

Ransomware attack visualization showing encrypted files, a locked padlock, and a ransom note countdown timer

A finance team member opens a PDF labeled “Q4 Invoice Reconciliation.” The file came from what looks like a known vendor. Thirty seconds later, file extensions on her desktop start changing. Documents she opened yesterday now end in .locked. Programs freeze. A full-screen message appears with a Bitcoin address and a 48-hour countdown.

She pulls her Ethernet cable. Calls IT. Does not touch the power button.

That instinct saved her company roughly two weeks of recovery time, because she had trained for this exact moment.
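The behavior she spotted, dozens of files renamed to a ransom-style extension within seconds, is also a pattern endpoint tooling can watch for. The sketch below shows the idea in Python; the extension list, window, and threshold are illustrative assumptions, and a real deployment would feed it events from a file-system monitor rather than manual calls:

```python
from collections import deque

# Illustrative extensions only; real ransomware families use many others.
SUSPICIOUS_EXTENSIONS = {".locked", ".encrypted", ".crypt"}

def ransomware_burst_detector(window_seconds=30, burst_threshold=50):
    """Return a callback that flags a burst of renames to ransom-style
    extensions. Feed it (timestamp, new_path) events from a file
    monitor; thresholds here are illustrative, not tuned values.
    """
    recent = deque()

    def on_rename(timestamp, new_path):
        if not any(new_path.endswith(ext) for ext in SUSPICIOUS_EXTENSIONS):
            return False
        recent.append(timestamp)
        # Drop events that have aged out of the sliding window.
        while recent and timestamp - recent[0] > window_seconds:
            recent.popleft()
        return len(recent) >= burst_threshold

    return on_rename
```

A tool like this buys minutes, not safety; the human response she trained for, isolate the machine and call IT, is still what limits the damage.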

AI Coding Assistant Security Risks You Can't Ignore

AI coding assistant security risks shown as a code editor with a prompt injection attack visualization

Your developers are 10x more productive with AI coding assistants. So are the attackers targeting your organization.

In November 2025, Anthropic disclosed what security researchers had feared: the first documented case of an AI coding agent being weaponized for a large-scale cyberattack. A Chinese state-sponsored threat group called GTG-1002 used Claude Code to execute over 80% of a cyber espionage campaign autonomously. The AI handled reconnaissance, exploitation, credential harvesting, and data exfiltration across more than 30 organizations with minimal human oversight. This incident illustrates the broader agentic AI security risks that OWASP now tracks in a dedicated Top 10 list.

This wasn’t a theoretical exercise. It worked.

AI coding assistants have become standard in development workflows. GitHub Copilot. Amazon CodeWhisperer. Claude Code. Cursor. These tools autocomplete functions, debug errors, and write entire modules from natural language descriptions. Developers who resist them fall behind. Organizations that ban them lose talent.

But every line of code these assistants suggest passes through external servers. Every context window they analyze might contain secrets. Every prompt they accept could be an attack vector. The productivity gains are real. So are the risks.
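One concrete mitigation for the secrets-in-context risk is a pre-flight scan of any snippet before it leaves the machine. The sketch below is a minimal Python example with a deliberately tiny, assumed pattern set; dedicated scanners such as gitleaks or trufflehog ship far more comprehensive rules:

```python
import re

# Illustrative patterns only, not a complete rule set.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key_header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "generic_api_key": re.compile(
        r"(?i)\b(?:api[_-]?key|secret)\s*[:=]\s*['\"][^'\"]{16,}['\"]"),
}

def find_secrets(source: str):
    """Return (pattern_name, matched_text) pairs found in a snippet,
    so it can be blocked or redacted before reaching an assistant."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(source):
            hits.append((name, match.group(0)))
    return hits
```

Wired into an editor hook or proxy, a check like this turns "every context window might contain secrets" from an open exposure into a gate the organization controls.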