Data Classification Training for Employees

[Image: Four data classification folders arranged by sensitivity level from public to restricted, each with a progressively stronger lock symbol]

An account manager at a healthcare company needed to share patient outcome data with a prospective partner. She opened the company’s analytics dashboard, exported a CSV, and emailed it to the partner’s Gmail address. The export included patient names, treatment dates, and billing codes. She did not realize any of this was in the file. She had only wanted the aggregate numbers.

The company discovered the incident two weeks later during a routine DLP review. By then, the email had been forwarded internally at the partner organization. HIPAA breach notification was required. Legal costs, remediation, and fines totaled over $200,000. All because one employee could not tell the difference between aggregate statistics and protected health information in a spreadsheet.

This type of incident happens constantly. Not because employees are careless, but because nobody taught them how to look at data and ask: “What am I actually holding?”
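Training answers that question for humans, but a lightweight pre-share check can catch the obvious cases too. Here is a minimal sketch in Python of the idea: scan a CSV export's headers for columns that look like identifiers or protected health information before the file goes anywhere. The header patterns below are illustrative assumptions, not a complete PHI taxonomy; a real deployment would use the organization's own classification dictionary.

```python
import csv
import re
import sys

# Illustrative header patterns that often indicate identifying or protected
# health information. These are assumptions for the sketch, not a standard.
SENSITIVE_PATTERNS = {
    "patient name": re.compile(r"(patient|first|last|full)[ _]?name", re.I),
    "date of service": re.compile(r"(treatment|service|admission)[ _]?date", re.I),
    "billing code": re.compile(r"(billing|cpt|icd)[ _]?(code)?", re.I),
    "identifier": re.compile(r"(ssn|mrn|dob|email|phone)", re.I),
}

def flag_sensitive_columns(path: str) -> list[tuple[str, str]]:
    """Return (column, reason) pairs for headers matching a sensitive pattern."""
    with open(path, newline="") as f:
        header = next(csv.reader(f), [])
    hits = []
    for column in header:
        for reason, pattern in SENSITIVE_PATTERNS.items():
            if pattern.search(column):
                hits.append((column, reason))
                break
    return hits

if __name__ == "__main__":
    findings = flag_sensitive_columns(sys.argv[1])
    if findings:
        print("Hold this export for review before sharing:")
        for column, reason in findings:
            print(f"  column {column!r} looks like {reason}")
    else:
        print("No obviously sensitive headers found (not a guarantee).")
```

A check like this cannot read intent, but it would have held that export at the door long enough for someone to ask the question.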

Password Security Training That Changes Behavior

[Image: Password security progression, from a broken lock (weak passwords), through a vault (password manager), to an MFA shield with a one-time code]

A financial services firm rolled out its annual password policy update: minimum 12 characters, one uppercase letter, one number, one special character. Employees complied, and on paper security improved. Then a red team engagement three months later found that 38% of employees had chosen variations of “Company2026!” and that nearly half were reusing their corporate password on personal services.

The policy was technically met. The behavior it was supposed to create never materialized.

This pattern repeats across industries. Organizations invest in password rules and compliance checklists, then wonder why credential-based attacks keep succeeding. The problem is not that employees lack awareness. Most people know password reuse is risky. The problem is that knowing something is risky does not automatically produce the alternative behavior.
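Auditing can target the behavior instead of the character classes. The sketch below is a hypothetical audit helper: it normalizes common leetspeak substitutions and rejects any password built around banned tokens such as the company name or the current year. The token list is an assumption you would tailor per organization, and in practice you would run a check like this inside a password filter or against a cracking rig's output rather than on plaintext.

```python
# Undo substitutions attackers and employees both know by heart.
LEET_MAP = str.maketrans("@48301!$5", "aabeoliss")

# Stand-ins: use your real company name, product names, and current year.
BANNED_TOKENS = ("company", "acme", "password", "welcome", "2026")

def normalized(password: str) -> str:
    """Lowercase and reverse common leetspeak substitutions."""
    return password.lower().translate(LEET_MAP)

def is_predictable(password: str) -> bool:
    """True if the password is a trivial mutation of a banned token."""
    for form in {password.lower(), normalized(password)}:
        if any(token in form for token in BANNED_TOKENS):
            return True
    return False

if __name__ == "__main__":
    for pw in ("Company2026!", "C0mp@ny#1", "tr7&Kq!mzV4p"):
        verdict = "rejected" if is_predictable(pw) else "accepted"
        print(f"{pw:16} -> {verdict}")
```

The first two samples satisfy the 12-character policy on paper and still fall to the same guess a red team would make first; only the randomly generated third one survives.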

AI-Powered Phishing: How LLMs Write Better Lures

[Image: An LLM neural network generating targeted phishing emails aimed at multiple victims]

A phishing email arrives in your inbox. It references a project you’re working on, names your manager correctly, mimics the writing style of your IT department, and asks you to verify your credentials after a “suspicious login from São Paulo.” No typos. No awkward phrasing. No generic “Dear Customer” greeting. It reads exactly like a legitimate message from your company.

Two years ago, writing this email required a human attacker who spent hours researching your organization, your role, and your communication patterns. Today, an LLM produces it in seconds. Feed it a few LinkedIn profiles and a sample company email, and it generates dozens of personalized variants, each tailored to a different target, in any language.

This is why traditional phishing detection advice about spotting grammatical errors and suspicious formatting is becoming unreliable. The signals employees were trained to look for are disappearing.
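When grammar stops being a signal, machine-verifiable signals have to carry more weight. The sketch below is illustrative only: it parses a raw message with Python's standard email module and flags it when the From domain is unfamiliar or when the Authentication-Results header stamped by your own mail server does not report SPF, DKIM, and DMARC passes. The trusted-domain list is an assumed placeholder.

```python
from email import message_from_string
from email.utils import parseaddr

# Domains your mail infrastructure legitimately sends from (assumed values).
TRUSTED_DOMAINS = {"example.com", "corp.example.com"}

def suspicious(raw_message: str) -> list[str]:
    """Return header-based reasons to distrust a message."""
    msg = message_from_string(raw_message)
    reasons = []

    _, addr = parseaddr(msg.get("From", ""))
    domain = addr.rsplit("@", 1)[-1].lower() if "@" in addr else ""
    if domain not in TRUSTED_DOMAINS:
        reasons.append(f"From domain {domain!r} is not on the trusted list")

    # Authentication-Results is stamped by the receiving server, not the sender.
    auth = msg.get("Authentication-Results", "").lower()
    for check in ("spf", "dkim", "dmarc"):
        if f"{check}=pass" not in auth:
            reasons.append(f"{check.upper()} did not report a pass")
    return reasons

if __name__ == "__main__":
    raw = (
        "From: IT Support <it-support@examp1e.com>\n"
        "Authentication-Results: mx.example.com; spf=fail; dkim=none\n"
        "Subject: Verify your credentials\n\n"
        "Suspicious login from Sao Paulo..."
    )
    for reason in suspicious(raw):
        print("-", reason)
```

None of this replaces training; it narrows what training has to cover to the messages that pass these checks.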

OWASP Agentic AI Top 10: Security Risks When AI Acts on Its Own

[Image: Interconnected AI agents with a cascading-failure visualization]

An AI agent at a fintech company was tasked with resolving a customer’s billing dispute. It accessed the billing system, issued a refund, then escalated the ticket internally. Along the way it read the customer’s full payment history, forwarded account details to an external logging service it had been configured to use, and modified the customer’s subscription tier without approval. Every action was technically within the permissions it had been granted.

Nobody told the agent to do most of that. It chained together actions it deemed logical. Each step made sense in isolation. Together, they created a data exposure incident that took weeks to untangle.

This is the class of risk the OWASP Agentic AI Top 10 was built to address. Not the vulnerabilities of the language model itself, but the dangers that emerge when AI systems act autonomously across multiple tools, APIs, and data sources.
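A recurring mitigation across that list is constraining what an agent can do without a human in the loop. Below is a minimal, hypothetical sketch of the pattern in Python: every tool call passes through a gate, and actions tagged as sensitive block until a person approves them. The tool names and risk tags are invented for illustration.

```python
from typing import Callable

# Invented risk tags; a real system would derive these from policy.
SENSITIVE = {"issue_refund", "change_subscription", "send_external"}

class ApprovalGate:
    """Wraps agent tool calls; sensitive ones require explicit human approval."""

    def __init__(self, tools: dict[str, Callable[..., str]]):
        self.tools = tools
        self.audit_log: list[str] = []

    def call(self, name: str, **kwargs) -> str:
        if name not in self.tools:
            raise PermissionError(f"tool {name!r} is not granted to this agent")
        if name in SENSITIVE and not self._human_approves(name, kwargs):
            self.audit_log.append(f"DENIED {name} {kwargs}")
            return f"{name} blocked pending approval"
        self.audit_log.append(f"ALLOWED {name} {kwargs}")
        return self.tools[name](**kwargs)

    def _human_approves(self, name: str, kwargs: dict) -> bool:
        # Stand-in for a real approval workflow (ticket, chat prompt, etc.).
        answer = input(f"Agent wants to run {name} with {kwargs}. Approve? [y/N] ")
        return answer.strip().lower() == "y"

if __name__ == "__main__":
    gate = ApprovalGate({
        "read_ticket": lambda ticket_id: f"ticket {ticket_id} contents",
        "issue_refund": lambda amount: f"refunded ${amount}",
    })
    print(gate.call("read_ticket", ticket_id=1042))  # runs freely
    print(gate.call("issue_refund", amount=49.00))   # pauses for a human
```

In the fintech incident above, reading the payment history would have gone through, but the refund and the subscription change would each have stopped for approval.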

Deepfake Social Engineering: How to Train Employees in 2026

[Image: Split view comparing a real person and their AI-generated deepfake clone]

Your CFO joins a video call with the Hong Kong finance team. She asks them to execute a series of wire transfers totaling $25 million. Her face, her voice, her mannerisms. The team complies. The entire call was a deepfake.

This happened to Arup, the British engineering firm, in early 2024. The attackers recreated the CFO and several other executives using publicly available video footage. Every person on that call except the target was synthetic.
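The control that defeats this attack is procedural, not perceptual: no high-value transfer executes on the strength of a call alone. As a toy illustration of that shape, not Arup's actual process, the sketch below holds any transfer above an assumed threshold until a code delivered over a separate, pre-registered channel is confirmed.

```python
import hmac
import secrets

APPROVAL_THRESHOLD = 10_000  # USD; assumed policy value

def send_out_of_band(code: str) -> None:
    # Stand-in for an SMS or callback to a number already on file.
    print(f"[separate channel] verification code: {code}")

class TransferDesk:
    """Toy model: large transfers need a code from a separate channel."""

    def __init__(self):
        self._pending: dict[str, str] = {}

    def request_transfer(self, transfer_id: str, amount: float) -> str:
        if amount < APPROVAL_THRESHOLD:
            return f"{transfer_id}: executed (${amount:,.2f})"
        code = secrets.token_hex(3)
        self._pending[transfer_id] = code
        send_out_of_band(code)  # never delivered to the requesting call
        return f"{transfer_id}: held, awaiting out-of-band confirmation"

    def confirm(self, transfer_id: str, code: str) -> str:
        expected = self._pending.get(transfer_id)
        if expected and hmac.compare_digest(expected, code):
            del self._pending[transfer_id]
            return f"{transfer_id}: executed after verification"
        return f"{transfer_id}: rejected, code mismatch"

if __name__ == "__main__":
    desk = TransferDesk()
    print(desk.request_transfer("WT-001", 25_000_000))
    # The confirmer reads the code from the separate channel,
    # not from anyone on the video call.
```

A deepfake can imitate a face and a voice, but it cannot answer a phone number the attacker does not control.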