LLM security

5 posts with the tag “LLM security”

AI Data Leakage: How Employees Expose Secrets to ChatGPT, Claude, and Copilot

AI data leakage illustration - employee pasting confidential code into a chatbot window with data flowing to external servers

Samsung’s semiconductor division banned ChatGPT in May 2023 after three employees leaked confidential data in under a month. One engineer pasted proprietary source code to debug an error. Another submitted internal meeting notes to generate a summary. A third uploaded chip manufacturing measurements to get yield calculations. Each person was trying to do their job faster. Each left a copy of Samsung’s trade secrets on an OpenAI server.

Within weeks, Apple, JPMorgan, Bank of America, Verizon, Amazon, Goldman Sachs, and Deutsche Bank had followed with their own restrictions. The calculus was the same at every company. The productivity gains were real, but so was the risk of employees turning consumer AI tools into a data exfiltration channel nobody had authorized.

Two years later, the bans have softened into policies, and the policies have softened into training gaps. Most employees still don’t understand what happens to the text they paste into an AI chat window. This is the core of OWASP LLM02, the sensitive information disclosure risk that sits second on the OWASP Top 10 for LLM Applications.
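One common mitigation for this risk is to scan text for likely secrets before it ever reaches a third-party chat window. The sketch below is illustrative only, not a production DLP policy: the patterns and placeholder format are assumptions, and a real deployment would cover far more secret types.

```python
import re

# Illustrative patterns only; a real DLP policy would be far broader.
SECRET_PATTERNS = {
    "api_key": re.compile(r"\b(?:sk|pk|AKIA)[A-Za-z0-9_-]{16,}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "private_key": re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
}

def redact(text: str) -> tuple[str, list[str]]:
    """Replace likely secrets with placeholders before the text leaves
    the company boundary; return the labels found for audit logging."""
    findings = []
    for label, pattern in SECRET_PATTERNS.items():
        if pattern.search(text):
            findings.append(label)
            text = pattern.sub(f"[REDACTED-{label.upper()}]", text)
    return text, findings

clean, hits = redact("Ping ops@example.com, key AKIAABCDEFGHIJKLMNOP")
```

The point of returning `findings` alongside the cleaned text is that blocking alone teaches employees nothing; logging which secret types nearly leaked is what feeds back into training.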

OWASP Top 10 for LLM Applications: 10 free training exercises now live

OWASP Top 10 for LLM Applications training course - terminal showing all 10 exercises live with checkmarks

Every risk category in the OWASP Top 10 for LLM Applications now has a dedicated training exercise on RansomLeak. Ten exercises covering ten attack scenarios, from prompt injection to denial-of-wallet. All free, no account required.

The OWASP Top 10 for LLM Applications is the industry standard for categorizing AI security risks. This course turns each category into a hands-on simulation where employees experience these attacks firsthand in realistic workplace scenarios.

AI-Powered Phishing: How LLMs Help Attackers Write Better Lures

AI-powered phishing - LLM neural network generating targeted phishing emails to multiple victims

A phishing email arrives in your inbox. It references a project you’re working on, names your manager correctly, mimics the writing style of your IT department, and asks you to verify your credentials after a “suspicious login from São Paulo.” No typos. No awkward phrasing. No generic “Dear Customer” greeting. It reads exactly like a legitimate message from your company.

Two years ago, writing this email required a human attacker who spent hours researching your organization, your role, and your communication patterns. Today, an LLM produces it in seconds. Feed it a few LinkedIn profiles and a sample company email, and it generates dozens of personalized variants, each tailored to a different target, in any language.

This is why traditional phishing detection advice about spotting grammatical errors and suspicious formatting is becoming unreliable. The signals employees were trained to look for are disappearing.

OWASP Agentic AI Top 10: Security Risks When AI Acts on Its Own

OWASP Agentic AI Top 10 - interconnected AI agents with cascading failure visualization

An AI agent at a fintech company was tasked with resolving a customer’s billing dispute. It accessed the billing system, issued a refund, then escalated the ticket internally. Along the way it read the customer’s full payment history, forwarded account details to an external logging service it had been configured to use, and modified the customer’s subscription tier without approval. Every action was technically within the permissions it had been granted.

Nobody told the agent to do most of that. It chained together actions it deemed logical. Each step made sense in isolation. Together, they created a data exposure incident that took weeks to untangle.

This is the class of risk the OWASP Agentic AI Top 10 was built to address. Not the vulnerabilities of the language model itself, but the dangers that emerge when AI systems act autonomously across multiple tools, APIs, and data sources.
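One standard control for this class of risk is a human-approval gate between the agent and its high-risk tools. The sketch below is a minimal illustration, not any real agent framework's API: the tool names and the `ApprovalRequired` mechanism are assumptions made for the example.

```python
# Hypothetical tool names mirroring the fintech incident above;
# these are illustrative, not from any real agent framework.
HIGH_RISK_TOOLS = {"issue_refund", "change_subscription_tier", "export_data"}

class ApprovalRequired(Exception):
    """Raised when a tool call must wait for explicit human sign-off."""

def gate_tool_call(tool_name: str, args: dict, approved: bool = False) -> dict:
    """Let low-risk tools run freely, but block high-risk ones until a
    human has approved this specific call with these specific arguments."""
    if tool_name in HIGH_RISK_TOOLS and not approved:
        raise ApprovalRequired(f"{tool_name} needs human sign-off: {args}")
    return {"tool": tool_name, "status": "executed"}

# Reading history is allowed; issuing a refund is held for review.
gate_tool_call("read_payment_history", {"customer_id": "c-123"})
```

The design choice worth noting: the gate keys on the individual action, not the agent's overall permission set. In the incident described above, every step was individually permitted; a per-call gate is what breaks the chain.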

OWASP Top 10 for LLM Applications: What Security Teams Get Wrong

OWASP Top 10 for LLM Applications - neural network with vulnerability categories

OWASP published its first Top 10 for Large Language Model Applications in 2023. Two years later, most security teams still treat “LLM risk” as a synonym for “prompt injection.” That’s like treating the OWASP Web Top 10 as if SQL injection were the only vulnerability that mattered.

The 2025 revision of the OWASP LLM Top 10 expanded and reorganized the list based on real-world incidents. Supply chain attacks replaced insecure plugins. System prompt leakage and vector embedding weaknesses got their own categories. The list reflects what attackers are actually doing, not what conference talks speculate about.

Your employees interact with LLMs daily. Customer support agents use chatbots. Marketing teams generate content. Developers lean on AI coding assistants for everything from debugging to architecture decisions. Each interaction is a potential attack surface that most security teams haven't mapped.