Safe GenAI Usage
Use generative AI without leaking sensitive client data.
What Is Safe GenAI Usage?
Generative AI tools are part of daily work — translation, summarization, drafting, code review. They are also one of the fastest ways to leak confidential client data into a system your employer has no contractual control over. In this simulation you handle an urgent Spanish-language email from a client who is chasing a quarterly report. Under deadline pressure you reach for a free consumer chatbot, paste the entire email — including the client's name, account number, approved budget, assigned consultants, and board meeting timing — and get a fast translation. Hours later your DLP system flags the transmission as a confidential data exposure. The exercise then walks you through the four rules of safe GenAI use: tool choice (approved enterprise AI only), prompt sanitization (replace names, accounts, and amounts with placeholders before pasting), the absolute prohibition on pasting secrets like passwords or API keys, and verifying AI output before delivery. You practice the right workflow on a second client email — sanitizing the prompt, using the approved enterprise tool, and re-personalizing the response in the secure email channel. The training closes with a knowledge check on what makes a prompt safe to send and how to respond if you accidentally leak data into a consumer AI tool.
What You'll Learn in Safe GenAI Usage
- Distinguish between consumer AI chatbots (no Data Processing Agreement, prompts may be retained for training) and approved enterprise AI tools (audit-logged, contractually protected, DPA-covered)
- Sanitize prompts before sending — replace client names, account numbers, deal values, and other identifying details with neutral placeholders so the AI receives only what it needs to do the task
- Recognize the categories of data that must never enter any AI tool, including approved ones — passwords, API keys, access tokens, and customer credentials
- Re-personalize AI output inside a secure channel rather than asking the AI to handle sensitive details directly, keeping the audit log free of unnecessary PII
- Respond correctly to an accidental data leak into a consumer AI tool — preserve the conversation as evidence, report to IT Security immediately, and avoid the false comfort of deleting the chat
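The sanitize-then-re-personalize workflow described above can be sketched as a simple substitution pass in code. This is an illustrative sketch only — the entity map, placeholder names, and the `sanitize_prompt`/`repersonalize` helpers are hypothetical examples invented for this explanation, not part of any real tool, and the account number and budget figure below are made up.

```python
# Hypothetical sketch of prompt sanitization before sending text to an AI tool.
# Sensitive values are swapped for neutral placeholders on the way out, and the
# placeholders are swapped back inside the secure email channel on the way in.
SENSITIVE_VALUES = {
    "Diego Vargas": "[CLIENT_NAME]",
    "Cresgrove Investments": "[CLIENT_COMPANY]",
    "ACCT-48291": "[ACCOUNT_NUMBER]",   # made-up account number
    "$250,000": "[APPROVED_BUDGET]",    # made-up deal value
}

def sanitize_prompt(prompt: str) -> str:
    """Replace each known sensitive value with its placeholder before sending."""
    for value, placeholder in SENSITIVE_VALUES.items():
        prompt = prompt.replace(value, placeholder)
    return prompt

def repersonalize(text: str) -> str:
    """Reverse the substitution in the secure channel before delivery."""
    for value, placeholder in SENSITIVE_VALUES.items():
        text = text.replace(placeholder, value)
    return text

raw = "Translate: Diego Vargas of Cresgrove Investments approved $250,000."
safe = sanitize_prompt(raw)
# `safe` now contains only placeholders; the AI tool never sees the real values.
```

In practice a real sanitizer would need a maintained entity list or DLP-backed pattern matching rather than a hard-coded dictionary; the point of the sketch is only that the AI receives placeholders while the real values stay inside the secure channel.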
Safe GenAI Usage — Training Steps
-
A Busy Monday at Alderwood
It's 9:14 AM. You're catching up on overnight client emails before the partner standup at 10:00. One subject line catches your eye — a Spanish-speaking client marked their message URGENTE.
-
Urgent Client Email
An email from Diego Vargas, the CFO of Cresgrove Investments, lands in your inbox. He's chasing a quarterly report you owe him before his board meeting on Friday. The message is in Spanish and is packed with sensitive engagement details — names, figures, and dates.
-
The Tempting Shortcut
Your Spanish is rusty and the standup is in 45 minutes. Alderwood has an approved enterprise AI tool, but it requires SSO and you'd need to look up the link. Meanwhile a free chatbot — SmartGen AI — is one tab away in your bookmarks. Just one quick translation, you tell yourself.
-
Pasting the Whole Email
You type a quick translation instruction, then paste the entire body of Diego's email straight after it and hit send. Translation in seconds — what could go wrong?
-
SmartGen AI Responds
SmartGen AI returns a clean English translation in two seconds. Exactly what you needed. But something else is sitting at the top of the chat — a small amber banner you've never paid attention to before.
-
The Data Retention Notice
That amber banner has a quiet, easy-to-miss message: 'Your conversation may be used to improve SmartGen AI.' On the free tier, that line is all the data protection you get.
-
What You Just Exposed
Look at the prompt you actually sent. Every detail of Diego's message is now stored on a third-party server.
-
Replying to Diego
You copy SmartGen AI's translation, switch back to email, draft a polite reply with a Friday morning commitment, and hit send. Crisis averted — at least, that's how it feels.
-
DLP Alert from IT Security
Your inbox pings. The subject line makes your stomach drop: 'DATA INCIDENT: Confidential Client Data Sent to Unapproved AI Service'. Alderwood's Data Loss Prevention system saw the whole thing.
-
Opening the Refresher Portal
You click the portal link in the email to start the GenAI Acceptable Use refresher.