Safe GenAI Usage

Use generative AI without leaking sensitive client data.

What Is Safe GenAI Usage?

Generative AI tools are part of daily work — translation, summarization, drafting, code review. They are also one of the fastest ways to leak confidential client data into a system your employer has no contractual control over.

In this simulation you handle an urgent Spanish-language email from a client who is chasing a quarterly report. Under deadline pressure you reach for a free consumer chatbot, paste the entire email — including the client's name, account number, approved budget, assigned consultants, and board meeting timing — and get a fast translation. Hours later your DLP system flags the transmission as a confidential data exposure.

The exercise then walks you through the four rules of safe GenAI use: tool choice (approved enterprise AI only), prompt sanitization (replace names, accounts, and amounts with placeholders before pasting), the absolute prohibition on pasting secrets like passwords or API keys, and verifying AI output before delivery. You practice the right workflow on a second client email — sanitizing the prompt, using the approved enterprise tool, and re-personalizing the response in the secure email channel. The training closes with a knowledge check on what makes a prompt safe to send and how to respond if you accidentally leak data into a consumer AI tool.
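The prompt-sanitization rule above can be sketched as a small script. This is a minimal illustration, not Alderwood's actual tooling: the placeholder names (`[CLIENT]`, `[ACCOUNT]`, `[AMOUNT]`) and the regex patterns are assumptions for demonstration, and a real sanitizer would need a maintained list of client names and far more robust patterns.

```python
import re

# Hypothetical sanitizer: swap obvious identifiers for placeholders
# before pasting text into any AI tool. Patterns are illustrative only.
REPLACEMENTS = [
    (re.compile(r"\b\d{8,12}\b"), "[ACCOUNT]"),                # bare account numbers
    (re.compile(r"[$€]\s?\d[\d,]*(?:\.\d{2})?"), "[AMOUNT]"),  # currency amounts
    (re.compile(r"\bDiego Vargas\b"), "[CLIENT]"),             # known client names
    (re.compile(r"\bCresgrove Investments\b"), "[COMPANY]"),
]

def sanitize(text: str) -> str:
    """Replace sensitive details with neutral placeholders."""
    for pattern, placeholder in REPLACEMENTS:
        text = pattern.sub(placeholder, text)
    return text

email = ("Diego Vargas of Cresgrove Investments needs the report. "
         "Account 123456789, approved budget $1,250,000.")
print(sanitize(email))
# → [CLIENT] of [COMPANY] needs the report. Account [ACCOUNT], approved budget [AMOUNT].
```

After the AI returns its answer, the placeholders are swapped back for the real details inside the secure email channel — the sensitive values never leave it.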

What You'll Learn in Safe GenAI Usage

Safe GenAI Usage — Training Steps

  1. A Busy Monday at Alderwood

    It's 9:14 AM. You're catching up on overnight client emails before the partner standup at 10:00. One subject line catches your eye — a Spanish-speaking client marked their message URGENTE.

  2. Urgent Client Email

    An email from Diego Vargas, the CFO of Cresgrove Investments, lands in your inbox. He's chasing a quarterly report you owe him before his board meeting on Friday. The message is in Spanish and contains a lot of engagement details.

  3. The Tempting Shortcut

    Your Spanish is rusty and the standup is in 45 minutes. Alderwood has an approved enterprise AI tool, but it requires SSO and you'd need to look up the link. Meanwhile a free chatbot — SmartGen AI — is one tab away in your bookmarks. Just one quick translation, you tell yourself.

  4. Pasting the Whole Email

    You type a quick translation instruction, then paste the entire body of Diego's email straight after it and hit send. Translation in seconds — what could go wrong?

  5. SmartGen AI Responds

    SmartGen AI returns a clean English translation in two seconds. Exactly what you needed. But something else is sitting at the top of the chat — a small amber banner you've never paid attention to before.

  6. The Data Retention Notice

    That amber banner has a quiet, easy-to-miss message: 'Your conversation may be used to improve SmartGen AI.' On the free tier, that line is all the data protection you get.

  7. What You Just Exposed

    Look at the prompt you actually sent. Every detail of Diego's message is now stored on a third-party server.

  8. Replying to Diego

    You copy SmartGen AI's translation, switch back to email, draft a polite reply with a Friday morning commitment, and hit send. Crisis averted — at least, that's how it feels.

  9. DLP Alert from IT Security

    Your inbox pings. The subject line makes your stomach drop: 'DATA INCIDENT: Confidential Client Data Sent to Unapproved AI Service'. Alderwood's Data Loss Prevention system saw the whole thing.

  10. Opening the Refresher Portal

    You click the portal link in the email to start the GenAI Acceptable Use refresher.