Prohibited AI Practices
Identify AI deployments that cross the EU AI Act's red lines and must be stopped immediately.
What Are Prohibited AI Practices?
The EU AI Act draws clear red lines. Article 5 bans six categories of AI practices outright, and violations carry the highest penalties in the regulation: up to 35 million euros or 7% of global turnover. In this exercise, you will recognize prohibited deployments across your organization, including emotion recognition in the workplace, social scoring of customers, and untargeted scraping of facial images for biometric databases, and take the right action before each system goes live.
What You'll Learn in Prohibited AI Practices
- Identify the six categories of AI practices prohibited under Article 5 of the EU AI Act
- Recognize prohibited practices even when disguised as beneficial tools or voluntary programs
- Understand that employee consent does not override Article 5 prohibitions
- Know the correct escalation path when discovering a prohibited AI deployment
- Understand the penalty tier for prohibited practices (35M euros / 7% turnover)
Prohibited AI Practices — Training Steps
The Absolute Red Lines
Article 5 of the EU AI Act defines certain AI practices that are outright banned in the EU. No exceptions, no workarounds. Penalties for prohibited practices are the highest tier under the Act: up to 35 million euros or 7% of global annual turnover, whichever is higher. The six categories of prohibited AI are:
- Manipulative or deceptive AI techniques that distort behavior
- AI exploiting vulnerabilities of specific groups (age, disability, social situation)
- Social scoring: evaluating people based on social behavior for unrelated detrimental treatment
- Real-time remote biometric identification in public spaces (with narrow law enforcement exceptions)
- Biometric categorization by sensitive attributes (race, political opinions, sexual orientation)
- Emotion recognition in workplace and educational settings
Day 1: The Mood Tracker Email
Alice receives an email from the HR Director announcing a new pilot program. The email describes an AI tool that will monitor employees during video meetings.
Recognizing the Violation
This is emotion recognition in the workplace, which is explicitly prohibited under Article 5(1)(f) of the EU AI Act. It does not matter that it is framed as a 'wellness initiative' or that participation is described as voluntary. The prohibition is absolute. Any AI system that infers emotions from biometric data (facial expressions, voice tone, body language) in the workplace is banned - regardless of the stated purpose, the level of anonymization, or whether employees consent.
Escalating to Compliance
Alice recognizes the compliance risk immediately. She replies to the HR Director's email, flagging the issue and copying the Data Protection Officer.
Knowledge Check
Before moving on, let's make sure the concept is clear.
Day 3: The Social Scorer
Two days later, Alice receives a WhatsApp message from a colleague about a concerning new CRM feature the sales team has activated.
Spotting Social Scoring
Alice reads her colleague Jamie's message carefully. Two details stand out: customers are being scored on social media activity, and low-scoring customers are being routed to slower support queues.
Understanding Social Scoring
This is social scoring, prohibited under Article 5(1)(c). Using AI to evaluate people based on their social behavior or personal characteristics - and then using those scores to treat them detrimentally - is banned. Even though the data comes from legitimate business interactions (purchase returns, support tickets), aggregating it into a 'trustworthiness score' that determines service quality crosses the line. The prohibition applies regardless of whether the scoring targets existing customers or prospective ones.
Day 5: The Facial Scraper
Two days later, Alice discovers that the marketing team has built an internal tool demo. She opens it in her browser to review what they have been working on.
Untargeted Facial Scraping
The FaceWatch tool scrapes publicly available photos from social media profiles to build a facial recognition database. Under Article 5(1)(e), untargeted scraping of facial images from the internet or CCTV footage to build or expand facial recognition databases is explicitly prohibited.