High-Risk AI: Deployer Obligations
Review and ensure compliance across all seven EU AI Act requirement areas before deploying a high-risk AI system.
What Is High-Risk AI: Deployer Obligations?
Walk through a real compliance review for a high-risk AI system being deployed for employee performance reviews. Learn the seven requirement areas that must be satisfied before deployment and practice identifying critical gaps that block launch.
What You'll Learn in High-Risk AI: Deployer Obligations
- Identify the seven requirement areas for high-risk AI systems under the EU AI Act
- Conduct a compliance assessment for a high-risk AI deployment
- Recognize that human oversight cannot be replaced by accuracy metrics
- Understand that a Fundamental Rights Impact Assessment is mandatory before first use
- Respond correctly when compliance gaps are discovered before deployment
High-Risk AI: Deployer Obligations — Training Steps
- High-Risk AI Requirements
High-risk AI systems have the strictest obligations under the EU AI Act. Before deploying one, organizations must satisfy requirements across seven areas:
- Risk management - Identify and mitigate risks throughout the system lifecycle
- Data governance - Ensure training data is relevant, representative, and free from bias
- Technical documentation - Maintain detailed documentation of the system's design and function
- Record-keeping - Enable audit trails and automatic logging
- Transparency - Provide clear information to deployers and affected persons
- Human oversight - Ensure humans can effectively supervise the system
- Accuracy, robustness, and cybersecurity - Meet performance and security standards
This exercise walks through the process of reviewing and ensuring compliance for a real deployment.
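To make the all-seven-areas rule concrete, here is a minimal sketch of a deployment gate in Python. It is illustrative only, not a legal tool; the names `REQUIREMENT_AREAS` and `is_deployment_ready` are hypothetical, not part of any regulation or product.

```python
# Illustrative sketch: model the seven EU AI Act requirement areas as a
# checklist and gate deployment on every one of them being satisfied.
# REQUIREMENT_AREAS and is_deployment_ready are hypothetical names.

REQUIREMENT_AREAS = [
    "risk_management",
    "data_governance",
    "technical_documentation",
    "record_keeping",
    "transparency",
    "human_oversight",
    "accuracy_robustness_cybersecurity",
]

def is_deployment_ready(status: dict) -> bool:
    """Return True only if every requirement area is marked satisfied."""
    return all(status.get(area) is True for area in REQUIREMENT_AREAS)

# Example: a system with two unsatisfied areas cannot be deployed.
performance_ai = {area: True for area in REQUIREMENT_AREAS}
performance_ai["data_governance"] = False
performance_ai["human_oversight"] = False
print(is_deployment_ready(performance_ai))  # False
```

The point the sketch encodes: the areas are conjunctive. Six satisfied and one pending still means the system is not ready.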
- Email from the CISO
An email arrives from James Morton, Pinnacle Group's CISO. The AI vendor has delivered the employee performance review system, and Alice must complete the compliance review before it goes live. The email links directly to the AI Systems Registry on the governance portal.
- AI Systems Registry
The AI Systems Registry loads with three entries: PerformanceAI (the new high-risk system flagged for review), plus two already-compliant systems (EmailGuard and ChatAssist). The registry tracks risk tier, compliance status, and audit history for every AI system at Pinnacle Group.
- Compliance Assessment
The PerformanceAI system has clear gaps in its compliance status: the Fundamental Rights Impact Assessment has not been started, no human reviewer has been assigned, and data governance is still pending review. Alice clicks Run Compliance Assessment inside the PerformanceAI entry to open the detailed checklist for this system.
- Flag the Gaps
Alice works through each requirement section and flags only the items that are NOT satisfied. Data Governance has an unaddressed gap: training data representativeness across employee demographics has never been assessed. Human Oversight has another: no qualified human reviewer has been assigned with override authority. The remaining sections are in good shape and need nothing flagged.
- Human Oversight Requirement
During the review, Alice encounters a note from the vendor: 'Human oversight is not needed because the algorithm has 99.2% accuracy in performance scoring.' This claim needs to be evaluated against what the EU AI Act actually requires.
- Critical Gaps Identified
The compliance review has surfaced three critical gaps that must be resolved before PerformanceAI can go live:
- No Fundamental Rights Impact Assessment - The FRIA has not been conducted. This is mandatory before any high-risk AI deployment under Article 27.
- No designated human reviewer - No person has been assigned with the authority and training to override the AI's performance scores. Article 14 requires effective human oversight.
- Data representativeness not assessed - Training data has not been evaluated for representativeness across employee demographics, creating potential bias risk under Article 10.
Each of these gaps represents a legal compliance failure. The system cannot be deployed until all three are resolved.
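The three gaps above can be sketched as a simple go/no-go check in Python. This is a hedged illustration of the review logic, not a compliance product; the gap labels and the `open_gaps` helper are made up for this example.

```python
# Illustrative sketch: record the three PerformanceAI gaps from the review
# and derive a launch decision. Gap labels and open_gaps are hypothetical.

gaps = {
    "FRIA conducted (Art. 27)": False,
    "Human reviewer with override authority assigned (Art. 14)": False,
    "Training data representativeness assessed (Art. 10)": False,
}

def open_gaps(status: dict) -> list:
    """List every unresolved item; deployment stays blocked while any remain."""
    return [name for name, resolved in status.items() if not resolved]

blocking = open_gaps(gaps)
decision = "POSTPONE LAUNCH" if blocking else "CLEAR TO DEPLOY"
print(decision)  # POSTPONE LAUNCH
```

Resolving any two gaps still leaves `blocking` non-empty, so the decision stays "POSTPONE LAUNCH" until all three items flip to resolved.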
- Remediation Email
Alice drafts an email back to the CISO detailing the compliance gaps and recommending that the launch be postponed until all issues are resolved.
- Compliance Before Deployment
High-risk AI deployment is not just 'install and configure.' It is a structured compliance process with ongoing obligations. The seven requirement areas are not checkboxes to rush through - they protect people affected by AI decisions. Key takeaways:
- All seven requirement areas must be independently satisfied before deployment: risk management, data governance, technical documentation, record-keeping, transparency, human oversight, and accuracy/robustness/cybersecurity.
- A Fundamental Rights Impact Assessment is mandatory before first use of any high-risk AI system. No FRIA means no deployment.
- Human oversight cannot be replaced by accuracy. Even a 99.9% accurate system requires a designated human with override authority.
- If your compliance review finds gaps, the correct response is to delay deployment until they are resolved - not to deploy and fix later.