EU AI regulation

What is EU AI Act training?

EU AI Act training is the workforce duty in Article 4 of Regulation (EU) 2024/1689 that took effect on 2 February 2025 for every provider and deployer of AI systems used in the European Union. The obligation applies regardless of risk tier, and the staged penalty regime reaches €35 million or 7% of global annual turnover for the most serious breaches.


EU AI Act Article 4 makes AI literacy training mandatory for every provider and deployer of AI systems

The Artificial Intelligence Act, formally Regulation (EU) 2024/1689, is the first horizontal AI law in the world. It entered into force on 1 August 2024 and applies in stages between 2025 and 2027. The regulation governs providers, deployers, importers, and distributors of AI systems used inside the European Union. It also reaches outside the EU when the output of an AI system is used in the Union, which pulls a large slice of the global AI economy into scope.

The first set of obligations is already live. Since 2 February 2025, the prohibited AI practices listed in Article 5 (social scoring, untargeted facial-image scraping, real-time remote biometric identification in public spaces by law enforcement, manipulative or exploitative systems) cannot be placed on the EU market. The same date triggered the Article 4 AI literacy duty for every provider and deployer, regardless of whether the AI systems they build or use are high-risk, limited-risk, or minimal-risk. The literacy duty is the broadest single obligation in the regulation because it applies to staff, contractors, and any other person operating AI systems on the entity's behalf.

On 2 August 2025 the General-Purpose AI (GPAI) model rules and the governance chapter became applicable, including the EU AI Office, national competent authorities, and the Article 99 penalty regime. On 2 August 2026 the high-risk AI obligations apply for the systems listed in Annex III (biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, justice, democratic processes). Full applicability for high-risk systems embedded in regulated products under Annex I follows on 2 August 2027. Penalties are tiered: up to €35M or 7% of global annual turnover for prohibited-practice breaches, up to €15M or 3% for most other obligations including the literacy and deployer duties, and up to €7.5M or 1.5% for misleading information to authorities.

For most organisations Article 4 literacy training is the cheapest first step toward overall AI Act readiness. It is enforceable now, and the documentation it produces feeds directly into the deeper obligations that arrive in 2026 and 2027 (human oversight under Article 14, deployer duties under Article 26, fundamental rights impact assessments under Article 27, and transparency under Article 50). The rest of this page maps the regulation, the enforcement posture building inside the AI Office and the national authorities, and the role-based exercise approach that produces evidence the regulator will accept.

How the EU AI Act governs AI literacy and workforce training

1

Scope: provider, deployer, importer, or distributor

The regulation defines four roles and assigns different duties to each. A provider develops an AI system or has one developed and places it on the EU market under its own name or trademark. A deployer uses an AI system under its authority in the course of a professional activity (a bank running a credit-scoring model, a hospital using a triage tool, a recruiter screening CVs with an AI shortlister). An importer places an AI system from a third country on the EU market. A distributor makes it available without altering it. Most enterprises are deployers of many systems and providers of a few. Article 4 literacy applies to providers and deployers in equal measure.

2

Article 4: the AI literacy duty in force since 2 February 2025

Article 4 requires providers and deployers to take measures to ensure a sufficient level of AI literacy of their staff and any other person dealing with the operation and use of AI systems on their behalf. Literacy must be calibrated to the technical knowledge, experience, education, and training of the people involved, the context the systems will be used in, and the persons or groups the systems are to be used on. The EU AI Office published a Living Repository and Q&A on Article 4 in early 2025 explaining that compliance demands a programme, not a one-off seminar.

3

Risk tiers: prohibited, high-risk, limited-risk, minimal-risk

Article 5 prohibits eight categories outright. Annex III lists eight domains where systems are high-risk by default and subject to the heaviest controls in Articles 8 to 27, including risk management, data governance, transparency to deployers, human oversight, and post-market monitoring. Limited-risk systems (chatbots, emotion recognition, biometric categorisation, deepfakes) carry the Article 50 transparency duties. Minimal-risk systems carry no additional obligations beyond Article 4 literacy. The risk class drives which deeper training duties layer on top of the baseline literacy programme.

4

Articles 14 and 26: human oversight and deployer obligations

Article 14 requires providers of high-risk AI systems to design them so they can be effectively overseen by natural persons during the period the system is in use. Article 26 puts the operating side of that duty on deployers: assign oversight to natural persons who have the necessary competence, training, authority, and support. A bank running a high-risk credit model must name oversight personnel, train them on the model's capabilities and limitations, and give them real authority to override outputs. The training record for those named overseers is the single most likely document a national authority will request first.

5

Article 27: Fundamental Rights Impact Assessment for public-sector and essential-service deployers

Article 27 requires deployers that are bodies governed by public law, private entities providing public services, and deployers of certain high-risk systems (creditworthiness assessment, life and health insurance risk pricing) to perform a Fundamental Rights Impact Assessment (FRIA) before first use. The FRIA covers the intended purpose, the period and frequency of use, the categories of persons affected, the specific risks of harm, the human oversight measures, and the mitigations. The results must be notified to the national market surveillance authority. Preparing a FRIA is itself a literacy-heavy exercise that benefits from structured training across legal, risk, product, and engineering teams.
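The elements the FRIA must address can be tracked as a simple completeness checklist before notification. The sketch below is purely illustrative: the field names are assumptions, not a prescribed Article 27 template.

```python
# A minimal checklist mirroring the FRIA elements summarised above;
# names are illustrative, not an official template.
FRIA_ELEMENTS = [
    "intended_purpose_and_process",
    "period_and_frequency_of_use",
    "categories_of_persons_affected",
    "specific_risks_of_harm",
    "human_oversight_measures",
    "mitigation_measures",
]

def fria_missing(assessment: dict) -> list[str]:
    """Return the elements still absent before the FRIA can be
    notified to the market surveillance authority."""
    return [e for e in FRIA_ELEMENTS if not assessment.get(e)]
```

A blank assessment reports every element as missing; once each element is described, the list comes back empty and the notification step can proceed.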

6

Article 50: transparency for chatbots, deepfakes, and GPAI output

Article 50 applies to limited-risk systems and to GPAI output. Providers of AI systems that interact with natural persons must inform them they are interacting with an AI unless it is obvious. Deployers of emotion recognition or biometric categorisation must inform the persons exposed. Deployers that generate or manipulate image, audio, or video content constituting deepfakes must disclose the artificial nature. Providers of GPAI models that produce synthetic content must mark the output in a machine-readable format. Each duty translates into specific staff-side workflows and disclosures that workforce training has to cover.

7

Article 99 penalties, the EU AI Office, and national competent authorities

Enforcement runs through national competent authorities (each member state designates at least one notifying authority and one market surveillance authority) and the EU AI Office sitting inside DG CNECT in Brussels. The AI Office handles GPAI supervision directly and coordinates the European Artificial Intelligence Board. Article 99 penalties are tiered: up to €35M or 7% of global annual turnover for Article 5 breaches; up to €15M or 3% for non-compliance with most other obligations including literacy, transparency, and deployer duties; up to €7.5M or 1.5% for misleading information. SMEs and start-ups face the lower of the two values rather than the higher.

How regulators and the EU AI Office are interpreting Article 4 AI literacy

EU AI Office Article 4 guidance and Q&A, published early 2025

The Commission's AI Office published a Living Repository on AI Literacy and a public Article 4 Q&A in the first months of 2025. The Q&A clarifies that "sufficient level" is calibrated case by case rather than benchmarked against a single curriculum, that the duty applies to staff and contractors regardless of role seniority, and that providers and deployers should keep documentation describing the literacy measures they have put in place. The Office has been explicit that a single onboarding video does not satisfy Article 4. The Living Repository invites organisations to share programme designs, which has produced a public catalogue of acceptable patterns from financial services, public administration, and large industrial deployers.

National competent authority preparation across France, Spain, Italy, Germany, and the Netherlands

Member states are standing up the supervisory architecture in parallel with the staged applicability dates. Spain created the Agencia Española de Supervisión de Inteligencia Artificial (AESIA) in A Coruña, the first dedicated national AI agency in the EU. France has positioned the CNIL as the candidate lead authority. Italy has split the work between AgID for AI policy and the Garante for privacy-related AI supervision. Germany has signalled the BNetzA as a likely lead, with federal and Länder data protection authorities retaining a role. The Netherlands has assigned the Autoriteit Persoonsgegevens (AP) a coordinating role on AI supervision. Each authority is publishing transitional guidance and accepting voluntary submissions under the Commission's AI Pact, which is the closest current proxy for what Article 4 documentation will need to look like under formal investigation.

Industry early-mover compliance patterns inside large multinationals

A wave of large multinationals (banks, insurers, telecoms, industrial groups, professional services firms) shipped enterprise-wide AI literacy programmes through 2025 in advance of the deeper Article 26 deployer duties that arrive on 2 August 2026. Public commitments under the EU AI Pact, which the Commission published with named signatories, give a partial picture of who has committed to which measures. The common pattern is a baseline module for all staff who use any AI tool at work, a deeper module for technical roles building or operating AI systems, and a named-overseer module for the people designated under Article 14 and Article 26. Early movers are treating the literacy record as the audit-defence document and structuring their LMS evidence packs accordingly.

How RansomLeak satisfies EU AI Act training requirements

Article 4: sufficient level of AI literacy across the workforce

The dedicated EU AI Act course covers the regulation, the staged timeline, the four risk tiers, and the day-to-day practices Article 4 expects of staff and contractors operating AI systems. Each module ships as SCORM 1.2 and SCORM 2004, so completion records flow into the LMS and can be exported when the AI Office or a national authority requests evidence. The programme is calibrated by role, which is the calibration Article 4 explicitly requires.

Article 5: prohibited AI practices awareness

The Prohibited AI Practices module trains staff to recognise the eight prohibited categories (subliminal manipulation, vulnerability exploitation, social scoring, predictive policing based on profiling, untargeted facial-image scraping, emotion recognition in workplaces and education, biometric categorisation by sensitive attributes, real-time remote biometric identification in public spaces by law enforcement) and to escalate any procurement, build, or deployment proposal that crosses the line. The €35M / 7% penalty tier sits behind these, so awareness for product, procurement, and legal teams is the single highest-impact training spend.

Article 10: data governance for AI training, validation, and testing data

The AI Data Governance module covers the Article 10 expectations on relevance, representativeness, accuracy, completeness, bias examination, and the data preparation choices that influence the system's behaviour. Engineering, data, and risk teams learn the documentation that demonstrates a defensible data governance pipeline before a high-risk system goes live. The same evidence supports the technical documentation duty under Article 11.

Article 14: meaningful human oversight for high-risk AI

The Meaningful Human Oversight module trains the people who will be designated to oversee high-risk AI systems on the capabilities and limitations of those systems, on automation bias, and on when and how to override or disregard the output. Article 14 treats meaningful oversight as a design and operational duty that requires named, trained, and empowered humans, not a checkbox. The module produces the per-individual training record that Article 26 requires deployers to maintain.

Article 26: deployer obligations for high-risk AI

The High-Risk AI Deployer Obligations module walks deployer-side staff through the assignment of human oversight, the use of input data within the provider's instructions, monitoring of operation, log retention for at least six months, worker information duties, suspension of use if the system poses a risk to health, safety, or fundamental rights, and incident reporting. Provider vs. Deployer Responsibilities clarifies which duties belong to which role across the lifecycle.
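The six-month log-retention floor described above is easy to get wrong in automated purge jobs. The sketch below shows one hedged way to enforce it; the function names and the 183-day figure (a conservative reading of "six months") are assumptions for illustration, not a prescribed implementation.

```python
from datetime import date, timedelta

# Deployers of high-risk AI systems must keep automatically generated
# logs for at least six months (longer where other Union or national law
# requires it). 183 days is a conservative approximation of six months.
MIN_RETENTION_DAYS = 183

def past_retention_floor(log_created: date, today: date) -> bool:
    """True once a log entry has aged past the minimum retention window."""
    return today - log_created >= timedelta(days=MIN_RETENTION_DAYS)

def purge_candidates(log_dates: list[date], today: date) -> list[date]:
    """Logs old enough that the retention floor no longer blocks deletion."""
    return [d for d in log_dates if past_retention_floor(d, today)]
```

The point of the two-step design is that a purge job never decides on its own: anything inside the window is simply excluded from the candidate list, regardless of storage pressure.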

Article 27: Fundamental Rights Impact Assessment workflow

The Fundamental Rights Impact Assessment module covers the structured assessment that public-sector deployers and operators of certain high-risk systems must complete before first use. Legal, risk, product, and engineering teams learn the six elements the assessment must address, the notification duty to the national market surveillance authority, and the link with the Article 35 GDPR Data Protection Impact Assessment where personal data is involved.

Article 50: transparency, deepfake disclosure, and synthetic content

The AI Transparency and Disclosure module covers the user-facing duties: disclose to natural persons that they are interacting with an AI, label deepfake image, audio, and video output as artificial, and mark GPAI synthetic content in a machine-readable format. Communications, marketing, product, and engineering teams learn the workflow changes that satisfy Article 50 and the carve-outs (artistic, satirical, fictional works, criminal-offence detection) that apply.
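Article 50 requires that GPAI synthetic output be marked in a machine-readable format, but it does not prescribe one; embedded provenance standards such as C2PA content credentials are one emerging route. The JSON sidecar below is a purely illustrative stand-in, assuming nothing about any recognised disclosure format.

```python
import json
from datetime import datetime, timezone

# Illustrative only: emits a machine-readable "AI-generated" label for a
# piece of synthetic content. The schema is an assumption, not a
# recognised Article 50 marking format.
def synthetic_content_label(model_name: str, content_sha256: str) -> str:
    label = {
        "ai_generated": True,
        "generator": model_name,
        "content_hash": content_sha256,  # ties the label to one artefact
        "labelled_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(label, sort_keys=True)
```

Whatever format an organisation settles on, the workflow point is the same: the label is produced at generation time and travels with the artefact, rather than being added manually downstream.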

Article 99: penalty regime awareness for executives and risk owners

The EU AI Act Penalties and Enforcement module gives executives, legal, and risk owners a clear view of the three-tier penalty stack, the role of the EU AI Office and national competent authorities, the SME and start-up cap, and the factors authorities weigh when calibrating fines (gravity, duration, intent, cooperation, prior infringements). It produces the executive briefing record most boards now ask for ahead of internal audit and risk committee reviews.

How RansomLeak builds an audit-ready EU AI Act program

RansomLeak ships a dedicated EU AI Act Compliance course built around 18 interactive 3D exercises that cover the full regulation surface: the Article 4 baseline (AI Literacy Essentials, Using AI Tools Responsibly at Work, Safe GenAI Usage, Sensitive Data Exposure Through AI), the risk classification work (AI Risk Classification, Provider vs. Deployer Responsibilities, Prohibited AI Practices, High-Risk AI Deployer Obligations), the data and oversight duties (AI Data Governance, AI Bias and Discrimination, Meaningful Human Oversight, AI and Data Protection), the transparency stack (AI Transparency and Disclosure, General-Purpose AI Model Obligations), and the assurance layer (AI Governance in Your Organization, Fundamental Rights Impact Assessment, AI Incident Reporting, EU AI Act Penalties and Enforcement). Every exercise drops the learner into a realistic scenario, forces a decision under realistic pressure, and ends with feedback that names the article, the obligation, and the practical next step.

Programs are scoped by role rather than blasted to all-staff at one depth, which is the calibration Article 4 explicitly requires. Developers and ML engineers get the data governance, bias, and provider-side modules. Deployer-side operators (the analyst running the credit model, the recruiter using the shortlister, the radiologist using the triage tool) get the deployer obligations, human oversight, and incident reporting modules. Legal, risk, and compliance teams take the Fundamental Rights Impact Assessment, Penalties and Enforcement, and Governance modules. Executives and the board take the briefing track on penalty exposure and named-authority enforcement posture.

Every completion produces an audit-ready record: the workforce member, the role, the date, the module version, the assessment result, and the SCORM completion status. The evidence pack is structured so a national competent authority or the EU AI Office can be sent the per-individual record for the named overseers and the per-cohort summary for the wider workforce. The course refreshes as the regulation moves: when the AI Office publishes new guidance, when a national authority releases supervisory expectations, and when the staged applicability dates pull new obligations into force on 2 August 2026 and 2 August 2027.
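The completion-record fields listed above map naturally onto a small data structure. The sketch below is an assumption for illustration, not the actual RansomLeak export schema; the field names mirror the page's own list.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CompletionRecord:
    # Fields mirror the audit-ready record described above (illustrative names).
    workforce_member: str
    role: str
    completed_on: str        # ISO date
    module_version: str
    assessment_result: int   # percentage score
    scorm_status: str        # e.g. "completed" / "passed" (SCORM lesson status)

def cohort_summary(records: list[CompletionRecord]) -> dict:
    """Per-cohort view sent alongside the per-individual records
    kept for named overseers."""
    done = sum(1 for r in records if r.scorm_status in ("completed", "passed"))
    rate = round(done / len(records), 2) if records else 0.0
    return {"headcount": len(records), "completed": done, "completion_rate": rate}
```

Keeping the per-individual record immutable (`frozen=True`) and deriving the cohort summary from it means the two views an authority might request can never drift apart.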

What is EU AI Act training and when did it become mandatory?

EU AI Act training is the workforce duty in Article 4 of Regulation (EU) 2024/1689 that requires every provider and deployer of AI systems to ensure a sufficient level of AI literacy among staff and contractors operating those systems on their behalf. The obligation took effect on 2 February 2025 and applies regardless of whether the AI systems are high-risk, limited-risk, or minimal-risk.

The wider regulation entered into force on 1 August 2024 and applies in stages. Prohibited AI practices (Article 5) and the Article 4 literacy duty became applicable on 2 February 2025, GPAI rules and the penalty regime on 2 August 2025, high-risk obligations on 2 August 2026, and full applicability for Annex I products on 2 August 2027. Penalties under Article 99 reach €35 million or 7% of global annual turnover for prohibited-practice breaches.

The EU AI Office (inside DG CNECT) and national competent authorities supervise compliance. The AI Office published Article 4 guidance and a Living Repository in early 2025 explaining that compliance demands a documented programme calibrated to role, not a single video. Training is the cheapest first step toward overall AI Act readiness because the records it produces also feed Articles 14, 26, 27, and 50.

Recommended exercises

Scenario-based simulations that satisfy this framework.

AI Literacy Essentials

The baseline Article 4 module: what AI is, how the systems used at work behave, where they can fail, and the day-to-day practices that satisfy the literacy duty for every staff member and contractor.

Try the exercise

Prohibited AI Practices

Trains staff to recognise the eight Article 5 prohibitions and to escalate any procurement, build, or deployment proposal that crosses the line, ahead of the €35M / 7% turnover penalty tier.

Try the exercise

AI Risk Classification

Walks legal, risk, product, and engineering teams through prohibited, high-risk (Annex III), limited-risk, and minimal-risk classes, and the obligations each tier triggers under the EU AI Act.

Try the exercise

Provider vs. Deployer Responsibilities

Clarifies which duties belong to providers under Articles 9 to 22 and which belong to deployers under Article 26, the most common source of contractual confusion in AI vendor relationships.

Try the exercise

Meaningful Human Oversight

The Article 14 module: trains named overseers of high-risk AI systems on capabilities, limitations, automation bias, and the conditions under which to override or disregard the output.

Try the exercise

High-Risk AI Deployer Obligations

Walks deployer-side staff through Article 26 (oversight assignment, input data within instructions, monitoring, log retention, worker information, suspension on risk, incident reporting) ahead of the 2 August 2026 effective date.

Try the exercise

Fundamental Rights Impact Assessment

Covers the Article 27 structured assessment that public-sector deployers and operators of certain high-risk systems must complete and notify to the national market surveillance authority before first use.

Try the exercise

AI Transparency and Disclosure

The Article 50 module: chatbot disclosure, emotion recognition and biometric categorisation notice, deepfake labelling, and the machine-readable marking duty for GPAI synthetic output.

Try the exercise

Frequently Asked Questions

What GRC and security leaders ask about this framework.

What is EU AI Act Article 4 AI literacy?

Article 4 of Regulation (EU) 2024/1689 requires providers and deployers of AI systems to take measures to ensure, to their best extent, a sufficient level of AI literacy among their staff and any other person dealing with the operation and use of AI systems on their behalf. The literacy must be calibrated to the technical knowledge, experience, education, and training of the people involved, the context the systems are used in, and the persons affected.

The duty took effect on 2 February 2025 and applies to every provider and deployer regardless of risk tier. The EU AI Office published a Living Repository and a public Q&A in early 2025 clarifying that a single onboarding video does not satisfy Article 4 and that organisations should keep documentation of the literacy measures they put in place.

Does the EU AI Act require AI training for all employees?

Article 4 applies to staff and any other person dealing with the operation and use of AI systems on the entity's behalf. In practice that means anyone who uses an AI tool as part of their job, including office productivity assistants, customer-service copilots, code assistants, marketing generators, and any embedded AI in line-of-business software. Article 4 does not require identical depth for everyone; it requires calibration to role.

For most enterprises the practical answer is a baseline literacy module for all staff who touch any AI tool at work, with deeper role-specific modules for developers, deployer-side operators, named overseers under Articles 14 and 26, legal and risk teams, and executives. Article 26 separately requires deployers of high-risk AI systems to ensure that natural persons assigned to oversight have the necessary competence, training, authority, and support.

When does AI literacy training become mandatory under the EU AI Act?

The Article 4 AI literacy duty became applicable on 2 February 2025, six months after the regulation entered into force on 1 August 2024. The same date triggered the prohibition of the eight banned AI practices in Article 5. Both duties are live and enforceable now.

The other applicability milestones are 2 August 2025 (GPAI model rules, governance chapter, penalty regime, national competent authorities), 2 August 2026 (high-risk AI obligations for the Annex III domains and most remaining provisions), and 2 August 2027 (full applicability for high-risk AI embedded in regulated products under Annex I).

What counts as a sufficient level of AI literacy?

The regulation does not define "sufficient" against a single benchmark. The EU AI Office Q&A on Article 4 explains that the assessment is contextual: the literacy must match the technical knowledge, experience, education, and training of the people involved, the way the AI systems are used, and the people affected by those systems. A claims handler using an AI triage tool needs different literacy from the data scientist who built it.

In practice authorities will look for a documented programme rather than a single training event, calibrated by role, refreshed when the regulation or the AI systems change, and supported by evidence that named individuals have completed the modules relevant to their job. The Commission's AI Pact and the Living Repository on AI Literacy give a public benchmark of programme designs that the AI Office considers acceptable.

Who needs Article 14 human-oversight training under the EU AI Act?

Article 14 applies to providers of high-risk AI systems and requires that those systems be designed and developed so they can be effectively overseen by natural persons during the period in which the AI system is in use. Article 26 puts the operating side of that duty on deployers of high-risk AI systems: assign oversight to natural persons who have the necessary competence, training, authority, and support to do the job.

Practically, every named overseer of a high-risk AI system needs targeted training on the system's capabilities and limitations, on automation bias, on the conditions under which to override or disregard the output, and on the escalation path. The training record for those individuals is the document a national competent authority is most likely to ask for first during an investigation.

What are the EU AI Act penalties?

Article 99 sets a three-tier penalty stack. Breaches of the prohibited AI practices in Article 5 carry the highest tier: up to €35 million or 7% of global annual turnover for the previous financial year, whichever is higher. Non-compliance with most other obligations (data governance, transparency, deployer duties, GPAI provider duties, Article 4 literacy where it applies) carries up to €15 million or 3% of global annual turnover.

Supplying incorrect, incomplete, or misleading information to notified bodies or national competent authorities carries up to €7.5 million or 1.5% of global annual turnover. SMEs and start-ups face the lower of the two values rather than the higher. National authorities and the EU AI Office consider the gravity, duration, intentional or negligent character, cooperation, and prior infringements when calibrating fines.

Does the EU AI Act apply to US companies?

Yes, the regulation has extraterritorial reach. It applies to providers placing AI systems on the EU market or putting them into service in the Union, regardless of where the provider is established. It applies to deployers established in the EU. It also applies to providers and deployers established outside the EU when the output produced by the AI system is used in the Union.

That last hook captures a large slice of the global AI economy: a US-based SaaS vendor whose AI feature is used by a French customer is in scope, and a US-based deployer whose AI output is delivered to or relied on inside the EU is in scope. Non-EU providers must designate an authorised representative in the Union before placing high-risk AI systems on the market.

What is the timeline for full EU AI Act applicability?

The regulation entered into force on 1 August 2024 and applies in stages. On 2 February 2025 the prohibited practices in Article 5 and the Article 4 AI literacy duty became applicable. On 2 August 2025 the General-Purpose AI model rules, the governance chapter (including the EU AI Office and national competent authorities), and the Article 99 penalty regime became applicable.

On 2 August 2026 the bulk of the high-risk AI obligations and most remaining provisions become applicable. On 2 August 2027 the regulation is fully applicable to high-risk AI systems embedded in products covered by the Union harmonisation legislation listed in Annex I. Organisations should sequence their compliance programme against these dates, with Article 4 literacy and prohibited-practice screening as the immediate priorities.

Sources & further reading

Primary sources cited above and adjacent guidance.

Make This Framework Audit-Ready

Book a 30-minute walkthrough. We will scope the exercise sequence, the assignment logic, and the evidence export your auditor expects.