Reg. (EU) 2024/1689 OJ L, 12.7.2024

Ariadne
EU AI Act
Compliance Reference

Threading the labyrinth of Regulation (EU) 2024/1689 — for Providers & Deployers

A complete interactive reference tool for organisations building, placing, or putting AI systems into service under Regulation (EU) 2024/1689 — covering all risk tiers, applicability dates, and role-specific obligations.

Legal Basis: Art. 114 TFEU
Entry into Force: 1 August 2024
Application: Phased 2025–2027
Recitals: 180
Articles: 113
Annexes: XIII
🏗️
Provider
Develops, trains, or places an AI system on the market (Art. 3(3)). Responsible for design, documentation, conformity assessment, CE marking.
🔌
Deployer
Uses an AI system under its own authority in a professional context (Art. 3(4)). Responsible for human oversight, input-data relevance, FRIA where applicable, log retention, and incident reporting.
Both / Unsure
See all obligations. Relevant when an organisation both develops AI internally and deploys it — or when role characterisation is unclear.

Risk Tiers at a Glance

Click any tier to jump to its obligations — or use the Obligations tab for the full breakdown.

Prohibited Practices Art. 5

Eight categories of AI practices banned outright; only the narrow carve-outs written into Art. 5 itself apply. Binding on both providers and deployers from 2 February 2025.

High Risk Arts. 6, 9–21, 26

AI systems in Annex III sectors or safety components of Annex I products. Heavy obligations for providers (conformity) and deployers (oversight, FRIA, logs).

Limited Risk / Transparency Art. 50

AI systems interacting with persons (chatbots), generating synthetic content, or performing emotion recognition or biometric categorisation. Disclosure obligations apply.

Minimal Risk + GPAI Arts. 4, 51–56

All AI systems must comply with Art. 4 AI literacy. GPAI models (LLMs, foundation models) carry their own obligations for providers. Deployers using GPAI tools see limited additional duties.


PROVIDER — Art. 3(3)
‘provider’ means a natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model or that has an AI system or a general-purpose AI model developed and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge;
DEPLOYER — Art. 3(4)
‘deployer’ means a natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity;
AI SYSTEM — Art. 3(1)
‘AI system’ means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments;
GPAI MODEL — Art. 3(63)
‘general-purpose AI model’ means an AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are placed on the market;

Obligations by Risk Tier

All obligations under Reg. (EU) 2024/1689, organised by risk tier and role. Click any card to expand. Click article numbers to view text.

Baseline — Applies to All Always
AI Literacy — training and awareness for all staff
Both

Providers and deployers shall take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education and training, and the context in which the systems are to be used.

  • Map all staff roles that interact with, operate, or oversee AI systems
  • Assess existing AI literacy baseline per role
  • Design and deliver role-appropriate training programmes
  • Document training measures and review annually
  • No prescribed format — but documented evidence strongly advised for audit readiness
Prohibited Practices Art. 5 · Banned outright
8 prohibited AI practices — overview
Both

Art. 5 establishes an absolute prohibition on placing on the market, putting into service, or using the following AI systems. No exceptions apply once the prohibition is triggered. Effective from 2 February 2025.

  • 5(1)(a) — Subliminal techniques beyond a person's consciousness or purposefully manipulative / deceptive techniques that distort behaviour causing harm
  • 5(1)(b) — Exploitation of vulnerabilities due to age, disability, or socio-economic situation causing harm
  • 5(1)(c) — Social scoring by public or private actors based on social behaviour or personal characteristics, where the score leads to detrimental treatment in unrelated contexts
  • 5(1)(d) — Risk assessment of natural persons to predict criminal offending based solely on profiling or personality traits, without objective verifiable facts already linked to criminal activity
  • 5(1)(e) — Creating or expanding facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage
  • 5(1)(f) — Inferring emotions of natural persons in the workplace or educational institutions (except for medical or safety purposes)
  • 5(1)(g) — Biometric categorisation systems that categorise persons based on biometric data to deduce race, political opinions, trade-union membership, religious/philosophical beliefs, sex life or sexual orientation
  • 5(1)(h) — Real-time remote biometric identification in publicly accessible spaces for law enforcement, except in the narrow circumstances set out in Art. 5(1)(h)(i)–(iii)
High-Risk AI Systems Arts. 6, 9–21, 26, 43, 47–49 · Applies Aug 2026 / Aug 2027
Risk management system — continuous lifecycle process
Provider

Providers must establish, implement, document, and maintain a risk management system for each high-risk AI system: a continuous, iterative process planned and run throughout the system's entire lifecycle, requiring regular systematic review and updating (Art. 9(1)–(2)).

  • Identify and analyse known and reasonably foreseeable risks to health, safety, or fundamental rights (Art. 9(2)(a))
  • Estimate and evaluate risks that may emerge when the system is used as intended or under reasonably foreseeable misuse (Art. 9(2)(b))
  • Adopt targeted risk management measures (Art. 9(2)(d), 9(5)): eliminate or reduce risks by design; implement adequate mitigation and control measures; provide information and, where appropriate, training to deployers
  • Test the system to identify the most appropriate risk management measures (Art. 9(6)–(8)), against pre-defined metrics and probabilistic thresholds, and in any event before placing on the market
  • Give consideration to likely adverse impacts on persons under 18 and other vulnerable groups (Art. 9(9))
Training data governance — quality, relevance, bias
Provider

High-risk AI systems that use data techniques must be trained/tested/validated on data sets of appropriate quality for their intended purpose. Providers bear the core data governance obligation.

  • Data sets must be relevant, sufficiently representative, and, to the best extent possible, free of errors and complete in view of the intended purpose
  • Apply appropriate data governance practices (design choices, data collection, labelling, cleaning, enrichment)
  • Examine for possible biases — especially where outputs relate to persons
  • Special categories of personal data may be processed only under Art. 10(5) conditions (bias detection, with GDPR-compliant safeguards)
Technical documentation (Annex IV) — draw up before placing on market
Provider

Providers must draw up technical documentation in accordance with Annex IV before placing the high-risk AI system on the market or putting it into service. Documentation must be kept up to date.

  • General description: intended purpose, version, hardware requirements
  • Detailed description of elements and development process (Annex IV §2)
  • Performance metrics, known limitations, risk management measures
  • Design specifications: data requirements, architectural choices, training methodology
  • Retain documentation for minimum 10 years after placement on market (Art. 18)
Automatic logging — system-level record-keeping capability
Provider

Providers must ensure that high-risk AI systems have automatic logging capabilities that allow for ex post monitoring. This is a design obligation: logging must be built-in, not retrofitted.

  • Log files must record events throughout the system's operational lifetime
  • Log events relevant for identifying risk situations (Art. 79(1)) or substantial modifications, facilitating post-market monitoring (Art. 72), and monitoring of operation under Art. 26(5) (see Art. 12(2))
  • Logs must be sufficiently granular to enable traceability and post-hoc review
  • Deployers are responsible for retaining the logs that the provider's logging capability generates (Art. 26(6): ≥6 months for high-risk)
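To make the retention floor concrete, here is a minimal TypeScript sketch; the function name and optional parameter are this document's own illustration, not terms from the Act.

```ts
// Art. 26(6) retention floor: at least six months, unless applicable
// Union or national law sets a different period (sketch, hypothetical helper).
function minimumRetentionEnd(logCreated: Date, monthsRequiredByOtherLaw = 0): Date {
  const end = new Date(logCreated);
  end.setMonth(end.getMonth() + Math.max(6, monthsRequiredByOtherLaw));
  return end;
}

// Example: logs generated on 1 March 2027 must be kept at least until 1 September 2027.
console.log(minimumRetentionEnd(new Date("2027-03-01")).toISOString());
```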
Transparency — instructions for use to deployers
Provider

Providers must ensure high-risk AI systems are designed and developed to be sufficiently transparent, enabling deployers to interpret the system's output and use it appropriately. This includes a written instructions-for-use document.

  • Provide instructions for use including identity/contact of provider, system capabilities and limitations
  • Include performance on specific groups, known risks, human oversight measures
  • Specify the intended purpose, foreseeable misuse, and changes that may affect performance
  • Include hardware/software requirements and EU database registration number (Art. 49)
Human oversight — design for meaningful human control
Provider

Providers must design and build high-risk AI systems so that they can be effectively overseen by humans. The system itself must facilitate, not hinder, oversight. This is a design obligation distinct from the deployer's operational oversight duty.

  • Build in ability for persons to fully understand capacities and limitations
  • Enable monitoring for anomalies, dysfunctions, unexpected performance
  • Build in ability to interrupt, halt, or override the system (stop button / fallback)
  • Enable the overseeing person to decide, in any particular situation, not to use the system or to disregard, override, or reverse its output (Art. 14(4)(d))
Accuracy, robustness, cybersecurity — technical performance standards
Provider

High-risk AI systems must achieve appropriate levels of accuracy, robustness, and cybersecurity throughout their lifecycle, with particular attention to the risks of adversarial attacks, data poisoning, model evasion, and confidentiality breaches.

  • State accuracy metrics and levels in technical documentation and instructions for use
  • Resilience to errors, faults, inconsistencies — including fallback plans
  • Where continuous learning: address risks of biased or adversarial feedback influencing outputs
  • Meet cybersecurity standards proportionate to risks (including ENISA guidance where available)
Quality management system — written policies, procedures, resources
Provider

Providers must put in place a quality management system (QMS) before placing a high-risk AI system on the market. The QMS must be documented and cover the entire product lifecycle.

  • Regulatory compliance strategy including conformity assessment procedures
  • Data management: acquisition, collection, analysis, labelling, storage, filtering, mining, aggregation, retention
  • Testing, validation, and pre-market evaluation procedures
  • Post-market monitoring and serious incident reporting plan (Art. 72/73)
  • Microenterprises may comply with QMS elements in a simplified manner (Art. 17(2)), provided the substance is equivalent
Conformity assessment — third-party or self-assessment
Provider

Before placing on the market, providers must carry out a conformity assessment. The procedure depends on the category. For Annex III systems (biometrics, employment, education, etc.): internal control per Annex VI is the standard route for points 2–8; biometric systems (point 1) may choose between Annex VI (internal) and Annex VII (notified body). For Annex I systems: follow the applicable sectoral legislation.

  • Annex I systems (safety components of regulated products): AI Act requirements are assessed within the conformity assessment procedure of the applicable sectoral legislation (Art. 43(3))
  • Annex III systems, points 2–8 (critical infrastructure, employment, education, law enforcement, etc.): self-assessment per Annex VI only; no notified body involvement
  • Annex III point 1 (biometrics): choice of Annex VI (internal control) or Annex VII (notified body); Annex VII is mandatory if harmonised standards do not exist or are not fully applied
  • Issue EU Declaration of Conformity (Art. 47)
  • Affix CE marking (Art. 48) before placing on EU market
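The routing above can be summarised as a small decision function. This is a hedged sketch of the Art. 43 logic as described in this card; all identifiers are illustrative, and edge cases (e.g. substantial modifications, Art. 43(4)) are omitted.

```ts
// Sketch of the Art. 43 conformity-assessment routing described above.
// Identifiers are illustrative, not from the Regulation.
type ConformityRoute =
  | "sectoral"   // Annex I: assessed within the product legislation's own procedure (Art. 43(3))
  | "annex-vi"   // internal control (self-assessment)
  | "annex-vii"; // notified-body assessment

function selectRoute(sys: {
  annexI: boolean;                          // safety component of an Annex I product
  annexIIIPoint?: number;                   // 1 to 8, if an Annex III system
  harmonisedStandardsFullyApplied: boolean; // harmonised standards exist and are fully applied
  preferNotifiedBody?: boolean;             // provider's choice for biometrics (point 1)
}): ConformityRoute {
  if (sys.annexI) return "sectoral";
  if (sys.annexIIIPoint === 1) {
    // Biometrics: Annex VII is mandatory where harmonised standards do not
    // exist or are not fully applied; otherwise the provider may choose.
    if (!sys.harmonisedStandardsFullyApplied) return "annex-vii";
    return sys.preferNotifiedBody ? "annex-vii" : "annex-vi";
  }
  return "annex-vi"; // Annex III points 2 to 8: internal control per Annex VI
}
```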
Registration in EU database — before placing on market
Provider

Before placing a high-risk AI system on the market, providers must register it in the EU-wide public database established under Art. 71. Deployers of certain Annex III systems must also register their own use.

  • Register system with provider identity, system description, intended purpose
  • Include EU Declaration of Conformity reference
  • Update registration if system undergoes substantial modification
  • Deployers that are public authorities (or persons acting on their behalf) using Annex III systems must separately register that use (Art. 49(3))
Deployer obligations — the primary deployer compliance article
Deployer

Art. 26 is the central deployer obligation for high-risk AI. It covers the full operational lifecycle: from using the system per instructions, through oversight, data quality, logs, worker notification, and incident reporting. The FRIA obligation is a standalone requirement under Art. 27, separate from Art. 26. It applies to public law bodies, private entities providing public services, and deployers using systems listed in Annex III pts. 5(b)/(c). Art. 26(9) is distinct; it concerns using Art. 13 provider information to satisfy DPIA obligations under GDPR Art. 35.

  • 26(1) — Use the system in accordance with the provider's instructions for use
  • 26(2) — Assign human oversight to natural persons with the necessary competence, training, and authority
  • 26(4) — To the extent the deployer controls the input data, ensure it is relevant and sufficiently representative for the system's intended purpose
  • 26(5) — Monitor operation; inform the provider per Art. 72 if the system presents a risk; immediately notify the provider (then importer/distributor and market surveillance authorities) of serious incidents
  • 26(6) — Retain automatically generated logs under the deployer's control for at least 6 months, unless applicable Union or national law provides otherwise
  • 26(7) — Inform workers and their representatives before deploying AI that monitors or evaluates them
  • 26(8) — Public authority deployers: verify the system is registered in the EU database (Art. 71) before use
  • Art. 27 — FRIA: public law bodies, private entities providing public services, and Annex III pts. 5(b)/(c) deployers must conduct a Fundamental Rights Impact Assessment before deployment
Fundamental rights impact assessment (FRIA) — before deployment
Deployer

Before deploying a high-risk AI system, certain deployers must assess the impact on fundamental rights. This applies to: public law bodies; private entities providing public services; and deployers of Annex III pts. 5(b) (credit scoring) and 5(c) (life/health insurance pricing). It does not apply to critical infrastructure (Annex III pt. 2).

  • 27(1)(a) — Describe the deployer's processes in which the system will be used per its intended purpose
  • 27(1)(b) — Describe the period and frequency of intended use
  • 27(1)(c) — Identify categories of natural persons and groups likely to be affected
  • 27(1)(d) — Identify specific risks of harm to those persons or groups (using Art. 13 provider information)
  • 27(1)(e) — Describe human oversight measures in accordance with the instructions for use
  • 27(1)(f) — Set out measures to be taken if those risks materialise, including internal governance and complaint mechanisms
  • 27(3) — Notify the market surveillance authority of the results using the AI Office template
  • If a GDPR Art. 35 DPIA has been conducted, the FRIA complements it; it does not replace it (Art. 27(4))
  • Applies to the first use of the system; prior FRIAs for similar use cases may be relied on (Art. 27(2))
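The scope rule above reduces to a short predicate. A sketch under the assumptions of this card; identifiers are hypothetical, and the check runs per system.

```ts
// Art. 27(1) scope, per high-risk system (sketch; names are illustrative).
function friaRequired(d: {
  publicLawBody: boolean;           // body governed by public law
  providesPublicServices: boolean;  // private entity providing public services
  annexIIIPoint?: string;           // the system's Annex III point, e.g. "4", "5(b)"
}): boolean {
  if (d.annexIIIPoint === "2") return false; // critical infrastructure is excluded
  if (d.annexIIIPoint === "5(b)" || d.annexIIIPoint === "5(c)") return true;
  return d.publicLawBody || d.providesPublicServices;
}
```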
Serious incident reporting — deployer reporting channel
Deployer

Where a deployer identifies a serious incident (defined in Art. 3(49) as an incident or malfunctioning leading to death or serious harm to health, serious and irreversible disruption of critical infrastructure, infringement of Union-law obligations protecting fundamental rights, or serious harm to property or the environment), it must report without undue delay through the channel below.

  • Immediately notify the provider first; then the importer/distributor and the relevant market surveillance authorities (Art. 26(5) / Art. 73)
  • 73(2) — General deadline: without undue delay, and in any event within 15 days of becoming aware
  • 73(4) — Shortened: 10 days if the incident may have caused death
  • 73(3) — Emergency: 2 days for widespread infringements or serious, irreversible critical-infrastructure disruption
  • 73(5) — An initial incomplete report is permitted; supplemental information may follow
  • Cooperate with post-incident investigation and market surveillance authorities
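The three outer limits can be computed mechanically. A minimal sketch, assuming a simple three-way incident taxonomy; the day counts come from Art. 73(2)–(4), everything else is illustrative.

```ts
// Art. 73 outer reporting limits (sketch). "Without undue delay" still governs;
// these dates are only the latest permissible.
type IncidentKind =
  | "death"                        // may have caused a death: 10 days (Art. 73(4))
  | "infrastructure-or-widespread" // widespread infringement or critical-infrastructure disruption: 2 days (Art. 73(3))
  | "other-serious";               // any other serious incident: 15 days (Art. 73(2))

function reportingDeadline(awareness: Date, kind: IncidentKind): Date {
  const days =
    kind === "infrastructure-or-widespread" ? 2 :
    kind === "death" ? 10 : 15;
  const deadline = new Date(awareness);
  deadline.setDate(deadline.getDate() + days);
  return deadline;
}
```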
Limited Risk — Transparency Obligations Art. 50 · Applies Aug 2026
Transparency obligations — chatbots, deepfakes, synthetic content
Both

Art. 50 imposes transparency disclosure duties on providers and deployers of AI systems that interact with humans, generate synthetic audio/video/image/text, or perform emotion recognition or biometric categorisation. Note who carries each duty: design and marking duties sit with providers, disclosure duties with deployers.

  • 50(1) — Providers of AI systems that interact directly with persons (chatbots): design the system so persons are informed they are interacting with an AI system, unless obvious from context
  • 50(2) — Providers of AI generating synthetic audio/image/video/text: ensure outputs are marked in a machine-readable format and detectable as AI-generated (watermarking / metadata)
  • 50(3) — Deployers of emotion recognition or biometric categorisation systems: inform the persons exposed in advance
  • 50(4) — Deployers of deepfakes or AI-generated text published to inform the public: disclose that the content is AI-generated, with carve-outs for evidently artistic works and for text under human editorial responsibility
  • 50(5) — Disclosure must be clear, distinguishable, accessible, and provided at the latest at the time of first interaction or exposure
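The who-does-what split above lends itself to a lookup table. A sketch with paraphrased duties; the wording is this table's own, not the Regulation's.

```ts
// Art. 50 disclosure duties, paraphrased (sketch).
const art50Duties: Record<string, { role: "Provider" | "Deployer"; duty: string }> = {
  "50(1)": { role: "Provider", duty: "Design interactive systems so persons know they face an AI, unless obvious" },
  "50(2)": { role: "Provider", duty: "Mark synthetic audio/image/video/text as AI-generated in machine-readable form" },
  "50(3)": { role: "Deployer", duty: "Inform persons exposed to emotion recognition or biometric categorisation" },
  "50(4)": { role: "Deployer", duty: "Disclose deepfakes and AI-generated public-interest text, subject to carve-outs" },
};
```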
Right to explanation (Art. 86) — individual decisions via high-risk AI
Deployer

Any person subject to a decision taken by a deployer on the basis of output from a high-risk Annex III AI system (except pt. 2, critical infrastructure) that produces legal or similarly significant effects on that person has the right to request an explanation of the principal reasons for that decision (Art. 86(1)).

  • Deployers must be able to provide clear and meaningful explanations of the role of the AI system in the decision-making procedure and the main elements of the decision taken
  • Applies only to the extent the right is not already provided for under other Union law (Art. 86(3))
  • Overlaps substantially with GDPR Art. 22 (automated decision-making); consider a unified approach to explainability
GPAI Models Arts. 51–56 · Applies Aug 2025
GPAI provider obligations — documentation, copyright, transparency
Provider

All GPAI model providers (e.g. LLM developers, multimodal model developers) must comply with Art. 53, regardless of whether their model carries systemic risk. These obligations apply from 2 August 2025.

  • 53(1)(a) — Draw up and maintain technical documentation (Annex XI); provide it to the AI Office and national competent authorities upon request
  • 53(1)(b) — Provide information and documentation (Annex XII) to downstream providers integrating the GPAI model into their systems
  • 53(1)(c) — Put in place a policy to comply with EU copyright law, including the text-and-data-mining opt-out (Dir. (EU) 2019/790, Art. 4(3))
  • 53(1)(d) — Publish a sufficiently detailed summary of the content used for training, using the AI Office template
  • Open-source GPAI models with publicly available weights: exempt from 53(1)(a)–(b) unless the model carries systemic risk (Art. 53(2))
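The open-source carve-out in the last bullet is easy to mis-remember, so here it is as a function. A sketch under the assumptions of this card; identifiers are illustrative.

```ts
// Which Art. 53(1) duties apply, given the Art. 53(2) open-source exemption (sketch).
function gpaiDuties(model: { openSourceWithPublicWeights: boolean; systemicRisk: boolean }): string[] {
  const all = ["53(1)(a)", "53(1)(b)", "53(1)(c)", "53(1)(d)"];
  if (model.openSourceWithPublicWeights && !model.systemicRisk) {
    // The exemption covers the documentation duties (a) and (b); the copyright
    // policy (c) and the training-content summary (d) still apply.
    return ["53(1)(c)", "53(1)(d)"];
  }
  return all;
}
```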
Systemic risk — additional duties for large-scale GPAI providers
Provider

GPAI models with systemic risk (presumed where cumulative training compute exceeds 10²⁵ FLOPs, Art. 51(2), or following Commission designation) carry enhanced obligations under Art. 55: model evaluation including adversarial testing (a), assessment and mitigation of systemic risks (b), serious incident reporting (c), and cybersecurity protection (d).

  • 55(1)(a) — Perform and document adversarial testing (red-teaming) before and after release
  • 55(1)(b) — Assess and mitigate possible systemic risks at Union level, including their sources, arising from development, market placement, or use of the model
  • 55(1)(c) — Track, document, and report without undue delay to the AI Office (and where relevant national authorities) relevant information about serious incidents and corrective measures
  • 55(1)(d) — Ensure adequate cybersecurity protection for the GPAI model and its physical infrastructure
  • Threshold (10²⁵ FLOPs) may be revised by Commission via delegated act — monitor AI Office communications
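The presumption itself is a single comparison. A sketch; note the threshold is only a presumption, and Commission designation under Art. 51(1)(b) is an independent route.

```ts
// Art. 51(2) presumption: cumulative training compute greater than 10^25 FLOPs (sketch).
const SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25;

function presumedSystemicRisk(trainingFlops: number): boolean {
  return trainingFlops > SYSTEMIC_RISK_FLOP_THRESHOLD;
}
```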
GPAI context for deployers — AI literacy and due diligence
Deployer

Deployers using GPAI-powered tools (e.g. off-the-shelf LLM services, copilots, AI writing tools) are not directly regulated by Arts. 51–56: those apply to the GPAI model provider. However, deployers must understand what they are using under Art. 4 literacy obligations, and remain subject to all applicable high-risk and transparency rules where the GPAI system is integrated into a regulated use case.

  • Understand whether the AI tool you deploy is built on a GPAI model
  • If the use case falls in Annex III (e.g. employment screening via LLM), high-risk deployer obligations apply fully
  • Review provider's instructions for use and technical documentation provided under Art. 53(1)(b)
  • If you substantially modify a GPAI-based system, you may become a provider under Art. 25

Risk Classification Tool

Answer the questions below to determine which risk tier applies to your AI system and which obligations you face. Results include applicable articles and a direct link to the compliance checklist. A condensed code sketch of the decision flow appears after the questions.

Question 1 of 9 · 0% complete
Before we start — what is your organisation doing with this AI system?

Select the description that fits best. If you do both, pick the one most relevant to what you want to assess right now.

We build or develop AI: You train, fine-tune, or assemble an AI system and put it on the market or into service under your own name. Examples: a startup shipping an AI-powered HR screening tool; a bank building a proprietary credit-scoring model; an AI lab releasing an LLM.
We use AI built by someone else: You buy, license, or access an AI system and use it in your own operations. Examples: an HR department using an off-the-shelf CV screening tool; a hospital deploying a vendor's diagnostic AI; a retailer using a third-party chatbot.
We build AI for our own use: Your organisation both develops and deploys the same AI system internally. Examples: a large bank that builds and uses its own fraud-detection model; a hospital that develops and runs its own diagnostic AI internally.
Skip — I already know my role: Go straight to the risk classification questions.

Art. 3(3) defines provider; Art. 3(4) defines deployer — Regulation (EU) 2024/1689

What is your organisation's primary role with respect to this AI system?
If you both develop and deploy AI internally, select "Both". Role affects which obligations apply.
Provider: We develop, train, or place the AI system on the market under our name or brand
Deployer: We use a third-party AI system under our own authority in a professional context
Both / Hybrid: We develop AI internally and also deploy it within our own organisation or to clients
Does the AI system perform any of the following?
Any "yes" triggers the prohibition in Art. 5. If none apply, proceed to assess risk tier.
Yes — one or more prohibited practices: Subliminal manipulation, social scoring, real-time RBI in public spaces, emotion recognition in workplaces/schools, biometric categorisation for sensitive attributes, facial scraping
No — none of the above: The system does not fall within Art. 5 prohibited categories
Is the AI system a General-Purpose AI (GPAI) model?
GPAI models are trained at scale on broad data and can perform a wide range of tasks — e.g. LLMs, multimodal foundation models. If you are deploying a tool built on a GPAI model (like an off-the-shelf LLM service), select "We use a GPAI-based tool (downstream deployer)".
Yes — we provide a GPAI model: We train or develop the underlying foundation/LLM model itself
We use a GPAI-based tool (downstream deployer): We deploy a product built on top of a GPAI model (e.g. OpenAI API, Claude API, Azure OpenAI)
No — not GPAI: The AI system is purpose-built and not a general-purpose model
Does the AI system fall within one of the high-risk categories in Annex III?
Annex III lists 8 sectors where AI systems used for certain purposes are deemed high-risk.
Biometric identification or categorisation of persons: Remote biometric identification, categorisation, or emotion recognition (excl. purely personal use)
Critical infrastructure management: Digital or physical infrastructure (water, energy, transport, etc.) safety components
Education / Employment / Social services / Law enforcement / Justice / Democracy: Includes CV screening, credit scoring, benefit eligibility, border control, administration of justice
None of the above: The system is not used in an Annex III context
Does the AI system interact directly with humans, generate synthetic content, or perform emotional/biometric recognition?
If yes, Art. 50 transparency obligations apply regardless of risk tier. This catches chatbots, deepfake generators, AI-written content tools, and more.
Yes — chatbot or conversational AI: The system interacts with persons using natural language or voice
Yes — generates synthetic audio, video, images or text: Including deepfakes, AI-written articles, synthetic voices, AI-generated images
Yes — emotion recognition or biometric categorisation: Outputs relate to emotional state or category of a natural person
No — none of the above: System operates in the background or on non-personal data
Is the deploying organisation a body governed by public law or a private entity providing public services, or does it deploy Annex III pt. 5(b) (credit scoring) or 5(c) (life/health insurance pricing) systems?
If yes, a Fundamental Rights Impact Assessment (FRIA) is required before deployment (Art. 27).
Yes — public-law body, public-service provider, or Annex III 5(b)/(c): FRIA required before deploying a high-risk AI system
No — outside the Art. 27 scope: FRIA not mandated (though recommended as best practice)
For GPAI providers: does your model's cumulative training compute exceed 10²⁵ FLOPs, or has the Commission designated it as having systemic risk?
This threshold triggers the enhanced Art. 55 obligations, including model evaluation and red-teaming, systemic-risk mitigation, incident reporting, and cybersecurity. If you are not a GPAI provider, select N/A.
Yes — above the systemic-risk threshold: Training compute > 10²⁵ FLOPs, or Commission designation (Art. 51)
No — below threshold: Standard GPAI obligations under Art. 53 only
N/A — not a GPAI provider: Skip to result
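As promised above, a condensed sketch of the questionnaire's decision flow. Field and tier names are this sketch's own; Art. 6(3) carve-outs and role nuances are deliberately omitted.

```ts
// Condensed risk-tier decision flow (sketch; simplifications noted above).
type Tier = "prohibited" | "gpai" | "high-risk" | "limited-transparency" | "minimal";

function classify(sys: {
  prohibitedPractice: boolean; // any Art. 5 category
  gpaiProvider: boolean;       // you provide the GPAI model itself (Arts. 51-56)
  annexIII: boolean;           // used in an Annex III context (Art. 6(2))
  art50Trigger: boolean;       // chatbot, synthetic content, emotion/biometric recognition
}): Tier[] {
  if (sys.prohibitedPractice) return ["prohibited"]; // banned outright; stop here
  const tiers: Tier[] = [];
  if (sys.gpaiProvider) tiers.push("gpai");
  if (sys.annexIII) tiers.push("high-risk");
  if (sys.art50Trigger) tiers.push("limited-transparency"); // Art. 50 applies across tiers
  if (tiers.length === 0) tiers.push("minimal"); // Art. 4 AI literacy still applies
  return tiers;
}
```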

Compliance Checklist

Track your progress against key AI Act obligations. Check items as you complete them. Filtered by role below.


Implementation Timeline

Key dates for the AI Act's phased applicability. At the time of writing, you are in the run-up to general applicability (2 Aug 2026). Annex I product obligations apply from 2 Aug 2027.

⚠ LEGISLATIVE ALERT: Digital Omnibus Proposal (19 November 2025)

The European Commission has proposed the Digital Omnibus package, which — if adopted — would significantly delay the applicability of high-risk AI obligations under Annex III and Annex I:

  • Annex III systems (employment, biometrics, credit, law enforcement, etc.): delayed from 2 Aug 2026 to 6 months after Commission confirms standards are available — backstop date 2 Dec 2027
  • Annex I systems (safety components of regulated products): delayed from 2 Aug 2027 to 12 months after standard confirmation — backstop date 2 Aug 2028
  • Art. 50(2) machine-readable marking: 6-month grace period for systems already on market before 2 Aug 2026 (until 2 Feb 2027)
  • Art. 4 AI literacy: proposed shift from direct obligation on providers/deployers to "encouragement" by Commission and Member States only

This is a proposal only — NOT yet law. It requires approval from both the European Parliament and the Council. Formal adoption is expected later in 2026. If not adopted before August 2026, all original deadlines remain binding. Prudent compliance planning should treat 2 August 2026 as the operative deadline. Source: COM(2025) Digital Omnibus, 19 Nov 2025.

TODAY — YOU ARE HERE
12 JULY 2024
Publication in Official Journal

Reg. (EU) 2024/1689 published in OJ L of 12 July 2024. The regulation entered into force on the 20th day following publication.

Art. 113 — Entry into force Done
1 AUGUST 2024
Entry into Force

The regulation entered into force. No obligations apply yet — applicability is phased. The AI Office was established under the Commission during this period.

Art. 113(1) Done Both roles
2 FEBRUARY 2025
Prohibited Practices + AI Literacy Applicable

Chapter I (definitions), Chapter II (prohibited practices under Art. 5), and Art. 4 (AI literacy) became applicable 6 months after entry into force. Any AI system falling under Art. 5 must have been withdrawn or adapted before this date. AI literacy measures must be in place.

Art. 5 — Prohibited Art. 4 — AI Literacy Chapter I Definitions NOW APPLICABLE Both roles
2 AUGUST 2025
GPAI Chapter Applicable — Codes of Practice

Chapter V (Arts. 51–56) on GPAI models became applicable. GPAI model providers must comply with Art. 53 (documentation, copyright policy, training data summary). Systemic-risk providers must additionally comply with Art. 55. The AI Office is developing codes of practice for GPAI compliance — participation may be used as compliance evidence.

Arts. 51–56 — GPAI Art. 53 — Provider duties Art. 55 — Systemic risk RECENTLY APPLICABLE Provider (GPAI)
NOW
You Are Here — Preparation Phase

High-risk AI system providers should be completing conformity assessments, QMS documentation, technical documentation, and EU database registration. Deployers should be putting in place FRIA frameworks, human oversight policies, and log retention procedures. Note: Annex I product obligations apply from 2 August 2027.

COUNTDOWN TO AUG 2026 Both roles — prepare now
2 AUGUST 2026
General Applicability — High-Risk AI (Annex III), Transparency, Market Surveillance

The majority of the AI Act's obligations become applicable. This covers: all high-risk AI systems listed in Annex III (employment, education, biometrics, critical infrastructure safety, social benefits, law enforcement, justice, democracy); Art. 50 transparency; Art. 73 incident reporting; Art. 86 right to explanation; Art. 26 deployer obligations; conformity assessments, CE marking, and EU database registration for providers. This is the main compliance deadline for most organisations.

Arts. 9–21 — Provider high-risk Art. 26 — Deployer duties Art. 43 — Conformity Art. 47–49 — DoC, CE, Registration Art. 50 — Transparency Art. 73 — Incidents Art. 86 — Explanation UPCOMING Both roles
2 AUGUST 2027
Annex I Products (Art. 6(1)) + GPAI Legacy Models (Art. 111(3))

Two separate deadlines fall on 2 August 2027. First, Art. 6(1) high-risk systems that are safety components of products already subject to EU harmonisation legislation in Annex I (machinery, medical devices, railway, aviation, vehicles, toys, etc.) become subject to AI Act obligations — 36 months after entry into force. These products were already subject to sectoral conformity procedures; the AI Act adds an additional layer. Second, under Art. 111(3), GPAI model providers whose models were placed on the market before 2 August 2025 must take the necessary steps to comply with GPAI obligations (Arts. 53 and, where applicable, 55) by this date.

Art. 6(1) — Annex I products Art. 111(3) — GPAI legacy models FUTURE Provider (Annex I + GPAI)
31 DEC 2030 / 2 AUG 2030
Legacy Systems Transitional Regime — Arts. 111(1)–(2)

Two transitional regimes remain. Art. 111(1): AI system components embedded in Annex X large-scale EU IT systems (e.g. SIS, Eurodac, VIS) that were placed on the market before 2 August 2027 must comply with the AI Act by 31 December 2030. Art. 111(2): all other high-risk AI systems placed on the market or put into service before 2 August 2026 are only subject to the Regulation if they undergo significant changes in their design; however, providers and deployers of systems intended for use by public authorities must take the necessary steps to comply by 2 August 2030 regardless of design changes. Note: Art. 111(3) (GPAI legacy models) has an earlier deadline of 2 August 2027 — see that entry above.

Arts. 111(1)–(2) — Legacy transitional FUTURE Provider + Deployer (public authority)
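To close, the phased timeline above condensed into a date lookup. A sketch with paraphrased milestone labels; the dates are those listed in this timeline.

```ts
// Phased applicability milestones (sketch; labels paraphrased from the timeline above).
const milestones: Array<[string, string]> = [
  ["2025-02-02", "Prohibitions (Art. 5) and AI literacy (Art. 4)"],
  ["2025-08-02", "GPAI chapter (Arts. 51-56)"],
  ["2026-08-02", "General applicability: Annex III high-risk, Art. 50, Arts. 26/73/86"],
  ["2027-08-02", "Annex I products (Art. 6(1)) and GPAI legacy models (Art. 111(3))"],
  ["2030-08-02", "Legacy high-risk systems used by public authorities (Art. 111(2))"],
  ["2030-12-31", "Annex X large-scale EU IT systems (Art. 111(1))"],
];

function applicableMilestones(today: Date): string[] {
  return milestones
    .filter(([date]) => new Date(date) <= today)
    .map(([, label]) => label);
}
```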