Threading the labyrinth of Regulation (EU) 2024/1689 — for Providers & Deployers
A complete interactive reference tool for organisations building, placing on the market, or putting AI systems into service under Regulation (EU) 2024/1689 — covering all risk tiers, applicability dates, and role-specific obligations.
Used to organise compliance assessments and included in exported reports.
Click any tier to jump to its obligations — or use the Obligations tab for the full breakdown.
Eight categories of AI practices that are banned outright, with no transition period. Applies to both providers and deployers from 2 February 2025. Only the real-time remote biometric identification ban carries narrow, enumerated law-enforcement exceptions.
AI systems in Annex III sectors or safety components of Annex I products. Heavy obligations for providers (conformity) and deployers (oversight, FRIA, logs).
AI systems interacting with persons (chatbots), generating synthetic content, or used for emotion recognition or biometric categorisation. Disclosure obligations apply.
All AI systems must comply with Art. 4 AI literacy. GPAI models (LLMs, foundation models) carry their own obligations for providers. Deployers using GPAI tools see limited additional duties.
All obligations under Reg. (EU) 2024/1689, organised by risk tier and role. Click any card to expand. Click article numbers to view text.
Providers and deployers shall take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and all other persons dealing with the operation and use of AI systems on their behalf. This includes technical knowledge, sectoral expertise, and context of use.
Art. 5 prohibits placing on the market, putting into service, or using the following AI systems. Most prohibitions are absolute; the real-time remote biometric identification ban (Art. 5(1)(h)) is subject only to narrowly defined law-enforcement exceptions and safeguards. Effective from 2 February 2025.
Providers must establish, implement, document, and maintain a risk management system as a continuous, iterative process planned and run throughout the entire lifecycle of the high-risk AI system, with regular systematic review and updating.
High-risk AI systems that involve training AI models with data must be developed using training, validation, and testing data sets that meet quality criteria appropriate for the system's intended purpose. Providers bear the core data governance obligation.
Providers must draw up technical documentation in accordance with Annex IV before placing the high-risk AI system on the market or putting it into service. Documentation must be kept up to date.
Providers must ensure that high-risk AI systems have automatic logging capabilities that allow for ex post monitoring. This is a design obligation: logging must be built-in, not retrofitted.
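As an illustration only (the Regulation prescribes no particular log format), a "built-in" logging capability might look like an append-only event recorder wired directly into the inference path rather than bolted on afterwards. All names here are hypothetical:

```python
import time
import uuid

class InferenceLogger:
    """Hypothetical append-only event log sketching the Art. 12 design idea:
    every use of the system produces a structured, timestamped record."""

    def __init__(self):
        # In practice this would be an append-only store with retention controls.
        self.entries = []

    def record(self, input_ref, output, model_version):
        entry = {
            "event_id": str(uuid.uuid4()),
            "timestamp": time.time(),
            "input_ref": input_ref,
            "output": output,
            "model_version": model_version,
        }
        self.entries.append(entry)
        return entry

def predict_with_logging(model_fn, x, logger, model_version="v1.0"):
    """Logging happens inside the call path: there is no way to get a
    prediction from this entry point without producing a log entry."""
    y = model_fn(x)
    logger.record(input_ref=repr(x), output=y, model_version=model_version)
    return y
```

The design point is that the only way to obtain an output is through a path that records the event, which is what distinguishes built-in logging from retrofitted auditing.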
Providers must ensure high-risk AI systems are designed and developed to be sufficiently transparent, enabling deployers to interpret the system's output and use it appropriately. This includes a written instructions-for-use document.
Providers must design and build high-risk AI systems so that they can be effectively overseen by humans. The system itself must facilitate, not hinder, oversight. This is a design obligation distinct from the deployer's operational oversight duty.
High-risk AI systems must achieve appropriate levels of accuracy, robustness, and cybersecurity throughout their lifecycle, with particular attention to the risks of adversarial attacks, data poisoning, model evasion, and confidentiality breaches.
Providers must put in place a quality management system (QMS) before placing a high-risk AI system on the market. The QMS must be documented and cover the entire product lifecycle.
Before placing on the market, providers must carry out a conformity assessment. The procedure depends on the category. For Annex III systems (biometrics, employment, education, etc.): internal control per Annex VI is the standard route for points 2–8; for biometric systems (point 1), the provider may choose between Annex VI (internal) and Annex VII (notified body) only where harmonised standards or common specifications have been applied in full — otherwise the Annex VII notified-body route is mandatory. For Annex I systems: follow the applicable sectoral legislation.
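The routing described above can be sketched as a small lookup. This is a simplified illustration, not legal advice; the point labels and the harmonised-standards condition follow Art. 43(1) as summarised here:

```python
def conformity_route(annex_iii_point: int, harmonised_standards_applied: bool) -> str:
    """Simplified sketch of conformity-assessment routing for
    Annex III high-risk AI systems (points 1-8)."""
    if annex_iii_point == 1:
        # Biometrics: the provider's choice between Annex VI and Annex VII
        # exists only where harmonised standards or common specifications
        # were applied in full; otherwise a notified body is required.
        if harmonised_standards_applied:
            return "Annex VI or Annex VII (provider's choice)"
        return "Annex VII (notified body)"
    if 2 <= annex_iii_point <= 8:
        return "Annex VI (internal control)"
    raise ValueError("Annex III defines points 1-8 only")
```

For example, an employment-screening system (point 4) routes to internal control, while a biometric system developed without fully applied harmonised standards must go to a notified body.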
Before placing a high-risk AI system on the market, providers must register it in the EU-wide public database established under Art. 71. Deployers of certain Annex III systems must also register their own use.
Art. 26 is the central deployer obligation for high-risk AI. It covers the full operational lifecycle: from using the system per instructions, through oversight, data quality, logs, worker notification, and incident reporting. The FRIA obligation is a standalone requirement under Art. 27, separate from Art. 26. It applies to public law bodies, private entities providing public services, and deployers using systems listed in Annex III pts. 5(b)/(c). Art. 26(9) is distinct; it concerns using Art. 13 provider information to satisfy DPIA obligations under GDPR Art. 35.
Before deploying a high-risk AI system, certain deployers must assess the impact on fundamental rights. This applies to: public law bodies; private entities providing public services; and deployers of Annex III pts. 5(b) (credit scoring) and 5(c) (life/health insurance pricing). It does not apply to critical infrastructure (Annex III pt. 2).
Where a deployer identifies a serious incident (defined in Art. 3(49) as one resulting, directly or indirectly, in death or serious harm to a person's health, serious and irreversible disruption of critical infrastructure, infringement of Union-law obligations intended to protect fundamental rights, or serious harm to property or the environment), the deployer must notify the relevant national competent authority without undue delay.
Art. 50 imposes transparency disclosure duties on providers and deployers of AI systems that interact with humans, generate synthetic audio/video/image/text, or perform emotion recognition or biometric categorisation.
Any person subject to a decision taken by a deployer on the basis of output from a high-risk AI system, where that decision produces legal effects or similarly significantly affects them, has the right to obtain a clear and meaningful explanation of the role of the AI system in the decision-making procedure and the main elements of the decision taken.
All GPAI model providers (e.g. LLM developers, multimodal model developers) must comply with Art. 53, regardless of whether their model carries systemic risk. These obligations apply from 2 August 2025.
GPAI models with systemic risk (presumed where training compute ≥ 10²⁵ FLOPs per Art. 51(2)) carry enhanced obligations under Art. 55: adversarial testing and systemic risk assessment (a)-(b), serious incident reporting (c), and cybersecurity protection (d).
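To make the 10²⁵ FLOP presumption concrete, here is a back-of-the-envelope check using the common C ≈ 6·N·D training-compute heuristic (an estimation convention from the scaling-law literature, not part of the Regulation); the model sizes below are purely illustrative:

```python
SYSTEMIC_RISK_THRESHOLD_FLOP = 1e25  # presumption threshold per Art. 51(2)

def estimated_training_compute(params: float, tokens: float) -> float:
    """Rough heuristic: total training FLOPs ~ 6 * parameters * training tokens."""
    return 6.0 * params * tokens

def presumed_systemic_risk(params: float, tokens: float) -> bool:
    """True if the compute estimate meets the Art. 51(2) presumption threshold."""
    return estimated_training_compute(params, tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOP

# Illustrative (hypothetical) model scales:
# 70B params on 15T tokens -> ~6.3e24 FLOPs, below the threshold
# 500B params on 20T tokens -> ~6.0e25 FLOPs, above the threshold
```

A model just under the threshold can still be designated as carrying systemic risk by the Commission on other criteria, so the arithmetic is a presumption trigger, not a safe harbour.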
Deployers using GPAI-powered tools (e.g. off-the-shelf LLM services, copilots, AI writing tools) are not directly regulated by Arts. 51–56: those apply to the GPAI model provider. However, deployers must understand what they are using under Art. 4 literacy obligations, and remain subject to all applicable high-risk and transparency rules where the GPAI system is integrated into a regulated use case.
Answer the questions below to determine which risk tier applies to your AI system and which obligations you face. Results include applicable articles and a direct link to the compliance checklist.
Select the description that fits best. If you do both, pick the one most relevant to what you want to assess right now.
Art. 3(3) defines provider; Art. 3(4) defines deployer — Regulation (EU) 2024/1689
Track your progress against key AI Act obligations. Check items as you complete them. Filtered by role below.
Key dates for the AI Act's phased applicability. General applicability arrives on 2 August 2026; Annex I product obligations apply from 2 August 2027.
The European Commission has proposed the Digital Omnibus package, which — if adopted — would significantly delay the applicability of high-risk AI obligations under Annex III and Annex I:
This is a proposal only — NOT yet law. It requires approval from both the European Parliament and the Council. Formal adoption is expected later in 2026. If not adopted before August 2026, all original deadlines remain binding. Prudent compliance planning should treat 2 August 2026 as the operative deadline. Source: COM(2025) Digital Omnibus, 19 Nov 2025.
Reg. (EU) 2024/1689 was published in OJ L of 12 July 2024. The regulation entered into force on the 20th day following publication, i.e. 1 August 2024.
The regulation entered into force. No obligations apply yet — applicability is phased. The AI Office was established under the Commission during this period.
Chapter I (definitions), Chapter II (prohibited practices under Art. 5), and Art. 4 (AI literacy) became applicable 6 months after entry into force. Any AI system falling under Art. 5 must have been withdrawn or adapted before this date. AI literacy measures must be in place.
Chapter V (Arts. 51–56) on GPAI models became applicable. GPAI model providers must comply with Art. 53 (documentation, copyright policy, training data summary). Systemic-risk providers must additionally comply with Art. 55. The AI Office is developing codes of practice for GPAI compliance — participation may be used as compliance evidence.
High-risk AI system providers should be completing conformity assessments, QMS documentation, technical documentation, and EU database registration. Deployers should be putting in place FRIA frameworks, human oversight policies, and log retention procedures. Note: Annex I product obligations apply from 2 August 2027.
The majority of the AI Act's obligations become applicable. This covers: all high-risk AI systems listed in Annex III (employment, education, biometrics, critical infrastructure, essential services, law enforcement, migration, administration of justice and democratic processes); Art. 50 transparency; Art. 73 incident reporting; Art. 86 right to explanation; Art. 26 deployer obligations; conformity assessments, CE marking, and EU database registration for providers. This is the main compliance deadline for most organisations.
Two separate deadlines fall on 2 August 2027. First, Art. 6(1) high-risk systems that are safety components of products already subject to EU harmonisation legislation in Annex I (machinery, medical devices, railway, aviation, vehicles, toys, etc.) become subject to AI Act obligations — 36 months after entry into force. These products were already subject to sectoral conformity procedures; the AI Act adds an additional layer. Second, under Art. 111(3), GPAI model providers whose models were placed on the market before 2 August 2025 must take the necessary steps to comply with GPAI obligations (Arts. 53 and, where applicable, 55) by this date.
Two transitional regimes remain. Art. 111(1): AI system components embedded in Annex X large-scale EU IT systems (e.g. SIS, Eurodac, VIS) that were placed on the market before 2 August 2027 must comply with the AI Act by 31 December 2030. Art. 111(2): all other high-risk AI systems placed on the market or put into service before 2 August 2026 are only subject to the Regulation if they undergo significant changes in their design; however, providers and deployers of systems intended for use by public authorities must take the necessary steps to comply by 2 August 2030 regardless of design changes. Note: Art. 111(3) (GPAI legacy models) has an earlier deadline of 2 August 2027 — see that entry above.
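The phased dates in this timeline can be collapsed into a simple lookup. This is a sketch only: it encodes the milestone dates as stated above and deliberately ignores the Art. 111 transitional regimes.

```python
from datetime import date

# Milestones per the timeline above (Art. 111 transitional regimes not modelled)
MILESTONES = [
    (date(2025, 2, 2), "Art. 5 prohibitions and Art. 4 AI literacy"),
    (date(2025, 8, 2), "Chapter V GPAI model obligations (Arts. 51-56)"),
    (date(2026, 8, 2), "General applicability: Annex III high-risk, Arts. 26, 50, 73, 86"),
    (date(2027, 8, 2), "Annex I product-embedded high-risk systems; legacy GPAI models"),
]

def applicable_phases(on: date) -> list[str]:
    """Return the obligation phases already applicable on a given date."""
    return [label for start, label in MILESTONES if on >= start]
```

For instance, a check run in mid-2026 would return only the prohibition/literacy and GPAI phases, while one run after August 2027 returns all four.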