The EU AI Act's structure, risk tiers, timeline, and penalties in one place — a reference for developers and small teams. Updated April 2026 with the Digital Omnibus trilogue state.
The practical articles on this site link here for context. If you are looking for the deployer-specific operational detail as of April 2026, start with deployer obligations.
Regulation (EU) 2024/1689. Published 12 July 2024, entered into force 1 August 2024. 113 articles, 13 annexes, full application by 2 August 2027. The AI Act is horizontal legislation — it applies across all sectors and layers on top of existing regulation (GDPR, Medical Device Regulation, financial services directives) rather than replacing it.
Unacceptable risk — banned. Article 5 lists eight practices prohibited since 2 February 2025: subliminal manipulation, exploitation of vulnerabilities (age, disability, socio-economic), social scoring, predictive policing based on profiling alone, untargeted facial recognition scraping, emotion inference at work or school (with narrow medical and safety exceptions), biometric categorisation of sensitive attributes (race, religion, sex life), and real-time remote biometric identification in public spaces (law enforcement only, with narrow pre-authorised exceptions).
High-risk — permitted but heavily regulated. Two pathways qualify a system. Pathway 1 covers safety components of products already regulated under EU harmonisation legislation (Annex I) — medical devices, vehicles, machinery, toys, lifts, aviation equipment, and thirty-plus other directives. Pathway 2 covers standalone systems in eight Annex III domains: biometrics, critical infrastructure, education, employment, essential services (credit scoring, insurance pricing, benefits eligibility, emergency dispatch), law enforcement, migration and border control, and justice and democratic processes. Categories 3, 4, and 5 (education, employment, essential services) are the most likely to catch dev-team products. Article 6(3) offers an escape route for systems that only perform narrow procedural tasks — but the escape is blocked entirely if the system profiles people, with no exceptions.
Limited risk — transparency only. Applies to AI systems interacting directly with people (chatbots, voice assistants), systems generating deepfakes or synthetic content, and emotion-recognition or biometric-categorisation systems. You must inform users they are interacting with an AI and label AI-generated content. No conformity assessment, no registration.
Minimal risk — no specific obligations. Covers most AI: spam filters, entertainment recommendation engines, AI-powered search, inventory management, game AI. Voluntary codes of conduct are encouraged but nothing is mandatory under the Act itself.
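The four tiers above reduce to a rough decision flow. Here is a minimal sketch of that flow: the function name, the boolean flags, and the string labels are illustrative assumptions, not terms the Act defines, and real classification needs legal review.

```python
def classify_risk_tier(
    prohibited_practice: bool,       # any Article 5 practice
    annex_i_safety_component: bool,  # Pathway 1: safety component of a regulated product
    annex_iii_domain: bool,          # Pathway 2: one of the eight Annex III domains
    narrow_procedural_task: bool,    # candidate for the Article 6(3) carve-out
    profiles_people: bool,           # profiling blocks the carve-out entirely
    interacts_or_generates: bool,    # chatbot, deepfake/synthetic content, emotion recognition
) -> str:
    """Simplified sketch of the AI Act's four risk tiers. Not legal advice."""
    if prohibited_practice:
        return "unacceptable"  # banned since 2 February 2025
    if annex_i_safety_component or annex_iii_domain:
        # Art. 6(3) escape: narrow procedural tasks only, and never with profiling
        if annex_iii_domain and narrow_procedural_task and not profiles_people:
            return "limited" if interacts_or_generates else "minimal"
        return "high"
    if interacts_or_generates:
        return "limited"  # transparency obligations only
    return "minimal"
```

A CV-screening tool (employment, so Annex III, and it profiles people) comes out "high" even if its task looks procedural, which matches the no-exceptions profiling rule above.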
The provider is the party that builds or commissions the AI system and places it on the market under its own name. Providers carry the heaviest obligations: quality management, technical documentation, conformity assessment, CE marking, post-market monitoring, and incident reporting.
The deployer is the party that uses the AI system in a professional context. Deployers must follow usage instructions, monitor operation, report risks, retain system-generated logs for at least six months when the system is high-risk, inform affected individuals, and conduct Fundamental Rights Impact Assessments where required under Article 27. Most small teams building on top of AI APIs fall into this category. See deployer obligations for the operational detail.
Requalification. A deployer becomes a provider if they put their own name on the system, make substantial modifications, or change the intended purpose in a way that pushes a non-high-risk system into a high-risk category. If you use the OpenAI or Anthropic API inside your product without rebranding the model, you are a deployer. If you fine-tune it, rebrand it, and ship it as your own system, the line is genuinely fuzzy — consult a lawyer.
Foundation models (GPT, Claude, Gemini, Llama, Mistral) are classified as GPAI models. Obligations have applied since 2 August 2025. Every GPAI provider must maintain technical documentation covering the training process, testing, and evaluation; provide that documentation to downstream providers; comply with EU copyright rules including respecting opt-out reservations; and publish a summary of the training content.
Open-source models are exempt from the documentation obligations unless they are classified as systemic risk. Systemic risk is presumed when training computation exceeds 10²⁵ FLOPs — at that threshold, additional obligations apply: adversarial testing, assessment and mitigation of systemic risks, serious-incident reporting, and cybersecurity measures for both the model and its infrastructure.
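To get a feel for where the 10²⁵ FLOPs presumption bites, you can use the standard dense-transformer estimate of roughly 6 FLOPs per parameter per token. That heuristic is an assumption of this sketch, not part of the Act, which counts cumulative training compute however it was actually spent.

```python
SYSTEMIC_RISK_FLOPS = 1e25  # presumption threshold for GPAI systemic risk


def estimate_training_flops(params: float, tokens: float) -> float:
    """Rough dense-transformer heuristic: ~6 FLOPs per parameter per token."""
    return 6 * params * tokens


def presumed_systemic_risk(training_flops: float) -> bool:
    return training_flops > SYSTEMIC_RISK_FLOPS


# A 70B-parameter model trained on 15T tokens lands below the threshold:
print(presumed_systemic_risk(estimate_training_flops(70e9, 15e12)))   # False (~6.3e24)
# A 400B-parameter model on the same data crosses it:
print(presumed_systemic_risk(estimate_training_flops(400e9, 15e12)))  # True (~3.6e25)
```

The takeaway: today's mid-size open models sit under the presumption, but frontier-scale training runs comfortably exceed it.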
| Date | What applies |
|---|---|
| 2 Feb 2025 | Prohibited practices (Art. 5) + AI literacy (Art. 4) |
| 2 Aug 2025 | GPAI obligations + national authority designations + governance framework |
| 2 Aug 2026 | Transparency (Art. 50) + Annex III high-risk deployer obligations + sandboxes operational |
| 2 Aug 2027 | Full application including Annex I high-risk (product safety) |
| 2 Dec 2027 | Annex III high-risk obligations (if Digital Omnibus passes as expected) |
| 2 Aug 2028 | Annex I high-risk (if Digital Omnibus passes as expected) |
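A sprint-planning sanity check can encode the table directly. This is a sketch under one stated assumption: the two Omnibus rows only move if the trilogue concludes as both institutions have positioned, so the flag defaults to the expected outcome.

```python
from datetime import date

# Application dates from the table above.
MILESTONES = {
    "prohibited_practices": date(2025, 2, 2),
    "gpai_obligations": date(2025, 8, 2),
    "art50_transparency": date(2026, 8, 2),
    "annex_iii_high_risk": date(2026, 8, 2),
    "annex_i_high_risk": date(2027, 8, 2),
}

# Delayed dates, conditional on the Digital Omnibus passing as expected.
OMNIBUS_DELAYS = {
    "annex_iii_high_risk": date(2027, 12, 2),
    "annex_i_high_risk": date(2028, 8, 2),
}


def applies_on(obligation: str, when: date, omnibus_passes: bool = True) -> bool:
    deadline = MILESTONES[obligation]
    if omnibus_passes and obligation in OMNIBUS_DELAYS:
        deadline = OMNIBUS_DELAYS[obligation]
    return when >= deadline


# Art. 50 transparency is not moving; Annex III high-risk would be.
print(applies_on("art50_transparency", date(2026, 9, 1)))   # True
print(applies_on("annex_iii_high_risk", date(2026, 9, 1)))  # False if the Omnibus passes
```

Running the same check with `omnibus_passes=False` flips the Annex III result to `True`, which is exactly the planning risk for teams betting on the delay.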
| Violation | Max fine | % of global annual turnover |
|---|---|---|
| Prohibited practices (Art. 5) | €35M | 7% |
| Deployer and GPAI obligations | €15M | 3% |
| Incorrect information to authorities | €7.5M | 1.5% |
For large companies the higher of the two thresholds applies. For SMEs and startups the lower threshold applies — a meaningful difference for smaller teams, though the obligations themselves are identical.
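The dual-cap rule is simple arithmetic once stated as code. A minimal sketch, using the figures from the table above; the function name and parameters are illustrative, not terminology from the Act.

```python
def max_fine(fixed_cap_eur: float, turnover_pct: float,
             global_turnover_eur: float, is_sme: bool) -> float:
    """Maximum fine: large companies face the higher of the two caps, SMEs the lower."""
    pct_cap = global_turnover_eur * turnover_pct / 100
    return min(fixed_cap_eur, pct_cap) if is_sme else max(fixed_cap_eur, pct_cap)


# Prohibited-practice cap (EUR 35M / 7%) for a large company with EUR 1bn turnover:
print(max_fine(35e6, 7, 1e9, is_sme=False))   # 70000000.0 (7% of turnover wins)
# Same violation for an SME with EUR 10M turnover:
print(max_fine(35e6, 7, 10e6, is_sme=True))   # 700000.0 (the lower cap applies)
```

The two examples show why the SME carve-out matters: the same violation caps at EUR 70M for the large company but EUR 700k for the small one.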
The Digital Omnibus is no longer a proposal. The Council of the EU adopted its negotiating mandate on 13 March 2026, and the European Parliament confirmed its position in plenary on 26 March 2026. Both institutions have converged on delaying Annex III high-risk obligations from 2 August 2026 to 2 December 2027 and Annex I from 2 August 2026 to 2 August 2028. Trilogue agreement is targeted for late April 2026 under the Cypriot presidency. The Art. 50 transparency obligations that apply to deployers — disclosing chatbot interactions, labelling deepfakes, marking AI-generated public-interest text — are not moving.
For the full trilogue state and what it means for a team planning a Q2 sprint, see deployer obligations. For the overlap with GDPR, see GDPR and the AI Act.
- **Deployer obligations:** The April 2026 trilogue reshaped the deadlines. What binds you regardless, what the Omnibus will probably move, and the deployer obligations most dev teams underestimate.
- **GDPR and the AI Act:** The 2026 state of the GDPR/AI Act interplay. What Joint Opinion 1/2026 and C-203/22 tell you about DPIAs, FRIAs, Article 22, Article 10 bias data, and fines.
- **Article 50 transparency:** Article 50 of the AI Act applies on 2 August 2026. C2PA for images and audio, SynthID-Text and the paraphrase gap, the Code of Practice second draft, and a Python starter.