The April 2026 trilogue is reshaping the deadlines. What binds you regardless, what the Omnibus will probably move, and the deployer obligations most dev teams underestimate.
On 26 March 2026, the European Parliament confirmed its negotiating mandate on the Digital Omnibus on AI in plenary. The Council had adopted its general approach thirteen days earlier. Trilogue negotiations are now open, with the Cypriot presidency pushing for political agreement on 28 April 2026 (Addleshaw Goddard briefing; European Parliament Legislative Train).
The important thing for a development team trying to plan a Q2 sprint: the two co-legislators have already converged on the most load-bearing change. Annex III high-risk deployer obligations, originally due on 2 August 2026, will almost certainly move to 2 December 2027. Annex I systems embedded in regulated products move to 2 August 2028. These dates are not in dispute between Parliament and Council. Trilogue is refining wording, not reopening the delay itself.
I think the high-risk delay is a near-certainty. That's a judgement call — not a guarantee — and the risk of betting on it is that if it slips past August 2026 without agreement, the original deadline snaps back. But given the convergence on dates, the bigger planning question is now a different one. What still binds you on 2 August 2026 regardless of the Omnibus?
Three tracks already apply or will apply on 2 August 2026 no matter what the trilogue produces.
AI literacy (Article 4) has been in force since 2 February 2025. Every provider and deployer must ensure that staff dealing with AI systems have "a sufficient level of AI literacy." That includes contractors. There is no prescribed curriculum — the standard is calibrated to the person's role and the system's context. No direct administrative fine is attached to Article 4 yet, but inadequate training that contributes to harm becomes a civil liability question.
AI literacy is the cheapest item on the deployer list. A one-hour session walking your team through what the AI systems you use can and cannot do, their known failure modes, and what to escalate — plus a short document in the team wiki — satisfies the standard for a small product team. The surprise is how many teams have skipped it because "it isn't enforced yet."
Prohibited practices (Article 5) have also been in force since 2 February 2025. Social scoring, exploitative manipulation, predictive policing based on profiling alone, and unjustified workplace emotion recognition are off the table. Most dev teams are not near this line, but the line is live and the penalty ceiling is the highest in the Act.
Article 50 transparency disclosures for deployers still activate on 2 August 2026. You must tell users they are interacting with an AI when that is not obvious from context. If you generate deepfakes of real people, you must disclose it. If you publish AI-generated text about matters of public interest without human editorial review, you must label it. Parliament and Council agree these obligations are not moving.
What is moving is a narrow sub-provision: the Article 50(2) machine-readable marking obligation on providers. The Commission's Omnibus proposal delays it to 2 February 2027. Parliament shortened the delay to 2 November 2026. Council has not tabled a specific alternative date yet. I am not sure which side wins that particular wording fight — plan for the earlier date (2 November 2026) and you are covered either way.
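The two Article 50 layers described above — a human-readable disclosure and a machine-readable marking — can be sketched as a thin wrapper around model output. This is an illustration only: the class and field names are my own, and the technical format for Article 50(2) marking has not been fixed (C2PA-style provenance metadata is one candidate); JSON here just shows the kind of information the marking needs to carry.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class AIGeneratedContent:
    """Wraps model output with the two Article 50 layers:
    a human-readable label and machine-readable provenance.
    Structure is illustrative, not a prescribed format."""
    text: str
    model: str
    generated_at: str  # ISO 8601 timestamp

    def human_label(self) -> str:
        # Art. 50(1)/(4): the user must be able to tell AI is involved.
        return f"[AI-generated content] {self.text}"

    def machine_marking(self) -> str:
        # Art. 50(2): emit the provenance in a machine-readable form.
        # The real standard is still open; JSON is a stand-in.
        return json.dumps({"ai_generated": True, **asdict(self)})

content = AIGeneratedContent(
    text="Quarterly summary drafted by our assistant.",
    model="example-llm-v1",  # hypothetical model name
    generated_at="2026-04-02T10:00:00Z",
)
print(content.human_label())
print(content.machine_marking())
```

The point of keeping both layers on one object is that the disclosure cannot silently drop out of one channel (say, an RSS feed) while surviving in another.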
Here is the practical split, as of early April 2026.
| Obligation | Currently | Likely after trilogue |
|---|---|---|
| AI literacy (Art. 4) | In force since 2 Feb 2025 | No change |
| Prohibited practices (Art. 5) | In force since 2 Feb 2025 | No change |
| Deployer transparency disclosures (Art. 50(1), (3), (4)) | 2 Aug 2026 | No change |
| Provider machine-readable marking (Art. 50(2)) | 2 Aug 2026 | 2 Nov 2026 (Parliament) or 2 Feb 2027 (Commission) |
| Annex III high-risk deployer obligations (Art. 26) | 2 Aug 2026 | 2 Dec 2027 |
| Annex I high-risk embedded systems | 2 Aug 2026 | 2 Aug 2028 |
| Fundamental Rights Impact Assessment (Art. 27) | 2 Aug 2026 | Follows Art. 26 → 2 Dec 2027 |
The practical read: the delay gives most dev teams sixteen extra months on Article 26 and the FRIA. It does not give you any extra time on literacy, prohibited practices, or the transparency disclosures that hit chatbots and AI-generated content.
A common read on the Omnibus is "we're not high-risk, so nothing applies until 2027." That is wrong twice over. Article 50 transparency catches chatbots and deepfake generation regardless of whether your system is classified high-risk. And Article 4 literacy has been in force for over a year — no risk class triggers it. If your product has a chatbot, you have work to do on 2 August 2026.
The AI Act defines a deployer as any natural or legal person using an AI system under its authority in the course of a professional activity (Article 3(4)). If your product calls an AI API and presents the result to a user, you are the deployer. OpenAI, Anthropic, or Google is the provider.
The more interesting question is what happens when there is a cloud platform between you and the model. If you use Azure OpenAI, AWS Bedrock, or Google Vertex AI, you are still the deployer of the AI system as far as the end-user is concerned. The cloud layer is a sub-processor for data protection purposes, but it does not inherit your deployer obligations. You are on the hook for human oversight, monitoring, transparency, and literacy.
You become a provider — with the full, heavier provider obligations — only if you (1) put your own name or trademark on the AI system, (2) make substantial modifications to how it works, or (3) change its intended purpose in a way that makes a non-high-risk system high-risk. Fine-tuning a foundation model on your own data is the grey area here. Whether adding a retrieval-augmented generation layer or a prompt injection mitigation counts as a "substantial modification" is genuinely fuzzy; I have not seen an enforcement action or guidance that draws a clean line on it.
Article 26 lists twelve obligations for deployers of high-risk systems. If the Omnibus passes on schedule, they apply from 2 December 2027. Four of them are the ones small teams consistently underestimate.
Human oversight that actually works. Assigning a person to "supervise the AI" is not enough. The Act requires the overseer to have the competence, training, authority, and support to intervene. In practice that means: the supervisor can override or halt the system in production, has training specific to the system's known failure modes, and is not so overloaded that oversight becomes rubber-stamping. Most small teams have the authority condition covered and fail on training and bandwidth.
Log retention for at least six months. If your AI feature is high-risk, you must retain the system's generated logs for a duration appropriate to the purpose and no less than six months. The catch: "logs generated by the system" is broader than your application logs. For an LLM integration, it covers prompts, completions, guardrail hits, and any post-processing decisions. Building this into your infrastructure before the obligation applies (December 2027 if the Omnibus passes on schedule, August 2026 if it does not) is a real engineering task, especially if your logging default today is "store for 14 days then drop."
Informing affected individuals. Before deploying a high-risk AI system that makes decisions about people, you must tell those people. "Our AI helps with hiring decisions" in your careers page footer is probably not enough. The standard is that the affected individual understands AI is involved in the decision about them. Employment, credit, and essential services deployments get this wrong most often.
The Fundamental Rights Impact Assessment (Article 27). This is separate from your GDPR DPIA. It applies specifically to public bodies, private entities providing public services, and deployers using AI for creditworthiness evaluation, credit scoring, or life/health insurance risk pricing. The FRIA must describe the processes the system feeds into, the groups it affects, the specific harms, and your oversight measures. It complements your DPIA — it does not replace it. The AI Office is expected to publish FRIA template guidance in the second half of 2026.
If your AI feature does any profiling of users in an Annex III domain — employment, credit, essential services, education — the [Article 6(3)](https://artificialintelligenceact.eu/article/6/) escape route is blocked entirely. No matter how "narrow" or "preparatory" the task is, profiling makes the system automatically high-risk. The trap is believing that "our AI just makes suggestions and a human clicks the button" gets you out of it. It doesn't, if the suggestion is driven by profiling.
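The decision rule above fits in a few lines, which is worth writing out because the order of the checks is the whole point: profiling is tested before the narrow-task carve-out, never after. This encodes my reading of Article 6(3) as described in this section, not official guidance.

```python
def annex_iii_high_risk(in_annex_iii_domain: bool,
                        involves_profiling: bool,
                        narrow_or_preparatory: bool) -> bool:
    """Art. 6(3) decision rule as described above (author's reading):
    the narrow/preparatory carve-out never applies once profiling
    of natural persons is involved."""
    if not in_annex_iii_domain:
        return False
    if involves_profiling:
        # Profiling blocks the Art. 6(3) escape route entirely.
        return True
    return not narrow_or_preparatory

# "Our AI just ranks candidates and a human clicks the button":
print(annex_iii_high_risk(in_annex_iii_domain=True,
                          involves_profiling=True,
                          narrow_or_preparatory=True))  # True — still high-risk
```

If your triage spreadsheet checks "is the task narrow?" before "does it profile people?", it will misclassify exactly the systems regulators care about most.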
The AI Act is an EU Regulation, which means Member States do not transpose it the way they transpose a Directive. But they do designate national competent authorities, national sanctions procedures, and — where the Act leaves room — their own supplementary rules.
Italy moved first. Law No. 132/2025 entered into force in October 2025 and added a new criminal offence for the unlawful dissemination of AI-generated or manipulated content that causes harm. Sentences run from one to five years. The Agency for Digital Italy and the National Cybersecurity Agency are the competent authorities. Italy also introduced criminal aggravating circumstances when AI is used to commit other offences.
Spain established the AESIA in September 2023 and presented its Anteproyecto de Ley on AI governance in March 2025, implementing the Act at national level and establishing sanctions and sandbox machinery.
Recent reporting has suggested that only a minority of the 27 Member States have their national designations and competent authorities fully in place for the original August 2026 application date. The Omnibus's high-risk delay partly reflects that uneven readiness.
The point for a dev team: "we'll just watch Brussels" is an incomplete strategy. If you are generating synthetic content that reaches Italian users and the content causes harm, Law 132/2025 is already live, independent of the Omnibus timeline. National divergence is how compliance risk will actually reach small teams first.
Three penalty tiers apply.
| Violation | Max fine | Max % of global turnover |
|---|---|---|
| Using prohibited AI practices (Art. 5) | EUR 35 million | 7% |
| Failing deployer obligations (Art. 26) | EUR 15 million | 3% |
| Supplying incorrect or misleading information to authorities | EUR 7.5 million | 1% |
For large companies the higher of the two thresholds applies. For SMEs and startups the lower threshold applies, which is a material difference — a fifteen-person company facing a 3%-of-turnover fine is in a different universe from one facing EUR 15 million flat. SMEs also get priority access to regulatory sandboxes.
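The higher-of / lower-of asymmetry is easy to get backwards in a risk model, so here is the arithmetic written out. The function is my own sketch of the rule as stated above; the turnover figures in the examples are invented.

```python
def max_fine_eur(tier_flat_eur: float, tier_pct: float,
                 global_turnover_eur: float, is_sme: bool) -> float:
    """Fine ceiling per tier: large companies face the HIGHER of the
    flat amount and the turnover percentage; SMEs and startups face
    the LOWER (author's sketch of the rule described in the text)."""
    pct_amount = tier_pct * global_turnover_eur
    return (min if is_sme else max)(tier_flat_eur, pct_amount)

# Art. 26 tier (EUR 15M / 3%) for a startup with EUR 2M turnover:
print(max_fine_eur(15_000_000, 0.03, 2_000_000, is_sme=True))        # 60000.0
# Same tier for a large company with EUR 10B turnover:
print(max_fine_eur(15_000_000, 0.03, 10_000_000_000, is_sme=False))  # 300000000.0
```

For the hypothetical fifteen-person company, the SME rule caps exposure at 3% of its actual turnover rather than the EUR 15 million flat amount — which is the "different universe" the paragraph above describes.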
The Cypriot presidency is aiming for political agreement on 28 April 2026. Whether the text lands on 28 April or slips into May, the core obligations you can plan around now are the ones not in dispute: enough to build a four-week checklist for a dev team working on top of AI APIs.
The August 2026 deadline already moved — but not the one you thought. High-risk Article 26 obligations are going to December 2027. What actually activates on 2 August 2026 for most dev teams is narrower and more practical: transparency disclosures for chatbots and generated content, on top of the AI literacy obligation that has been binding you since February 2025. Start there. The high-risk work has sixteen extra months only if you are paying attention.