The 2026 state of the GDPR/AI Act interplay. What Joint Opinion 1/2026 and C-203/22 tell you about DPIAs, FRIAs, Article 22, Article 10 bias data, and fines.
A single AI feature can owe duties under two EU regimes simultaneously, and the easy "we comply with GDPR, so the AI Act is covered" answer is the most common trap. One feature, two regimes, five concrete overlap points. The joint EDPB/Commission guidelines that would spell out the interplay have been promised since MLex reported in late 2025 that they would be ready in early 2026, and as of April 2026 they have still not landed. The most authoritative document you already have is EDPB-EDPS Joint Opinion 1/2026 on the Digital Omnibus on AI, published 20 January 2026. It is the closest thing to an authoritative read on the interplay you will have before 2 August 2026.
This article walks the overlap by the places where the regimes actually touch, not by the regimes themselves. If you want the AI Act standalone picture, read EU AI Act: a plain-English overview for dev teams. If you want the deployer operational picture with the April 2026 trilogue news baked in, read EU AI Act deployer obligations by August 2026. What follows assumes you have decided to ship an AI feature that processes EU personal data and need to know where the two regimes touch in practice.
Joint Opinion 1/2026 is formally an opinion on the Digital Omnibus on AI proposal (the European Commission's simplification package for the AI Act). In substance it is the clearest recent statement from the EDPB and the EDPS about how they expect the GDPR and the AI Act to fit together, and every team shipping AI in the EU should read the section on Article 10.
Five positions from the Opinion that matter for the interplay.
Strict necessity for bias-detection training data. AI Act Article 10(5) already carves out a narrow right to process special category data (health, race, political opinion) strictly for bias detection and correction in high-risk systems. The Omnibus proposal would extend this to general-purpose AI models and other AI systems. The EDPB and EDPS support the extension but insist the "strict necessity" standard that currently applies to high-risk systems must apply to all providers and deployers using this carve-out. Translation: if your training pipeline touches special category data for fairness evaluation, the GDPR Article 9 prohibition has not disappeared. You get a narrow statutory exception on condition that you can document strict necessity.
Opposition to the high-risk timeline delay. The Omnibus would postpone Annex III high-risk obligations from 2 August 2026 to 2 December 2027. The EDPB and EDPS oppose the delay in plain language, calling it a threat to fundamental rights and legal certainty, and asking the Commission to keep the August 2026 date or minimise the slip. Our deployer article already said to ignore the Omnibus as a planning basis, and Joint Opinion 1/2026 confirms that the regulators with actual enforcement power agree.
Opposition to deleting the Annex III non-high-risk registration. The Omnibus would delete the obligation for providers to register Annex III systems they conclude are not high-risk under Article 6(3). The EDPB-EDPS Opinion firmly opposes this. Their argument: without the registration, market surveillance authorities and fundamental rights bodies cannot identify which systems are potentially in scope, and the "projected savings are marginal and do not justify the erosion of accountability." Plan to keep registering.
Mandatory DPA involvement in AI regulatory sandboxes. The Omnibus creates EU-level sandboxes for AI innovation but does not explicitly require DPA involvement where personal data is processed. The Opinion calls this a gap and demands mandatory DPA participation plus an advisory role for the EDPB. If you are thinking about joining a sandbox, assume the DPA will be at the table.
Opposition to weakening Article 4 AI literacy. The Omnibus would convert AI Act Article 4 (the AI literacy obligation in force since 2 February 2025) from a mandatory requirement into a soft "encouragement" mechanism. The Opinion rejects this. Article 4 stays mandatory, and our acceptable use policy article treats the literacy floor as binding for a reason.
Joint Opinion 1/2026 is not a binding guideline and does not replace the forthcoming joint EDPB/Commission guidelines on the GDPR/AI Act interplay. But it is the most recent, most detailed public statement from the two EU-level authorities that will be in the room when the joint guidelines are drafted. If you need to make an interplay decision before August 2026, Opinion 1/2026 is the best signal you have.
I think the most consequential line in Joint Opinion 1/2026 is the Article 10(5) strict-necessity position. It is the closest the regulators have come to telling providers exactly how to thread the GDPR/AI Act needle on training data: the narrow statutory carve-out is real, and it does not dissolve GDPR Article 9, and you need documentation. The rest of this article walks the overlap points, and every one of them should be read with the Opinion in the background.
GDPR Article 35 requires a Data Protection Impact Assessment when processing is "likely to result in a high risk" to individuals. AI features trigger it routinely: automated profiling at scale, innovative technology use, decisions with significant effects.
AI Act Article 27 requires a Fundamental Rights Impact Assessment for deployers of high-risk AI systems. The scope is narrower than it sounds: Article 27 attaches to public bodies, private entities providing public services, and deployers using AI for creditworthiness assessment or life/health insurance risk pricing. Most private sector deployers of Annex III systems are not automatically caught.
The two assessments are different animals:
| | DPIA (GDPR Article 35) | FRIA (AI Act Article 27) |
|---|---|---|
| Scope | Personal data processing risks | All fundamental rights (non-discrimination, dignity, workers' rights, child protection) |
| Trigger | High-risk processing of personal data | Deploying a high-risk AI system in the Article 27 categories |
| Conducted by | Data controller | AI system deployer |
| Covers | Necessity, proportionality, safeguards | Affected groups, specific risks, human oversight, mitigation |
Article 27(4) is the clause that matters: if you have already done a GDPR DPIA, the FRIA "shall complement" it. The AI Office has not published the official FRIA template that Article 27 calls for, and as of early 2026 companies are expected to develop their own based on Article 27's listed content.
Do the DPIA first. Then layer the FRIA on top of the same artefact. The FRIA inherits the DPIA's data-processing analysis and adds the fundamental-rights-beyond-data-protection layer: affected groups, discrimination risks, worker impact, child safety, oversight arrangements. Treating them as two parallel documents doubles the work and makes consistency impossible. Treating them as one layered document is what Article 27(4) actually says to do.
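To make the layering concrete, here is a minimal sketch of what a single layered artefact could look like, expressed as TypeScript types. The field names are illustrative paraphrases of the Article 35(7) and Article 27(1) content lists, not an official template (remember, the AI Office has not published one).

```typescript
// Hypothetical shape of a layered DPIA + FRIA artefact. Field names
// paraphrase GDPR Art. 35(7) and AI Act Art. 27(1); not an official template.
interface Dpia {
  processingDescription: string;    // Art. 35(7)(a): systematic description
  necessityProportionality: string; // Art. 35(7)(b)
  risksToDataSubjects: string[];    // Art. 35(7)(c)
  safeguards: string[];             // Art. 35(7)(d)
}

interface FriaLayer {
  deploymentProcessAndPurpose: string; // Art. 27(1): how and how often used
  affectedGroups: string[];            // categories of persons likely affected
  specificRisksOfHarm: string[];       // risks beyond data protection
  humanOversightMeasures: string[];    // who oversees, and how
  mitigationAndGovernance: string[];   // what happens when risks materialise
}

// One artefact, two layers: the FRIA references the DPIA, it does not repeat it.
interface LayeredAssessment {
  dpia: Dpia;
  fria?: FriaLayer; // present only for the Article 27 deployer categories
}
```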
If your AI system is high-risk under the AI Act and processes personal data, you will typically need both. If your AI system is high-risk under the AI Act but does not fall in the narrow Article 27 deployer categories, you need the DPIA but not the FRIA.
Three transparency regimes apply to a typical AI feature. Each aims at a different audience with different content, and you need all three.
GDPR Articles 13-14 require the data subject to be told about their data: who collects it, why, the legal basis, retention, their rights, and whether automated decision-making is involved. This lives in your privacy notice.
AI Act Article 50 requires the user of the AI system to be told about the AI: that they are interacting with a chatbot, that a given image is a deepfake, that an article is AI-generated on a matter of public interest. Article 50 comes into force on 2 August 2026. This is user-facing but it is about the AI, not the data.
AI Act Article 13 (for high-risk systems only) requires the provider to give the deployer detailed information about the system: capabilities, limitations, performance metrics, intended purpose, instructions for use. This is B2B transparency, and it is what makes Article 27 FRIAs possible in the first place.
You can combine all three disclosures in one document, and most small teams should. But the content requirements differ and missing any of them is a separate infringement. A privacy notice that describes the personal data cleanly but does not say "you are talking to an AI" fails Article 50. A privacy notice that labels the AI cleanly but does not disclose the legal basis for the training data fails GDPR Article 13.
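One way to keep the three audiences honest in a combined document is to model each disclosure block separately and validate each on its own terms, so a gap in one cannot hide behind the completeness of another. A minimal sketch, with illustrative field names rather than statutory checklists:

```typescript
// Three disclosure blocks, three audiences. Field names are illustrative.
interface TransparencyStack {
  gdprNotice: {              // GDPR Arts. 13-14: the data subject, about the data
    controllerIdentity: string;
    purposesAndLegalBasis: string;
    retention: string;
    dataSubjectRights: string;
    automatedDecisionMakingDisclosed: boolean;
  };
  aiActUserDisclosure: {     // AI Act Art. 50: the user, about the AI
    interactingWithAiDisclosed: boolean;
    syntheticContentLabelled: boolean;
  };
  providerToDeployerDocs?: { // AI Act Art. 13: B2B, high-risk systems only
    capabilitiesAndLimitations: string;
    performanceMetrics: string;
    instructionsForUse: string;
  };
}

// Each gap is a separate infringement, so validate each block on its own.
function missingDisclosures(s: TransparencyStack): string[] {
  const gaps: string[] = [];
  if (!s.gdprNotice.purposesAndLegalBasis) gaps.push("GDPR Art. 13: legal basis");
  if (!s.aiActUserDisclosure.interactingWithAiDisclosed) gaps.push("AI Act Art. 50: AI disclosure");
  return gaps;
}
```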
GDPR Article 22 gives individuals the right not to be subject to decisions based solely on automated processing that produce legal or similarly significant effects. When exceptions apply (contract necessity, explicit consent), you must provide human intervention, the right to express a view, and the right to contest the decision.
AI Act Article 14 requires high-risk AI systems to be designed for effective human oversight. Deployers assign competent personnel who can understand the system, monitor for anomalies, override or reverse outputs, and halt the system.
The critical distinction is this. Article 22 applies to any automated decision with significant effects, regardless of whether the system is classified as high-risk under the AI Act. A low-risk AI system can still trigger Article 22 if its output significantly affects people. And having human oversight features under Article 14 does not automatically mean the decision is not "solely automated" under Article 22. A human who rubber-stamps AI output without genuine review does not satisfy Article 22. The CJEU's SCHUFA decision (Case C-634/21, 7 December 2023) set this bar by ruling that a credit score generated by a third party could itself constitute an automated decision under Article 22 even if a human formally approved the downstream contract.
I think Article 22 is the higher bar here, and any team relying on Article 14 oversight alone is under-engineered. The practical test is not "does a human sign off on the output" but "does the human have the information, authority, and time to reach a different conclusion than the AI." If the answer is no, the decision is solely automated for Article 22 purposes even if an AI Act Article 14 oversight protocol is in place.
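If you want that test to be auditable, log per decision what the reviewer actually had and did. A sketch, assuming you record each human review; the fields and the threshold are illustrative, not a legal standard:

```typescript
// Per-decision review log: evidence that oversight was genuine rather than
// a rubber stamp. Fields and threshold are illustrative, not a legal standard.
interface HumanReviewRecord {
  decisionId: string;
  reviewerId: string;
  aiRecommendation: string;
  inputsVisibleToReviewer: string[]; // information: could they see the evidence?
  authorityToOverride: boolean;      // authority: could they change the outcome?
  reviewSeconds: number;             // time: enough for a genuine review?
  finalDecision: string;
  overrode: boolean;
}

// Heuristic flag for "solely automated in substance". Tune the threshold to
// your context; a 10-second review of a complex file is a rubber stamp.
function looksLikeRubberStamp(r: HumanReviewRecord): boolean {
  return !r.authorityToOverride
    || r.inputsVisibleToReviewer.length === 0
    || r.reviewSeconds < 60; // illustrative threshold
}
```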
GDPR Article 5 says personal data must be adequate, relevant, and limited to what is necessary (data minimisation). You keep it accurate. You delete it when you no longer need it.
AI Act Article 10 says training, validation, and testing data must be "relevant, sufficiently representative, and to the best extent possible, free of errors and complete." It must reflect the geographic, contextual, and behavioural settings where the system will be used.
These pull in opposite directions when your training data is personal data. Minimisation says "collect only what you need." Representativeness says "your dataset must be complete enough to avoid bias." A dataset small enough to satisfy minimisation can be too narrow for Article 10. A dataset broad enough to satisfy Article 10 can push against minimisation.
The resolution is documentation plus the Article 10(5) carve-out. Show that your dataset size is justified by the representativeness requirement. Show that you have applied minimisation within that scope: no unnecessary data categories, no retention beyond what the model lifecycle requires, deletion of raw data once the model is trained. For the specific case of special category data used to detect and correct bias, Article 10(5) is the narrow statutory exception to GDPR Article 9 prohibitions. Joint Opinion 1/2026 insists the exception operates under a "strict necessity" standard, which is the highest bar GDPR knows. Assume regulators will ask you to prove that no less-invasive method would have worked.
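That documentation can live as a per-dataset governance record: the representativeness justification, the minimisation applied within it, and, where Article 10(5) is invoked, the strict-necessity analysis. A sketch with illustrative fields:

```typescript
// Per-dataset governance record reconciling AI Act Art. 10 representativeness
// with GDPR minimisation. Illustrative fields, not an official format.
interface DatasetGovernanceRecord {
  datasetId: string;
  representativenessJustification: string; // why this size and breadth (Art. 10)
  deploymentContextsCovered: string[];     // geographic/contextual settings served
  minimisationApplied: string[];           // categories excluded, fields dropped
  rawDataDeletionDate?: string;            // delete raw data once the model is trained
  specialCategoryData?: {                  // only when the Art. 10(5) carve-out is used
    categories: string[];                  // e.g. health, ethnicity
    purpose: "bias-detection-and-correction"; // the only permitted purpose
    strictNecessityAnalysis: string;       // why no less-invasive method would work
    alternativesRejected: string[];        // e.g. synthetic data, proxy-based evaluation
  };
}
```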
GDPR Articles 13-15 give individuals the right to "meaningful information about the logic involved" in automated processing. For years the question was how much detail that meant in practice. On 27 February 2025 the CJEU answered in Case C-203/22, Dun & Bradstreet Austria. A customer ("CK") was denied a phone contract based on a D&B credit score. CK asked for the logic. The CJEU ruled that:

- The controller must explain the procedure and principles actually applied, in a concise, transparent, intelligible, and easily accessible form. Handing over the algorithm or a complex mathematical formula does not satisfy the requirement.
- The explanation must let the data subject understand which of their personal data was used and how, for example by showing how a variation in that data would have led to a different result.
- Trade secrets do not justify a blanket refusal. Where the controller claims the information is protected, it goes to the supervisory authority or court, which balances the competing interests.
The ruling raised the practical bar: "explanation" now means something closer to "what factors drove this specific decision" and "what data variations would have changed the outcome," not "here is a white paper about the model."
AI Act Article 86 layers on top. Anyone affected by a high-risk AI system decision that produces legal effects gets the right to "clear and meaningful explanations of the role of the AI system in the decision-making procedure and the main elements of the decision taken." Article 86(3) explicitly defers to GDPR where it already provides this right, so the practical added value of Article 86 is in multi-component scenarios: when several AI systems interact in a decision chain, Article 86 helps the individual understand which component contributed what.
If you are running a single AI feature, your Article 15 right-to-explanation response is what the CJEU set out in C-203/22, and that response also discharges Article 86. If you are running a chain (a retrieval system, plus a classifier, plus a generative summariser, plus a downstream ranking model), the Article 86 layer matters because the individual is entitled to understand how the stack produced the specific output that affected them.
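A response template that aims at the C-203/22 bar could be shaped like this; the structure is illustrative, and the per-component entries carry the Article 86 layer for multi-component stacks:

```typescript
// Explanation response shaped by the C-203/22 standard: specific factors,
// plain language, a counterfactual where feasible. Illustrative structure.
interface ExplanationResponse {
  decision: string;                  // what was decided, in plain language
  mainFactors: {
    factor: string;                  // what drove this specific decision
    dataUsed: string;                // which of the person's data fed it
    directionOfEffect: "for" | "against";
  }[];
  counterfactual?: string;           // what data change would have altered the outcome
  aiComponents?: {                   // the Art. 86 layer for multi-component stacks
    component: string;               // e.g. retriever, classifier, ranker
    roleInDecision: string;
  }[];
  howToContest: string;              // Art. 22(3): the route to human review
}
```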
A recurring claim in compliance decks is that GDPR fines (4% of worldwide turnover) and AI Act fines (7% for prohibited practices, 3% for most other infringements) stack to 11% of turnover. They do not stack for the same infringement.
AI Act Article 99(7) requires national authorities to consider whether a fine has already been imposed for the same infringement. For one factual violation, the higher applicable fine applies. The aggregate rises when different infringements from the same system can each be fined independently: a missing GDPR privacy notice is one infringement, a missing Article 50 AI disclosure is another, inadequate data governance under Article 10 is a third. Each of these is fineable on its own terms.
The practical risk profile is the interaction between the two. A single AI system can generate three distinct infringements on the same day: one under GDPR Articles 13-14 for the data transparency gap, one under AI Act Article 50 for the AI disclosure gap, one under AI Act Article 10 for the data governance gap. Article 99(7) does not aggregate these, and Joint Opinion 1/2026 does not fully resolve where "same infringement" stops and "different infringement" begins. The honest answer is that national enforcement practice will draw the line, and the first fines under both regimes will shape it. Until then, assume the worst-case interaction.
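To see why "11% stacked" is the wrong mental model, here is a toy calculation of the worst-case interaction: within one infringement, the higher applicable ceiling wins; across distinct infringements, the exposures add. Real fines are discretionary and fact-specific; this only illustrates the Article 99(7) logic:

```typescript
// Toy model of fine exposure, ceilings as fractions of worldwide turnover.
// Real fines are discretionary; this only illustrates the non-stacking logic.
type Infringement = { description: string; applicableCeilings: number[] };

// One factual violation caught by both regimes is fined once, at the higher
// applicable ceiling (Art. 99(7)): max, not sum.
const exposure = (i: Infringement): number => Math.max(...i.applicableCeilings);

// Distinct infringements from the same system each count separately.
const sameSystemSameDay: Infringement[] = [
  { description: "GDPR Arts. 13-14 transparency gap", applicableCeilings: [0.04] },
  { description: "AI Act Art. 50 disclosure gap", applicableCeilings: [0.03] },
  { description: "AI Act Art. 10 data governance gap", applicableCeilings: [0.03] },
];

const worstCase = sameSystemSameDay.map(exposure).reduce((a, b) => a + b, 0);
console.log(`worst case: ${(worstCase * 100).toFixed(0)}% of turnover`); // 10%
```

The 10% here comes from three different infringements adding up, not from GDPR and AI Act ceilings stacking on a single violation.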
Open the one AI feature your team is most exposed on. Work the overlap points in this order; a consolidated sketch of the resulting compliance file follows the list.
First, the classification. Is the system prohibited, high-risk (Annex III), limited-risk (Article 50), or minimal-risk? High-risk answers unlock Articles 10, 13, 14, 27, and 86 at once. The Article 27 FRIA applies only if you also fall in the narrow deployer categories (public body, private entity providing public services, creditworthiness, insurance risk pricing).
Second, the legal basis. What GDPR basis are you using for the personal data in the prompts, the retrieval store, the training set, and the logs? Legitimate interest requires the three-step EDPB test from Opinion 28/2024. Consent runs into the Article 4(11) requirement that it be freely given and specific, which is hard to satisfy when consent is bundled with service use.
Third, the DPIA. Run it if Article 35 is triggered, which for AI features is almost always. If you also need a FRIA under Article 27, build it as a layer on the DPIA, not as a separate document.
Fourth, the transparency stack. Update the privacy notice for Articles 13-14. Add the Article 50 disclosure for the AI nature of the interaction. Confirm the Article 13 provider-to-deployer documentation is in your vendor file for any high-risk upstream components.
Fifth, the human oversight protocol. Name the humans. Give them time, authority, and the training to override. Document how they reach an independent conclusion. Do not rely on Article 14 scaffolding alone; aim for the GDPR Article 22 bar.
Sixth, the explanation pipeline. Build a response template for "meaningful information about the logic involved" that meets the C-203/22 standard: specific factors, plain language, comparative scenarios where possible.
Seventh, the data governance file. Document how your training and evaluation data satisfy Article 10 representativeness, and how you have applied GDPR minimisation within that scope. If you process special category data under the Article 10(5) carve-out, document the strict necessity under the standard Joint Opinion 1/2026 insists on.
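Pulled together, the seven steps amount to one per-feature compliance file. A sketch of the shape, with illustrative names; the artefact shapes themselves are sketched in the earlier sections:

```typescript
// Per-feature compliance file tying the seven steps together.
// All names illustrative.
interface AiFeatureComplianceFile {
  classification: "prohibited" | "high-risk" | "limited-risk" | "minimal-risk"; // step 1
  inArticle27DeployerCategory: boolean;
  legalBasisPerDataFlow: Record<string, string>; // step 2: prompts, retrieval, training, logs
  dpiaCompleted: boolean;                        // step 3
  friaLayerAdded?: boolean;                      //   only if Article 27 applies
  transparencyStackComplete: boolean;            // step 4
  oversightProtocol: {                           // step 5
    namedReviewers: string[];
    trainedAndAuthorisedToOverride: boolean;
  };
  explanationTemplateReady: boolean;             // step 6
  dataGovernanceRecordIds: string[];             // step 7
}
```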
The most common failure mode is not malice; it is the assumption that GDPR compliance covers the AI Act. It does not. A fully GDPR-compliant system can still fail Article 50 (if the user is not told they are interacting with AI), fail Article 10 (if the training data cannot be shown to be representative), or fail Article 14/Article 22 (if oversight is a rubber stamp). The overlap means the extra work is smaller than it looks, but the overlap also means a single gap can be two infringements simultaneously.
The AI Act did not replace GDPR; it built on it. The two regimes touch at DPIA/FRIA, three transparency regimes, human oversight, Article 10 versus data minimisation, and the right to explanation. Joint Opinion 1/2026 is the best recent read on how the authorities will resolve the tensions. The joint EDPB/Commission guidelines on the interplay are still expected but still not published. The binding date is 2 August 2026 for most high-risk deployers, and you should plan as if the Omnibus delay is not landing. The extra work is a layer on the DPIA you probably already owe, not a second parallel compliance programme.