The five GDPR articles that actually decide whether your AI feature ships in 2026: legal basis, transparency after Dun & Bradstreet, Article 22, privacy by design, and DPIA.
GDPR has 99 articles. When you ship an AI feature, five carry almost all the weight: Article 6 (legal basis), Articles 13–14 (transparency), Article 22 (automated decisions), Article 25 (privacy by design), and Article 35 (DPIA). The other 94 still apply. The fines cluster around these five.
Three things shifted between 2024 and 2026. The CJEU ruled in CK v Dun & Bradstreet Austria (C-203/22) that the algorithmic formula on its own does not satisfy the right to explanation. The CNIL's June 2025 final recommendations made legitimate interest the practical default for AI training. And the Court of Rome annulled the Italian Garante's €15 million fine against OpenAI on 18 March 2026, with the full reasoning still unpublished as of April 2026. Each move changed how one of the five articles below operates in practice.
This is a reference page. Each section explains one article as it lands on an AI feature in 2026, with the case law that has actually moved.
GDPR has six legal bases. Three matter for AI features. Most teams pick legitimate interest.
The CNIL finalised its position on 19 June 2025: legitimate interest is the appropriate basis for almost all AI training. The catch is the three-step test, and the CNIL has been explicit about what it expects to see documented.
Honouring robots.txt and explicit opt-outs, excluding sites that have objected to scraping, deleting irrelevant collected data on a defined schedule, and offering a prior opt-out mechanism are all enumerated as expected mitigations.

I think legitimate interest is now the practical default for AI training in the EU, and the CNIL recommendations are the most useful operational guidance any EU regulator has put out. The Italian Garante's December 2024 fine of €15 million against OpenAI was based partly on a missing legal basis for training. The Court of Rome annulled that fine on 18 March 2026; the full reasoning has not yet been published, so the precedential weight of the reversal is genuinely uncertain.
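The first of those mitigations, honouring robots.txt before collection, can be sketched with Python's standard-library parser. The crawler name is a hypothetical placeholder, and a real pipeline would also need the site-level and per-person opt-outs the CNIL describes:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical crawler identity; substitute the user-agent string
# your scraper actually announces.
USER_AGENT = "example-trainer-bot"

def may_fetch(robots_txt: str, url: str, user_agent: str = USER_AGENT) -> bool:
    """Check a site's robots.txt rules before a page can enter a training corpus."""
    rp = RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return rp.can_fetch(user_agent, url)

robots = """User-agent: *
Disallow: /private/
"""
print(may_fetch(robots, "https://example.org/private/profile"))  # False
print(may_fetch(robots, "https://example.org/blog/post"))        # True
```

In production you would fetch `https://site/robots.txt` per host and treat a fetch failure as "do not crawl", the conservative default.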
The Digital Omnibus, currently in trilogue with agreement targeted for late April 2026, would explicitly codify AI training as a recognised legitimate interest. It is not adopted yet. If it closes as expected, the CNIL framework above becomes the EU-wide template rather than a French one. See the deployer obligations article for the trilogue state.
Consent is poorly suited to AI training. It must be freely given, specific, informed, and revocable, and revocation from a trained model is technically near-impossible. For inference (the user prompts the system in real time), opt-in consent can work if it is genuine. Buried in terms of service, it is not.
Contract (Article 6(1)(b)) works when the AI is the thing the customer signed up for: an AI writing assistant, an AI code review tool, an AI meeting summariser. It does not cover "we used your data to train our next model for other customers." Different purpose, different basis.
When you collect personal data from someone (Article 13) or obtain it from another source (Article 14), you must tell them specific things. For automated processing, that includes "meaningful information about the logic involved." The CJEU narrowed what counts as meaningful in CK v Dun & Bradstreet Austria (C-203/22), decided 27 February 2025.
The question was: what does a credit reference agency owe a person whose mobile contract was refused based on an automated credit assessment? The court answered: the controller must describe the procedure and principles actually applied, concretely enough that the person can understand which of their personal data was used and how it influenced the result. Disclosing the algorithm or the full mathematical formula is neither sufficient nor required. And trade secrecy is not a blanket shield: where the controller claims protected information, that information goes to the supervisory authority or court, which balances the competing interests.
The boilerplate "we use machine learning to personalise your experience" notice no longer covers Article 13(2)(f) once your system makes any individual decision that matters to the person it touches. Dun & Bradstreet raised the floor. Most pre-2025 transparency notices need a rewrite, not a tweak.
For training notices specifically (Article 14, when the data was scraped or repurposed), the CNIL's web scraping focus sheet requires advance information about the purpose and the right to object. You do not have to disclose every implementation detail. You do have to say what you took, why, and how to opt out.
Article 22(1) gives individuals the right not to be subject to a decision based solely on automated processing that produces legal or similarly significant effects.
Two conditions, both required:

- The decision is based solely on automated processing, with no meaningful human involvement.
- It produces legal effects or similarly significantly affects the person.
The 2023 SCHUFA ruling (C-634/21, 7 December 2023) expanded the reach. SCHUFA only calculated the credit score; the bank made the lending decision. The CJEU held that when the bank rejects "in almost all cases" where the score is poor, the scoring itself counts as the Article 22 decision. That dramatically widened the article's scope. Profiling and scoring relied on by downstream decision-makers now sit inside it.
I have stopped telling teams "a human reviews it, so Article 22 doesn't apply." After SCHUFA the right question is whether the human's decision actually changes in light of the AI score. If the human rubber-stamps the score in 95% of cases, you are inside Article 22, and the safeguards apply. The cheap fix: log the override rate. If it stays under a few percent, treat the system as Article 22 and ship the contest mechanism.
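That override-rate check can be sketched in a few lines. The log shape and the 5% threshold are illustrative assumptions, not values from the case law; pick a threshold you can defend in your own documentation:

```python
from dataclasses import dataclass

@dataclass
class ReviewedDecision:
    ai_recommendation: str  # e.g. "reject"
    human_decision: str     # what the reviewer actually decided

def override_rate(log: list[ReviewedDecision]) -> float:
    """Fraction of cases where the human reviewer departed from the AI score."""
    if not log:
        return 0.0
    overrides = sum(1 for d in log if d.human_decision != d.ai_recommendation)
    return overrides / len(log)

def treat_as_article_22(log: list[ReviewedDecision], threshold: float = 0.05) -> bool:
    # Post-SCHUFA heuristic: a reviewer who almost never overrides is not
    # meaningful human involvement, so apply the Article 22 safeguards.
    return override_rate(log) < threshold
```

Running this periodically over the review log, and keeping the results, is itself the documentation a supervisory authority would ask for.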
When the safeguards apply, you must offer at minimum (Article 22(3)):

- the right to obtain human intervention,
- the right to express one's point of view, and
- the right to contest the decision.
The Italian Garante fined Foodinho €2.6 million in 2021 for an algorithmic rider-management system that excluded couriers from work without any avenue to contest. It was the first Article 22 enforcement in employment. It remains the cleanest example of what "no genuine human in the loop" looks like in practice.
Article 25 has two prongs. By design: technical and organisational measures must be embedded into the system from the architectural level, not patched in after launch. By default: only the personal data necessary for each specific purpose is processed, and the most privacy-protective settings are the defaults the user sees first.
For an AI pipeline this typically lives in four architectural choices:

- data minimisation at ingestion: collect only the fields the feature actually needs,
- pseudonymisation before personal data enters a training corpus or index,
- least-privilege access for models, agents, and the people operating them, and
- privacy-protective defaults: personalisation and data sharing stay off until the user turns them on.
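One of those measures, pseudonymising direct identifiers before records enter a training corpus or RAG index, can be sketched as follows. The keyed-hash scheme and the key handling are illustrative assumptions; a real deployment needs key rotation, a KMS, and controls on re-identification:

```python
import hashlib
import hmac

# Hypothetical secret; in practice this lives in a key-management
# service, because whoever holds it can link pseudonyms to people.
SECRET_KEY = b"rotate-me-and-store-me-in-a-kms"

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a keyed hash so the record can be
    indexed or trained on without exposing the identifier itself."""
    digest = hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256)
    return digest.hexdigest()[:16]

record = {"email": "jane@example.org", "text": "support ticket body"}
record["email"] = pseudonymise(record["email"])  # stable token, not the address
```

Note that under GDPR pseudonymised data is still personal data; the point of the measure is risk reduction, not escape from the Regulation.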
A DPIA is required before processing begins when the processing is "likely to result in a high risk to the rights and freedoms of natural persons." The EDPB published nine criteria. Hitting two or more usually makes a DPIA mandatory.
Most AI features hit at least three: innovative technology, evaluation or scoring, large-scale processing. If your feature profiles users or makes decisions about them, four or five.
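The two-or-more trigger is mechanical enough to encode. The nine criteria below paraphrase the EDPB's DPIA guidelines; which flags you pass in is your own assessment of the feature:

```python
# Paraphrased from the EDPB's nine likely-high-risk criteria.
EDPB_CRITERIA = {
    "evaluation or scoring",
    "automated decision-making with legal or similar effect",
    "systematic monitoring",
    "sensitive or highly personal data",
    "large-scale processing",
    "matching or combining datasets",
    "data concerning vulnerable subjects",
    "innovative use or new technology",
    "processing that prevents exercise of a right or service",
}

def dpia_required(criteria_met: set[str]) -> bool:
    """EDPB rule of thumb: meeting two or more criteria makes a DPIA mandatory."""
    unknown = criteria_met - EDPB_CRITERIA
    if unknown:
        raise ValueError(f"not an EDPB criterion: {unknown}")
    return len(criteria_met) >= 2

# The typical AI feature described above:
print(dpia_required({"innovative use or new technology",
                     "evaluation or scoring",
                     "large-scale processing"}))  # True
```

Keep the evaluated set in the DPIA file itself: the assessment of which criteria apply is exactly what a regulator asks to see first.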
A DPIA must contain a systematic description of the processing and its purposes, an assessment of necessity and proportionality, an assessment of risks to data subjects' rights, and the measures planned to address those risks. The practical DPIA check walks through the decision in more detail.
The Dutch DPA fined Clearview AI €30.5 million in September 2024. Among the violations: no DPIA for large-scale processing of biometric data. The Clearview decision is now the reference fine for "shipped sensitive AI without an impact assessment."
Under the EU AI Act, high-risk systems also require a Fundamental Rights Impact Assessment. The FRIA complements the DPIA, it does not replace it. Public bodies, private entities providing public services, and deployers using AI for credit scoring or insurance pricing will need both.
| Use case | Art. 6 | Art. 13–14 | Art. 22 | Art. 25 | Art. 35 |
|---|---|---|---|---|---|
| Chatbot (info only) | LI or contract | Disclose AI, explain logic | Usually no | Minimise data | If at scale |
| Recommendation engine | LI (careful balancing) | Disclose personalisation logic | If significant effects | Opt-in defaults | Yes (profiling + scale) |
| RAG system | LI for corpus data | Inform about data in knowledge base | If outputs drive decisions | Pseudonymise corpus | If personal data at scale |
| AI agent (tool use) | LI per data category | Disclose agent capabilities | Likely yes (autonomous actions) | Least privilege access | Almost certainly |
| Credit / insurance scoring | LI or consent | Explain scoring logic (C-203/22) | Yes (SCHUFA) | Privacy-preserving scoring | Mandatory |
| HR / recruitment AI | LI (document carefully) | Explain AI involvement and criteria | Yes for rejected candidates | Bias testing, human review | Mandatory |
LI = legitimate interest.
If you only walk away with one thing: legitimate interest works for almost all AI training in 2026, but you owe the affected person a concrete reason whenever an automated decision matters to them. The Article 6 documentation the CNIL expects and the Article 13–14 explanation the CJEU now requires are the two pieces of paper most teams do not yet have.