The trigger question is settled. The harder question is which assessment, and when. EDPB Opinion 28/2024, CNIL July 2025, and the Article 27(4) FRIA carry-over.
For an AI feature that processes personal data, the question is no longer whether a DPIA is required. It almost always is. The CNIL stated this plainly in its April 2024 recommendations: for AI systems involving personal data, conducting a DPIA is "in principle necessary." The Italian Garante and Denmark's Datatilsynet have arrived at the same place via enforcement priorities.
What has changed is that the assessment itself has become harder, and the number of assessments you owe has multiplied.
Three things shifted between late 2024 and now. The EDPB published Opinion 28/2024 on 17 December 2024, which raised the bar for relying on legitimate interest in AI processing. The CNIL finalised its recommendations and added three practical sheets on 22 July 2025 covering training-data annotation, security during AI development, and the GDPR status of trained models. And the EU AI Act's Article 27 obligation to conduct a Fundamental Rights Impact Assessment for high-risk AI systems becomes enforceable on 2 August 2026, with a carry-over clause in Article 27(4): a DPIA that has already covered the same processing can carry over into the FRIA.
If you have an AI feature touching personal data, you are facing three documents that overlap and one deadline that does not move. The rest of this guide is about writing one assessment that satisfies all of them.
The EDPB's nine criteria from WP248 rev.01 are not new. What is new is that AI features tend to hit them by virtue of the architecture, not by accident.
The pattern repeats across customer-facing AI features. A recommendation engine evaluates and ranks people (criterion 1). It often produces outputs that affect what users see, can buy, or pay (criterion 2). It runs across the user base (criterion 5). And the model itself remains "innovative use of technology" in the regulator's vocabulary (criterion 8). Four criteria from a single feature; the threshold is two.
The features that escape this gravity are rare. A purely internal model that classifies log lines without touching user data is one. A sentiment model that runs once on already-anonymised review aggregates is another. If the model never sees a name, an account ID, an email address, or a free-text field that might contain those, it can sometimes stay under the line. Most features that actually ship do not.
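The two-criteria threshold is mechanical enough to sketch. Below is a minimal illustration of the WP248 rev.01 logic; the criterion names are shorthand for this sketch, not the EDPB's official labels, and the feature profile is the hypothetical recommendation engine described above.

```python
# Sketch of the WP248 rev.01 threshold check: hitting two or more of
# the nine criteria means a DPIA is required. Criterion names here are
# informal shorthand, not official EDPB wording.

WP248_CRITERIA = {
    "evaluation_or_scoring",
    "automated_decision_with_effect",
    "systematic_monitoring",
    "sensitive_data",
    "large_scale",
    "matching_or_combining",
    "vulnerable_subjects",
    "innovative_technology",
    "blocks_right_or_service",
}

def dpia_required(hits: set) -> bool:
    unknown = hits - WP248_CRITERIA
    if unknown:
        raise ValueError(f"unknown criteria: {unknown}")
    return len(hits) >= 2  # EDPB threshold: two criteria

# The recommendation engine from the text: four criteria hit.
feature = {"evaluation_or_scoring", "automated_decision_with_effect",
           "large_scale", "innovative_technology"}
print(dpia_required(feature))  # True
```

The value of writing it down this way is not automation; it is that the criteria hit become an auditable list you can paste into the DPIA's trigger section.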
A more useful exercise than walking the criteria one by one is to open the CNIL list and the ICO list of processing operations that always require a DPIA. The CNIL list includes any feature using biometric data, any feature that profiles or scores employees, any feature that recommends decisions about access to services, and any "innovative use" combined with another criterion. The ICO list has six AI-relevant lines. If your feature is named on either, the criteria walkthrough is moot. You owe a DPIA.
The CNIL went further in its July 2025 sheets. It said that an AI system classified as high-risk under the AI Act will, in principle, also require a GDPR DPIA. The two regimes converge here. If your feature plausibly fits any Annex III category (employment, education, access to essential services, biometrics, law enforcement, critical infrastructure), the DPIA is the floor, not the ceiling.
GDPR Article 35(7) lists four required elements for a DPIA. Description of processing, necessity-and-proportionality assessment, risk assessment, and mitigations. An AI DPIA fails most often on the third. The risk assessment is generic.
The CNIL's "Carrying out a DPIA if necessary" page is unusually specific about what an AI DPIA must address. It names four risks the regulator expects to see addressed, all of which a generic GDPR template will skip.
Data confidentiality and training-data misuse. Membership-inference and model-inversion attacks against trained models are a documented risk class with published research. A model that has seen a person's data can leak that data through queries. A DPIA for a feature that fine-tunes on customer text, even at small scale, has to name this risk and say what is being done about it. Output filtering, differential privacy, no fine-tuning at all. Not naming it is the templated giveaway.
Automated discrimination. Not bias as an abstract concern, but the specific risk that the model's outputs disproportionately disadvantage a protected group in a measurable way. The DPIA has to say how this is being measured and what the response is when measurement shows a problem. "We ran the model through our standard test set" is not enough. The standard test set is usually not designed for it.
Synthetic content about real people. This is the AI-specific defamation risk, and it is how the complaint against OpenAI to the Norwegian DPA arose. If the feature can generate text about individuals (biographies, summaries, support replies that name customers, draft emails), the DPIA has to name the risk that the output will be wrong in a way that materially affects the person. The mitigation is usually a combination of grounding, source citation, and output review for high-stakes flows.
Automation or confirmation bias. This is the CNIL-named risk that surprises teams the most. The risk lives in the reviewer who stops verifying the output because the model is usually right. A DPIA that names a "human review" mitigation without naming the override rate, the sample frequency, and the conditions under which the reviewer is allowed to disagree has not addressed it. CNIL framed it as "automated decision-making caused by automation or confirmation bias" in the 2024 recommendations and it has stayed in the list since.
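An override rate only works as a mitigation if someone computes it. The sketch below shows the signal in its simplest form: each review event records whether the reviewer changed the model's label, and a rate falling below a pre-agreed floor is treated as evidence that reviewers have stopped checking, not that the model got better. The 2% floor is an illustrative assumption, not a regulatory number.

```python
def override_rate(reviews):
    """reviews: list of dicts with 'model_label' and 'final_label'."""
    if not reviews:
        return None
    overrides = sum(r["model_label"] != r["final_label"] for r in reviews)
    return overrides / len(reviews)

def automation_bias_alert(reviews, floor=0.02):
    """Flag when the override rate drops below the agreed floor."""
    rate = override_rate(reviews)
    return rate is not None and rate < floor

# One illustrative week: 100 reviews, a single override.
week = ([{"model_label": "urgent", "final_label": "urgent"}] * 99
        + [{"model_label": "urgent", "final_label": "normal"}])
print(override_rate(week))          # 0.01
print(automation_bias_alert(week))  # True: below the 2% floor
```

A DPIA that names this metric, its frequency, and who receives the alert has addressed the risk; one that writes "human review" has not.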
Beyond the four CNIL-named risks, GDPR Article 32 pulls in the AI-specific security risks. Prompt injection, training-data poisoning, jailbreaks, model-supply-chain compromise. EDPB Opinion 28/2024 treats these as part of the security risk landscape the DPIA's risk assessment has to cover. So the practical floor is five risks: the four CNIL names, plus the Article 32 security risk class. A risk section that does not address all five is a DPIA a senior reviewer at a DPA will recognise as templated.
The DPIA's necessity-and-proportionality section is where legal basis lives. For most AI features that touch personal data, the legal basis on offer is legitimate interest under Article 6(1)(f). EDPB Opinion 28/2024 changed how that basis has to be defended.
The opinion confirmed a three-step balancing test that controllers have to walk through and document: identify the legitimate interest being pursued, show that the processing is necessary to achieve it, and balance that interest against the data subjects' interests, rights, and freedoms. CNIL operationalised the same test for AI specifically in its 19 June 2025 LIA recommendations. Both versions land at the same place.
I think the three-step test is the part most teams underestimate when they retrofit a DPIA to an existing AI feature. The old reading of legitimate interest leaned heavily on "we have a business reason and we updated the privacy notice." The post-Opinion-28 reading is closer to "you have to demonstrate the test, document the alternatives you considered, and show that the data subject's reasonable expectations were specifically weighed." The work is real.
The opinion also pulled the necessity prong toward data minimisation more aggressively than before. If you can do the same job with synthetic data, with anonymised data, or with a smaller training set, the necessity argument for the larger one weakens.
The EU AI Act adds a second assessment for deployers of high-risk AI systems: the Fundamental Rights Impact Assessment under Article 27. From 2 August 2026, public-sector deployers and private deployers in regulated areas (banking, insurance, employment, education, essential services) have to complete a FRIA before putting a high-risk system into use, and notify the national market surveillance authority of the results.
The FRIA is broader than a DPIA. A DPIA is bounded by data protection rights. A FRIA covers the full range of fundamental rights under the EU Charter, which includes non-discrimination, dignity, access to justice, freedom of expression, and the right to an effective remedy. In practice the scopes overlap heavily, because almost any high-risk AI system that affects individuals also processes personal data.
Article 27(4) makes that overlap operational. It says that where a DPIA has already been performed for the same processing under GDPR Article 35, the FRIA "shall complement" it. Recital 96 confirms the same point in plain text: the FRIA can be integrated into the DPIA where the DPIA already exists for the same processing. A deployer that has done the DPIA properly can extend the same document to cover the additional FRIA dimensions, rather than producing two parallel artifacts.
If you are scoping a DPIA today for an AI feature that might fall under Annex III, write the document so the additional FRIA sections can be appended. Add a short "fundamental rights" annex to the risk assessment that lists non-discrimination, dignity, access to justice, and effective remedy explicitly. The marginal cost during the DPIA is one section. The cost of bolting a separate FRIA on after August 2026 is starting over.
I am still not sure how strictly market surveillance authorities will read "shall complement" once enforcement starts. The most defensible reading is that the DPIA-plus-annex approach works as long as every fundamental right gets a documented assessment. The strictest reading is that the FRIA needs its own format and notification regardless. Until the first enforcement action lands, the safe move is to write the document so it would survive either reading.
Take a 30-person SaaS company shipping a customer support feature that uses GPT-4o to classify incoming tickets by topic and urgency, then route them to the right team. The model gets the ticket subject and body, including any customer name, email address, and free-text issue description. The classification is reviewed by a support lead before any auto-action. Tickets are stored in the help desk for two years. The model provider is on a zero-retention tier under the OpenAI Enterprise DPA.
1. Description of processing. Inputs: ticket subject and body. Personal data inside the body: customer name, email, account ID, sometimes free-text mentioning third parties. Processing: prompt sent via OpenAI API, classification returned. Output: category label plus priority score, reviewed by support lead. Retention: zero-retention at the provider; classification result stored two years in the help desk; raw ticket retained per existing support data retention policy.
2. Necessity and proportionality. Manual triage of 600 tickets per day across three time zones produces inconsistent prioritisation and adds 90 minutes of average response time on urgent issues. A smaller fine-tuned model was considered and rejected, because the training data would be the same customer text, which moves more risk into the company rather than less. Anonymising the ticket body before sending it to the model was considered and rejected, because the urgency signal lives in the free-text and is destroyed by aggressive redaction. The chosen design uses the smallest model that can do the job, in zero-retention mode, under a signed DPA. Legitimate-interest balancing: the legitimate interest is faster response to urgent customer issues; necessity is satisfied because alternatives were considered and tested; the data subject's reasonable expectations include receiving timely support and being told that AI assists triage, both of which are addressed.
3. Risk assessment, against the four CNIL-named risks plus Article 32 security:
| Risk | Likelihood | Severity | Mitigation |
|---|---|---|---|
| Training-data misuse / model extraction reveals customer text | Low (zero-retention tier, no fine-tuning) | High if it occurs | Contractual zero-retention; quarterly re-confirmation; no fine-tuning |
| Automated discrimination in priority assignment | Medium (free-text models inherit bias from training) | Medium | Monthly audit of priority distribution across customer segments; thresholds set in advance |
| Synthetic content about real people in replies | Out of scope (this feature classifies, does not generate replies) | Not applicable | Documented as out of scope; flagged for re-review if the feature grows to draft replies |
| Automation bias in the support-lead override step | Medium (the model is right ~92% of the time, which is exactly the threshold at which humans stop checking) | Medium | Override rate tracked weekly; reviewer required to confirm the priority before any auto-assignment to a high-urgency queue; quarterly sampling of low-confidence labels |
| Prompt injection from customer text (Article 32) | Medium (any free-text input is an attack surface) | Low (output is a label, not an action) | Strict output schema; refuse to act on labels outside the allowed set; no model output reaches the customer directly |
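The strict-output-schema mitigation in the last row is worth making concrete. In this sketch, the model's response is parsed as JSON and rejected unless both fields come from closed allow-lists; anything else falls back to manual triage, so injected instructions in a ticket body cannot smuggle a new action through the classifier. The category names are illustrative for this hypothetical feature.

```python
import json

ALLOWED_CATEGORIES = {"billing", "bug", "account", "how_to"}
ALLOWED_PRIORITIES = {"low", "normal", "urgent"}

def parse_classification(raw: str):
    """Return (category, priority), or None to force manual triage."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return None
    category, priority = data.get("category"), data.get("priority")
    if category not in ALLOWED_CATEGORIES or priority not in ALLOWED_PRIORITIES:
        return None
    return category, priority

print(parse_classification('{"category": "billing", "priority": "urgent"}'))
# ('billing', 'urgent')
print(parse_classification('{"category": "run_shell", "priority": "urgent"}'))
# None: label outside the allowed set, ticket routed to manual triage
```

This is also why the prompt-injection severity in the table is "Low": the output is a label drawn from a fixed set, and no model output reaches the customer directly.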
4. Mitigations and data subject rights: privacy notice updated to disclose AI involvement; access requests answered with the classification plus the model and provider; deletion handled at the help-desk record level (the provider does not retain).
DPO consultation: required, performed before launch. Residual risk after mitigation: low. Prior consultation with the supervisory authority under Article 36: not required.
That is a DPIA for this feature. It runs six focused pages including the risk table and it would survive a reviewer at a DPA, because it names the AI-specific risks (the four CNIL list plus the Article 32 security risk class), walks the legitimate-interest balancing test rather than asserting it, and does not pretend to mitigations that do not exist.
Six focused pages that name the right risks are more defensible than a thirty-page generic template. Length is not the test. The test is whether the risks named match the architecture, whether automation bias gets a real override-rate mitigation, and whether the human review described in the document matches the human review that actually happens.
Supervisory authorities read DPIAs. The patterns that get flagged are predictable and almost always avoidable.
The "we have a legitimate interest" shortcut. Legitimate interest is the legal basis. A DPIA is about the risk to individuals. They answer different questions, and writing "the legal basis is legitimate interest" in the necessity section is not a substitute for the three-step balancing test. After EDPB Opinion 28/2024, the shortcut is much riskier than it used to be.
Describing the ideal, not the reality. Your DPIA should describe what actually happens, not the version of the process that lives in the runbook. If the human review step is "the support lead glances at the dashboard once a week," do not write it up as "rigorous human oversight." Reviewers at supervisory authorities have read enough DPIAs to recognise the gap, and overstating mitigations is worse than admitting them.
The "residual risk is low after mitigation" line gets challenged when the mitigation is described in the passive voice. "Output is reviewed before action" with no named role, no frequency, and no sample size is the version a DPA will pull on. "The support lead reviews 100% of tickets flagged urgent within four hours, with weekly sampling of non-urgent flags" gives the regulator something to verify. The first version reads like a wish. The second reads like a process.
SaaS features inherited from a provider. If you enable an AI feature in a third-party SaaS (Salesforce Einstein, HubSpot AI, Zendesk AI Agents), you are still the controller. The vendor's DPIA does not satisfy yours. Your assessment has to cover the data flowing from your customers through the vendor's feature, the vendor's sub-processors, and the residual risk for your specific use case. Vendor documentation is an input, not a substitute.
Forgetting to update. GDPR Article 35(11) requires the DPIA to be reviewed when the processing changes. Most teams write the DPIA once and shelve it. A model swap, a prompt change that adds new data fields, a sub-processor addition (Anthropic adding Google Cloud TPU to its sub-processor list in October 2025 was the wake-up call for many teams), a shift from cloud API to fine-tuned model. Each is a DPIA-update trigger. Set a review trigger that fires on any architectural change.
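One cheap way to make the review trigger fire automatically is to fingerprint the parts of the architecture the DPIA describes and compare against the fingerprint recorded at sign-off. This is a hypothetical sketch; the field names (`model`, `prompt_fields`, `sub_processors`) are an assumption about what your config records, not a standard.

```python
import hashlib
import json

def processing_fingerprint(config: dict) -> str:
    """Hash only the DPIA-relevant parts of the feature config."""
    relevant = {k: config[k] for k in ("model", "prompt_fields", "sub_processors")}
    blob = json.dumps(relevant, sort_keys=True)
    return hashlib.sha256(blob.encode()).hexdigest()

# Fingerprint recorded when the DPIA was signed off.
dpia_signed_off = processing_fingerprint({
    "model": "gpt-4o",
    "prompt_fields": ["subject", "body"],
    "sub_processors": ["openai"],
})

# Current config: a prompt change added a new data field.
current = {
    "model": "gpt-4o",
    "prompt_fields": ["subject", "body", "account_tier"],
    "sub_processors": ["openai"],
}
if processing_fingerprint(current) != dpia_signed_off:
    print("Architectural change detected: DPIA review required")
```

Wired into CI, this turns Article 35(11) from a calendar reminder into a check that fails the build when the processing drifts from what the DPIA covers.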
Pick the AI feature you are closest to shipping. Walk the EDPB criteria once; you will hit two or more, and that ends the trigger debate. Then sit down with a four-page template that has these sections in this order: description of processing, necessity and proportionality with the legitimate-interest three-step (EDPB Opinion 28/2024 + CNIL 19 June 2025 LIA recommendations), risk assessment that names CNIL's four — training-data misuse, automated discrimination, synthetic content about real people, automation bias — plus Article 32 security risks like prompt injection, and mitigations that name a real role with a real frequency.
If the feature could fall in any Annex III category, add a fundamental-rights annex while you are writing. The cost is one section now. The cost of writing it as a separate FRIA after August 2026 is several days.
If the feature is already in production without a DPIA, the work is the same. Do it, document the gap honestly, and treat the launch date as the date the assessment should have been done. A retrospective DPIA that admits the timeline is more defensible than no DPIA at all.