A clause-by-clause read of OpenAI's DPA in April 2026: what changed in the last 12 months, what still trips deployers, and the operational decisions that follow each clause.
OpenAI's Data Processing Addendum is not the same document it was 18 months ago. The text has not changed dramatically. The structure and most of the clauses are recognisable from the February 2024 version that anyone who signed early would have on file. What has changed is the operational context around the document. There is a new sub-processor list update from April 2025. There is an EU data residency option that did not exist when most deployers signed. There has been a forced sub-processor change (Mixpanel, November 2025). And there is an annulled Italian fine and a still-pending DPF challenge that together set the legal frame in which any read of the DPA happens today.
This piece walks the clauses that matter for an actual deployment in April 2026, not as a legal exposition but as a deployment checklist. For each clause, the question is the same: what does this mean for the way I am sending data to OpenAI right now, and what is the operational decision it forces.
The first thing the DPA does not do is cover every OpenAI product. It covers specific ones, and the gaps between them are where most teams stumble.
The API DPA, the document under openai.com/policies/data-processing-addendum, applies to the OpenAI API, ChatGPT Enterprise, ChatGPT Team, and ChatGPT Edu. Those are the business products. They share a common Article 28 contract with OpenAI Ireland Limited as the EEA-facing data processor. Training on customer data is off by default. The 30-day retention window and the abuse-monitoring framework apply.
Outside that perimeter sits ChatGPT Free and ChatGPT Plus. Those are consumer products under consumer terms. There is no DPA. Training on prompts is on by default (the user has to opt out via Settings → Data Controls). And the data subject for those products is the individual user, not the company whose laptop the user is typing on.
The trap is the gap between those two regimes inside the same company. A team signs the API DPA, configures the business account, and assumes the company is covered for "OpenAI". Six months later a developer is logged into a personal ChatGPT Plus account on a work laptop and is pasting client work into it. The DPA does not reach that account. The DPA cannot reach that account. The legal basis question for that paste is governed by the consumer terms the developer accepted personally, and the controller posture sits with whoever did or did not authorise the paste under your AI use policy. (See the shadow AI piece for the operational fix.)
I think this is the single most common misreading of the OpenAI DPA in 2026: builders treat the document as a brand-level promise when it is a contract about specific endpoints, and the consumer ChatGPT URL is not one of them.
OpenAI's API DPA permits retention of API inputs and outputs for up to 30 days for trust and safety purposes (abuse monitoring, detection of policy violations). After that window, content is deleted. This is the clause people scan first and worry about most, and the worry is mostly proportionate. For a generic SaaS feature processing low-sensitivity prompts, 30 days is fine. It is a defensible retention period for a security purpose, and the downstream DPIA can document it.
Where the worry becomes load-bearing is health, legal, financial, and special-category data. There the question is whether any retention by a third party is acceptable, and the answer for a lot of teams is no. The published path to Zero Data Retention runs through OpenAI's data controls: eligible customers can apply for ZDR or for the lighter "Modified Abuse Monitoring", subject to OpenAI's prior approval and additional terms. Once approved, a Data Retention tab appears in Settings → Organization → Data controls and you configure at the project level.
That is the path most articles describe. There is an easier one that is less well known.
The easier path is European data residency: turn it on for an eligible API project and requests are processed in-region with zero data retention attached, no separate ZDR application required. This is the change to OpenAI's commercial posture I think is most under-discussed. The February 2025 launch was a small announcement post, and most builders read it as an enterprise-tier residency feature, not as the de facto ZDR rail it actually is for the API. If you process EEA data and you have not turned this on, it is the first item on your next sprint.
Article 28(2) of the GDPR requires the processor to have authorisation from the controller before engaging sub-processors. The OpenAI DPA addresses this by maintaining a public sub-processor list and providing notification of changes, with an objection window for new entries.
The list is the load-bearing part of that clause, and most deployers have never read it. They have read "Microsoft Azure", noticed it as the primary inference sub-processor, and stopped. The November 2025 Mixpanel incident is what happens when you stop there.
The dates matter. On 8 November 2025 Mixpanel discovered unauthorised access to part of its systems. The next day, 9 November, the company became aware that an attacker had exported a dataset containing limited customer-identifiable and analytics information. The intrusion path was a smishing campaign that compromised an employee account. Mixpanel notified affected customers and shared the affected dataset with OpenAI on 25 November. OpenAI terminated its use of Mixpanel on 27 November and disclosed the incident publicly the next day.
What was in the Mixpanel dataset for OpenAI users: account holder name, email address, approximate location based on browser (city, state, country), operating system and browser, organisation or user IDs, and referring website. The affected scope was limited to the API platform (platform.openai.com) and to a subset of ChatGPT users who had submitted help-centre tickets or were logged into the platform site. What was not in scope: chat content, prompts, completions, API requests, passwords, API keys, payment details, government IDs.
There are two operational lessons. The first is that the sub-processor list is a deployment artefact, not a legal artefact. Read it the day you sign, then read it again every quarter. The second is that the objection window only matters if you have a process to act inside it. If a new sub-processor lands on the list and your organisation has no one who reads the notification, no one to escalate to, and no decision rule for "is this acceptable", the objection window is theoretical. Build the process before the next change.
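The quarterly read can be made mechanical. A minimal sketch of the review aid, assuming you keep last quarter's list as a plain set in version control and diff the freshly read list against it (the entries below are illustrative, not OpenAI's actual list):

```python
# Quarterly sub-processor review aid: diff this quarter's list
# against the one on file, so additions hit a named owner while
# the objection window is still open.
# Entry names are illustrative, not OpenAI's actual list.

def diff_subprocessors(previous: set[str], current: set[str]) -> dict[str, set[str]]:
    """Return entries added and removed since the last review."""
    return {
        "added": current - previous,    # new entries: objection window starts
        "removed": previous - current,  # e.g. Mixpanel after November 2025
    }

last_quarter = {"Microsoft Azure", "Mixpanel"}
this_quarter = {"Microsoft Azure"}

changes = diff_subprocessors(last_quarter, this_quarter)
if changes["added"]:
    print(f"ACTION: review new sub-processors: {changes['added']}")
if changes["removed"]:
    print(f"NOTE: removed since last review: {changes['removed']}")
```

The point is not the ten lines of code; it is that the diff output lands in front of someone with the authority to object before the window closes.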
The single most consequential change to the operational footprint of the OpenAI API in the last 18 months was not in the DPA text. It was in a product announcement.
On 5 February 2025, OpenAI launched data residency in Europe for ChatGPT Enterprise, ChatGPT Edu, and the API Platform. On 7 May 2025, the same offering rolled out to Asia (Japan, India, Singapore, South Korea). In October 2025, it expanded to the United Kingdom, the United States, Canada, Australia, and the United Arab Emirates. On 16 January 2026, OpenAI added in-region GPU inference for eligible ChatGPT Enterprise, ChatGPT Edu, and ChatGPT for Healthcare customers in the U.S. and Europe.
The reason this matters more than any clause-by-clause read of the DPA: the data transfer question for an EU customer using OpenAI changes shape entirely once residency is on. You are no longer arguing about the supplementary measures supporting an EEA-to-US transfer under SCCs. You are processing in-region. The Schrems II analysis becomes a much shorter document. Your TIA becomes a much shorter document. And, as noted above, ZDR for the API comes attached.
The catch (and there is one) is that this is a configuration choice the customer has to make. It is not the default. New ChatGPT Enterprise and Edu accounts can choose at-rest storage in Europe. API customers select eligible endpoints and project-level routing. If you signed your DPA in 2023 or early 2024 and have not gone back to the data controls page, your deployment is still routing transatlantically. The DPA permits both. Only one of them gives you in-region processing.
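One way to catch the still-routing-transatlantically case is a deployment-time check on the client's base URL. A hedged sketch, assuming the EU regional endpoint is `eu.api.openai.com` as documented at the time of writing (verify against OpenAI's current docs before wiring this into CI):

```python
# Deployment-time check: an EEA deployment that declares a residency
# requirement should not be pointed at the default global endpoint.
# The regional base URL below is an assumption from OpenAI's
# residency docs; confirm it before relying on this check.

REGIONAL_BASES = {"https://eu.api.openai.com/v1"}
DEFAULT_BASE = "https://api.openai.com/v1"

def check_residency(base_url: str, requires_eu_residency: bool) -> str:
    if not requires_eu_residency:
        return "ok: no residency requirement declared"
    if base_url in REGIONAL_BASES:
        return "ok: in-region endpoint configured"
    return (f"FAIL: EEA deployment still routing via {base_url}; "
            "enable European processing on eligible projects")

print(check_residency(DEFAULT_BASE, requires_eu_residency=True))
```

Routing is only half the configuration; the at-rest storage choice for Enterprise and Edu accounts still has to be made in the organisation settings.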
For deployments that are not on EU residency, the international transfer mechanism stack is what carries the weight.
OpenAI Ireland Limited is the data exporter for EEA and Swiss data. That has been the structure since OpenAI restructured its EEA operations in early 2024. When OpenAI Ireland transfers customer data outside the EEA to provide the services (most often to OpenAI's US affiliates and to sub-processors operating in the US), the transfer rides on the 2021 Standard Contractual Clauses, Module Two (controller to processor). On top of that, OpenAI is certified under the EU-US Data Privacy Framework, which provides an Article 45 adequacy basis for transfers to DPF-listed entities.
The belt-and-suspenders posture (DPF on top, SCCs underneath) is intentional. If the DPF survives every challenge, you do not need the SCCs. If the DPF goes down, the SCCs catch the fall and your day-one obligation is to update your TIA, not to renegotiate the underlying contract.
The challenge to watch is Latombe v. Commission. On 3 September 2025, the General Court of the EU dismissed the original challenge, brought by French MP Philippe Latombe, and upheld the DPF. In October 2025, Latombe filed an appeal to the CJEU. The appeal is pending and will take time. This is not a 2026 risk for most deployments. But it is the reason the SCCs sitting underneath the DPF in OpenAI's contract are not redundant. (For the longer transfer-stack analysis see "Sending data to OpenAI, Anthropic, or Google?".)
I am not sure the CJEU will reverse the General Court. The General Court's reasoning was unusually strong and the appeal grounds are narrower than the trial-level arguments. But "I think it survives" is not a risk-management posture, and the SCCs in the OpenAI DPA are the reason that is fine.
The clauses at the back of the DPA (liability, indemnification, breach notification) get less attention than the training and retention clauses, and they are where most of the asymmetric risk lives.
Breach notification. The DPA commits OpenAI to notify the customer "without undue delay" after becoming aware of a personal data breach. There is no specific hour count. Under GDPR Article 33, the controller has 72 hours from awareness to notify the supervisory authority. If OpenAI takes 36 hours to send the notification, the controller has 36 hours left to draft and file. That is tight. And "tight" is the operational implication of "without undue delay" without a specific clock. The Mixpanel incident is a good calibration case: Mixpanel discovered the access on 8 November 2025; OpenAI received the affected dataset on 25 November; the public disclosure went out on 28 November. Twenty days from initial discovery to disclosure. For a sub-processor incident, that is fast. For your own controller-side notification clock, that timing would have failed Article 33.
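The arithmetic above can be made explicit. A sketch of the conservative reading the piece uses, in which the 72-hour clock is measured from the processor's awareness rather than the controller's, so every hour of "without undue delay" is an hour off your drafting time:

```python
# Conservative Article 33 clock: measure the 72-hour window from the
# processor's awareness of the breach, and compute how much of it is
# left once the processor's notification actually reaches you.
from datetime import datetime, timedelta

def remaining_article33_hours(processor_aware: datetime,
                              controller_notified: datetime) -> float:
    """Hours left to notify the supervisory authority, under the
    conservative assumption that the clock started at the
    processor's awareness, not the controller's."""
    deadline = processor_aware + timedelta(hours=72)
    return (deadline - controller_notified).total_seconds() / 3600

aware = datetime(2026, 4, 1, 9, 0)
notified = aware + timedelta(hours=36)  # processor takes 36h to notify
print(remaining_article33_hours(aware, notified))  # → 36.0
```

Article 33(1) formally runs the clock from the controller's own awareness, but a regulator can probe when you reasonably should have been aware, which is why the conservative version is the one to plan around.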
Liability caps. Standard cloud-vendor pattern: liability is capped at fees paid in the preceding twelve months, with carve-outs for certain breaches. For most API customers paying inference fees in the four-figure range per month, that cap will not cover a real GDPR exposure event. This is not unique to OpenAI (Anthropic and Google Cloud have similar structures), but it is the reason your DPIA cannot lean on contractual indemnity to bring residual risk down.
What is silent. The DPA does not commit to a specific incident-notification clock. It does not commit to specific data-localisation outside the residency product. It does not include unlimited indemnification for processor-side breaches. It does not address the consumer ChatGPT product. And it does not pre-authorise every possible new sub-processor: the objection-window mechanism is the safety valve, and the safety valve only works if the customer has a process to use it.
A DPA is an Article 28 instrument. It binds a processor to handle the controller's data on instructions, with security, with sub-processor controls, with breach notification, with assistance for data subject requests. It does one job. It is not a substitute for the rest of the GDPR.
The Article 28 box is the easiest one to tick because OpenAI has done the work for you. There is a document, you sign it, the box is ticked. The boxes the DPA does not tick are the ones where most enforcement actually lands: the Article 6 legal basis for the processing, transparency to the people whose data you send, the DPIA that documents residual risk, and the AI use policy that keeps consumer ChatGPT out of the workflow.
The DPA is necessary. It is a long way from sufficient.
Three things to do this quarter, in order.
First, log into your OpenAI organisation settings and verify that the DPA is actually executed for your account. A surprising number of teams assume it is automatic and discover during a regulator request that it was never countersigned. Two minutes of work; one administrative panic averted.
Second, on the same screen, check your data residency configuration for every project. If you are an EEA deployment that has not enabled European processing, that is the largest single privacy improvement available to you this quarter, and as noted, it brings ZDR for the API along for the ride.
Third, calendar a quarterly read of the sub-processor list with a named owner. The Mixpanel incident is the proof that the list moves and that the movement matters. The objection window only protects you if someone is watching it.
The OpenAI DPA is a workable contract. It has held its shape across a year of regulatory turbulence: a Court of Rome annulment, a General Court Latombe decision, a sub-processor breach, three data-residency expansions, and a continually shifting EDPB position on AI training. The clauses are doing the work the GDPR asks them to do. The part of the work the contract cannot do is yours, and it is the part most readers of the document forget.