A trace-walk of one OpenAI API call through every entity in the cascade, with the Article 28, CLOUD Act, Article 48, and DMA layers stacked on top.
A user types into your application. Your backend constructs a prompt that includes the user's name and a support-ticket excerpt. Your code calls `client.chat.completions.create(...)`. The SDK opens a TLS connection. A few hundred milliseconds later a response arrives, and the user sees a generated reply on their screen. From your point of view the call is one round trip.
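For concreteness, a minimal sketch of that call using the Python SDK. The model name, the user name, and the ticket excerpt are illustrative placeholders, not values from any real deployment.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Illustrative prompt construction: a user's name plus a support-ticket excerpt,
# i.e. personal data leaves your perimeter in this single call.
user_name = "Maria K."                                       # placeholder
ticket_excerpt = "My invoice from 3 March was charged twice."  # placeholder

response = client.chat.completions.create(
    model="gpt-4o",  # any chat model; illustrative choice
    messages=[
        {"role": "system", "content": "Draft a polite support reply."},
        {"role": "user", "content": f"Customer {user_name} writes: {ticket_excerpt}"},
    ],
)

print(response.choices[0].message.content)  # the generated reply shown to the user
```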
That single round trip is actually a trace through at least four legal entities, three jurisdictions, two cross-border transfers, and a sub-processor cascade that has moved at least five times in the past eleven months. Your contract obligations apply to every waypoint in the trace, and the most useful way to understand them is to follow the call.
This piece walks one OpenAI API call as the protagonist. Each waypoint along the trace gets its own H2: who the entity is, what the controller-processor posture is at that point, and what the load-bearing legal question is. The mapping translates directly to Anthropic, Google, and any multi-cloud AI vendor, with the entities renamed but the structure intact.
If you want the contract layer specifically, read the OpenAI DPA breakdown. If you want the transfer-mechanism analysis, read the 2026 EU-US transfer state. What follows is the cascade-trace layer that connects them.
The trace begins inside your code. You are the controller; more precisely, whichever entity in your company holds the customer relationship is. The processing purpose, the legal basis, and the data minimisation decisions are yours.
The decision points at this waypoint:
The waypoint posture: you are the controller, the next entity is your processor, and the rest of the cascade is sub-processor territory under Article 28. Everything that happens after this waypoint depends on the contract you signed with the next one.
The TLS connection terminates at OpenAI's API gateway. For an EEA-facing customer in 2026, the contracting entity is OpenAI Ireland Limited. This was not always true; OpenAI restructured its EEA operations in early 2024 to put the Irish entity in front of EEA customer relationships, and the data-exporter status sits with OpenAI Ireland in the current DPA.
The posture at this waypoint:
The load-bearing question at this waypoint is which version of the cascade applies to your call. With EU residency on, the trace stops at a European GPU and the next two waypoints disappear from the legal exposure analysis. With EU residency off, the call continues across the Atlantic.
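Where the residency switch shows up in code is thin but real: an EU-resident project is addressed through OpenAI's regional base URL rather than the global one. The endpoint below reflects OpenAI's published EU base URL at the time of writing; treat it as something to verify against your own project configuration rather than a given, and the API key shown is a placeholder.

```python
from openai import OpenAI

# EU data residency is configured on the OpenAI project itself; an EU-resident
# project is then addressed via the regional base URL rather than the global one.
# Verify this URL against your project settings before relying on it.
client = OpenAI(
    base_url="https://eu.api.openai.com/v1",
    api_key="sk-proj-...",  # key scoped to the EU-resident project (placeholder)
)
```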
If EU residency is off, the call now crosses the Atlantic. OpenAI Ireland is the data exporter; OpenAI Inc (and its operating affiliate OpenAI OpCo LLC) is the data importer. This is the SCC Module Two transfer in action.
The posture at this waypoint:
The load-bearing question here is the legal-entity question, not the data-location question. Microsoft pledged that EU AI user data, including Copilot interactions, would be stored and processed in the EU by the end of 2025. The pledge is welcome but partial: storage location is not the same as legal reach. A US-incorporated entity remains subject to US legal process even when the data sits in an EU region.
The OpenAI inference workload runs on cloud infrastructure that OpenAI does not own end-to-end. Microsoft Azure is the primary cloud sub-processor for OpenAI's API platform, under a long-running commercial relationship that has been public since 2019 and is documented in OpenAI's sub-processor list.
The posture at this waypoint:
A trap many teams hit: the OpenAI API and the Azure OpenAI Service are two different products with two different contracts and two different cascades. Under the OpenAI API contract, OpenAI is your processor and Microsoft Azure appears in OpenAI's sub-processor list at this waypoint. Under the Azure OpenAI Service contract, Microsoft is your processor directly, OpenAI Inc is largely absent from the cascade (the model weights are licensed to Microsoft for the service), and the trace looks completely different. Read your invoice to know which cascade applies to your call.
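The split is visible in the client configuration before you ever open a contract. A sketch of both cascades side by side; the Azure endpoint, key, API version, and deployment name are placeholders for your own resource.

```python
from openai import OpenAI, AzureOpenAI

# Cascade A: the OpenAI API. OpenAI is your processor; Microsoft Azure appears
# further down, inside OpenAI's own sub-processor list.
openai_client = OpenAI(api_key="sk-proj-...")  # placeholder key

# Cascade B: the Azure OpenAI Service. Microsoft is your processor directly;
# OpenAI Inc is largely absent from this cascade.
azure_client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",  # placeholder
    api_key="<azure-key>",                                      # placeholder
    api_version="2024-06-01",                                   # illustrative GA version
)

# Same SDK surface, different contract and different cascade behind it.
reply = azure_client.chat.completions.create(
    model="<your-deployment-name>",  # Azure routes by deployment name, not model name
    messages=[{"role": "user", "content": "ping"}],
)
```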
Below the cloud sub-processor sits a layer most teams never read: the sub-sub-processors. A cloud provider does not always run every workload on infrastructure it owns. As of 2025-2026, three things changed at this layer that matter for an OpenAI cascade:
The posture at this waypoint:
The model generates a completion. The response travels back through the cascade in reverse: GPU → cloud sub-processor → OpenAI Inc or Ireland → your application. From a trace point of view this is symmetric to the outbound path, but two things happen on the return that the outbound did not.
The provider-side log entry. OpenAI's abuse-monitoring buffer captures the call (input plus output) on the way back; for most deployments that means a default 30-day retention. For Zero Data Retention customers (including any OpenAI API deployment with EU data residency enabled) the buffer is empty. (See the OpenAI DPA piece for the residency-equals-ZDR mechanic.)
Your audit log entry. Your application probably logs the request and response for debugging, observability, or compliance. That log is its own waypoint in the trace and its own sub-processor relationship. If the log lives in Datadog, Datadog is a sub-processor. If the log lives in Sentry with `send_default_pii=True`, Sentry is a sub-processor and is also capturing the prompt body unless `OpenAIIntegration(include_prompts=False)` is set. If the log lives in Langfuse, LangSmith, Helicone, or Phoenix, each tool has masking and PII redaction features that are off by default in the open-source tier. (See the logging-and-monitoring piece for the architectural fix.)
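The Sentry flags named above look like this in practice (the DSN is a placeholder); the same pattern, an explicit opt-out at the integration level, applies in spirit to the other observability tools.

```python
import sentry_sdk
from sentry_sdk.integrations.openai import OpenAIIntegration

sentry_sdk.init(
    dsn="https://<key>@<org>.ingest.sentry.io/<project>",  # placeholder DSN
    send_default_pii=True,  # without the integration flag below, prompt and completion bodies reach Sentry
    integrations=[
        # Keep the LLM spans and token counts, drop prompt/completion content from events.
        OpenAIIntegration(include_prompts=False),
    ],
)
```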
The posture at this waypoint:
Five cascade movements between April 2025 and March 2026 illustrate why this is not a one-time configuration question:
| Date | Event | Cascade impact |
|---|---|---|
| April 2025 | OpenAI sub-processor list update | Routine refresh; the document version most current contracts reference |
| October 2025 | Anthropic-Google Cloud TPU expansion announcement | Anthropic cascade now spans AWS + Google Cloud; deployers without multi-cloud authorisation in their DPA had a contract gap |
| 8-28 November 2025 | Mixpanel incident (Mixpanel breach 8 Nov, OpenAI termination 27 Nov, public disclosure 28 Nov) | An analytics sub-processor that never saw chat content was still a forced sub-processor change; the objection-window mechanism was tested in production for the first time |
| 7 January 2026 | Microsoft activates Anthropic models inside Microsoft 365 Copilot | Default ON for most commercial cloud, OFF for EU/EFTA/UK; admins had a 30-day window from the 8 December 2025 visibility date to act |
| 26 March 2026 | Anthropic adds Palantir, AWS GovCloud, and GCP Vertex AI with FR-High Assured Workload to its sub-processor list | The first two carry ITAR-compliant scope; the Vertex AI entry does not. ITAR scope on Anthropic is now conditional on which sub-processor handled the call, which is not a position most deployer DPAs were drafted to anticipate |
Each of these required a customer-side response within a contractual window. The Mixpanel case is the proof that the response window is real and short. The Microsoft 365 Copilot case is the proof that the cascade can grow a new layer overnight when a vendor adds a new model partner. The 26 March 2026 Anthropic update is the proof that the cascade can also grow conditional regulatory scope inside a single vendor relationship: ITAR coverage now depends on which sub-processor was routed to.
Three legal layers are stacked on top of every waypoint in the cascade, and each one bites at a different point.
Article 28(2), the authorisation layer. The processor cannot engage another processor without prior specific or general written authorisation of the controller. Almost no AI vendor will sign specific authorisation; the practical reality is general authorisation with the right to object on changes. Without an objection right and without a process to use it, the authorisation is structurally broken. This bites at every waypoint where the cascade adds or replaces a sub-processor.
The CLOUD Act vs Article 48 conflict. The US CLOUD Act compels US providers to disclose data on demand from US authorities; GDPR Article 48 prohibits transfers or disclosures based on third-country judicial orders absent an international agreement. The conflict is unresolved as of April 2026 and there is no clean controller-side fix. The position regulators reject hardest is pretending the conflict does not exist. What survives audit is documented awareness, documented residual risk, and named mitigations (BYOK, data minimisation, EU-headquartered alternatives for the most sensitive workloads).
The DMA / Cloud and AI Development Act layer. On 18 November 2025, the EU Commission opened DMA market investigations into AWS and Microsoft cloud services. The proposed EU Cloud and AI Development Act is in the Commission's Q1 2026 work programme. Both signal that the regulator-side appetite for clarifying the cloud-cascade exposure is rising. Track them. The most likely 2026 outcome is new obligations on the cloud entities at waypoint 4, which propagate up the cascade to your contract surface.
What the trace leaves behind is a register, one row per direct sub-processor; the authorisation and review dates are placeholders to fill from your own DPA and review cycle.

| AI provider | Direct sub-processor | Role | Legal entity / country | CLOUD Act exposure | Authorisation | Last reviewed |
|---|---|---|---|---|---|---|
| OpenAI | Microsoft Azure | Compute + storage (inference) | Microsoft Corp / US | Yes | General, [date] | YYYY-MM-DD |
| OpenAI | CoreWeave | GPU capacity | CoreWeave Inc / US | Yes | General, [date] | YYYY-MM-DD |
| OpenAI | AWS (post Nov 2025 partnership) | GPU capacity | Amazon Web Services Inc / US | Yes | General, [date] | YYYY-MM-DD |
| Anthropic | AWS | Trainium compute | Amazon Web Services Inc / US | Yes | General, [date] | YYYY-MM-DD |
| Anthropic | Google Cloud | TPU compute | Google LLC / US | Yes | General, [date] | YYYY-MM-DD |
| Anthropic | Palantir, AWS GovCloud, Vertex AI FR-High (added 26 Mar 2026) | ITAR / regulated workloads | Palantir Technologies / US; AWS Inc / US; Google LLC / US | Yes (US entities); ITAR scope conditional on which sub-processor handled the call | General, [date] | YYYY-MM-DD |
Three things to do this week, in order.
First, pick the AI vendor in your stack with the most active sub-processor cascade (almost certainly OpenAI or Anthropic) and trace one real call through every waypoint above. Write down the entity name at each waypoint. The exercise alone surfaces gaps in your existing register.
Second, confirm the change-notification feed for every active vendor is subscribed to a shared inbox a named owner monitors. The Mixpanel case is the proof that the notification window is short and that "let me check who is on the distribution list" is not a process. The objection window only protects you if someone is actively watching it; a minimal watcher sketch follows the third step below.
Third, calendar the next two cascade events on the watch list: the EU Commission DMA cloud investigation outcome (sometime in 2026) and the EU Cloud and AI Development Act publication (Q1 2026 work programme). Both are upstream of your contract and both will trigger downstream changes.
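On the second step: a crude page-diff watcher is no substitute for the vendor's official notification channel or a named owner, but it does catch silent edits to a published sub-processor list between notifications. The URL and state-file path in this sketch are placeholders, not real endpoints.

```python
import hashlib
import urllib.request

# Placeholder URL: point this at the vendor's published sub-processor list page.
SUBPROCESSOR_URL = "https://example.com/vendor/sub-processors"
STATE_FILE = "subprocessor_list.sha256"

def current_hash() -> str:
    """Fetch the published list and hash it."""
    with urllib.request.urlopen(SUBPROCESSOR_URL) as resp:
        return hashlib.sha256(resp.read()).hexdigest()

def check_for_change() -> bool:
    """Return True when the published list changed since the last run."""
    new = current_hash()
    try:
        with open(STATE_FILE) as f:
            old = f.read().strip()
    except FileNotFoundError:
        old = ""
    with open(STATE_FILE, "w") as f:
        f.write(new)
    return bool(old) and new != old

if __name__ == "__main__":
    if check_for_change():
        print("Sub-processor list changed: start the objection-window clock.")
```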
Treat the trace as a quarterly review. The register is the artefact a regulator can ask for in an audit, and the only thing that turns "we use OpenAI" into a verifiable answer.
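If it helps to keep the register machine-readable, the table columns above map onto a small structure like the sketch below; every value shown is illustrative and should be replaced from your own DPA and review dates.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class CascadeEntry:
    """One row of the sub-processor register; field names mirror the table above."""
    ai_provider: str
    sub_processor: str
    role: str
    legal_entity: str
    country: str
    cloud_act_exposure: bool
    authorisation: str   # e.g. "general, <date of DPA signature>"
    last_reviewed: date

# Illustrative entry only; dates and authorisation text are placeholders.
register = [
    CascadeEntry(
        ai_provider="OpenAI",
        sub_processor="Microsoft Azure",
        role="Compute + storage (inference)",
        legal_entity="Microsoft Corp",
        country="US",
        cloud_act_exposure=True,
        authorisation="general, <DPA date>",
        last_reviewed=date(2026, 4, 1),
    ),
]
```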