What changed for the three providers in 2025-2026: Anthropic's August 2025 consumer shift, the October 2025 Google TPU sub-processor expansion, the Court of Rome OpenAI annulment, and the Latombe DPF appeal pending at the CJEU.
The 2024 comparison article on this topic, repeated across a hundred sites, said roughly the same thing about all three providers: API data is not used for training, consumer terms are messier, all three offer DPAs, all three are SOC 2 audited. That comparison is still true at the surface but misleading in the details.
Five things shifted between January 2025 and April 2026.
Anthropic moved consumer plans to opt-out training. On 28 August 2025, Anthropic published an update to its Consumer Terms and Privacy Policy. The Free, Pro, and Max plans switched from "no training by default" to a choice the user must make actively. Users who did not opt out by 8 October 2025 are now contributing chats and Claude Code sessions to training, and their retention extended from 30 days to 5 years for the data covered by the choice. The change applies only to consumer plans. Claude for Work, Claude for Government, Claude for Education, the API, Anthropic on Amazon Bedrock, and Claude on Google Cloud Vertex AI keep the existing privacy posture. The commercial and consumer sides of Anthropic now look very different.
Anthropic's compute became a multi-cloud sub-processor cascade. On 23 October 2025, Anthropic announced an expansion of its Google Cloud relationship to include up to one million Google Cloud TPUs alongside the existing AWS Trainium footprint. Anthropic now runs Claude across three chip platforms (AWS Trainium, Google TPU, and NVIDIA GPU), and the Google capacity alone is on a gigawatt scale, with a follow-on agreement in April 2026 tripling the TPU commitment. Your sub-processor authorisation under GDPR Article 28(2) needs to permit at least Amazon Web Services and Google Cloud on the Anthropic side. The older comparison articles that still list only AWS are out of date.
OpenAI published an updated sub-processor list in April 2025. The list still leans on Microsoft Azure as the principal compute sub-processor for the API and ChatGPT enterprise products, but the April 2025 update added named affiliates and review-flow sub-processors for content flagged by automated systems. If your DPA references an older version of the list, the controller-side authorisation may have lapsed under the general-authorisation flow.
The Court of Rome annulled the Italian Garante's €15 million OpenAI fine on 18 March 2026. The Garante's November 2024 decision had been the only finalised generative-AI GDPR enforcement action in Europe. The court annulled the fine and the order requiring OpenAI to run a public information campaign on AI training. This is a meaningful win for OpenAI, but it does not retroactively bless the practices the Garante objected to, and the Garante can still appeal. The Cross-Border Data Forum's read is that Europe's first generative-AI enforcement era ended with one fine, now annulled, and zero survivors.
The EU-US Data Privacy Framework survived its first court challenge. On 3 September 2025, the European General Court dismissed Philippe Latombe's challenge to the DPF in case T-553/23. Latombe filed an appeal to the CJEU in October 2025; that appeal is pending as of April 2026. The DPF is still good law for transfers to all three providers, and your TIA can rely on it, but the CJEU has invalidated two prior frameworks in Schrems I and Schrems II and the case is being treated by EU privacy lawyers as the third iteration of the same question.
Treating the Court of Rome ruling as "vindication" is the wrong read. The court annulled the Garante's specific decision on procedural and proportionality grounds. It did not endorse training generative models on scraped European personal data. The Garante can appeal to the Court of Cassation, and the CNIL, the AEPD, and the Hamburg DPA all retain their own enforcement positions on AI training data. Your risk model should treat the Garante decision as suspended, not overruled.
| | Anthropic (Claude API) | OpenAI (GPT API) | Google (Vertex AI / Gemini API) |
|---|---|---|---|
| Training on API / commercial data | No. Confirmed unchanged after August 2025 consumer shift. Bedrock and Vertex AI Claude also excluded. | No. Standard since March 2023. | No (Vertex AI). Gemini API: free tier may; paid tier varies by version. |
| Consumer training default | Free / Pro / Max: opt-out (default ON) since 28 August 2025. 5-year retention if not opted out. | ChatGPT Free / Plus: opt-out via "Chat history & training" toggle. Team / Enterprise: excluded. | Consumer Gemini: data may be used. Vertex / Gemini Advanced: check terms by version. |
| Standard API retention | Not retained beyond request processing. Safety-flagged content: longer (period not published). | 30 days for abuse and misuse monitoring. Zero-retention available for qualifying customers. | Vertex AI: configurable; not retained beyond processing unless tuning / caching used. Gemini API: varies. |
| Sub-processors | AWS (Trainium / GPU), Google Cloud (TPU since Oct 2025), NVIDIA partners. Multi-cloud. | Microsoft Azure as principal compute. April 2025 list adds affiliates and review-flow sub-processors. | Google Cloud only (Vertex). |
| Native EU region inference | No. US infrastructure. | Direct API: no. Azure OpenAI: yes (Azure regional). | Yes. Vertex AI europe-west1 / europe-west4 / europe-west3 etc. Global endpoint does NOT count. |
| DPA / SCCs | Yes. SCCs in template. | Yes. SCCs Module 2 in template. | Yes. Cloud Data Processing Addendum, GDPR-stable since 2018. |
| EU-US DPF certified | Anthropic, PBC: certified. Status valid pending Latombe CJEU appeal. | OpenAI L.L.C.: certified. Same caveat. | Google LLC: certified. Same caveat. |
| Zero-retention API option | Available for enterprise / API. | Available for qualifying customers. | Vertex AI: effectively the default; retention only if tuning / caching enabled. |
| IP indemnification (commercial) | Available in commercial terms. | Yes (API and ChatGPT Enterprise). | Yes (Vertex AI Generated Output Indemnification). |
| Last terms refresh by us | 2026-04-10 | 2026-04-10 | 2026-04-10 |
The table is a starting point. The five axes below are where the providers actually diverge, and where the differences matter for a vendor decision in the EU.
All three exclude API and commercial-tier data from training. The divergence is where they put the line between commercial and consumer.
Anthropic now has the sharpest divide. After 28 August 2025, the Consumer Terms cover Free, Pro, and Max with opt-out training and 5-year retention; the Commercial Terms cover Work, Government, Education, the API, Bedrock, and Vertex AI Claude with no training and the prior retention behaviour. The line is clean and the documentation is explicit. I think Anthropic's August 2025 shift is the most under-discussed change in the AI privacy space last year, because it inverted the default for the people most likely to paste sensitive content into a chat window.
OpenAI's divide is between the API (always excluded since March 2023) and ChatGPT consumer products (Free / Plus may be used unless the user toggles off, with 30-day retention for safety review even when off; Team / Enterprise excluded). The risk you actually carry is not the API contract. The risk is that your team uses the API correctly while another team member uses the ChatGPT consumer product with the same data. The API DPA does not cover that.
Google's divide is between Vertex AI (excluded, GCP terms apply) and consumer Gemini (data may be used). Gemini API outside Vertex sits in the middle and has changed across product iterations. Read the current version for the tier you use.
The published numbers are misleading without the qualifiers.
Anthropic's API does not retain inputs and outputs beyond processing. Safety-flagged content can be retained longer for review, and Anthropic has not published a maximum retention window for flagged content. That undocumented gap is the answer the procurement team should ask about, not the headline.
OpenAI retains API data for up to 30 days for abuse monitoring, deletes it after, and offers a zero-data-retention option for qualifying customers (typically enterprise tier or volume thresholds). 30 days is not negligible. If you process special-category data through the API, those 30 days are 30 days where OpenAI has copies of that data on its infrastructure. Most teams accept it. Health, legal, and financial cases often cannot.
Vertex AI gives you the most control: configurable retention, no retention beyond processing unless you opt into tuning or caching, and the Cloud Data Processing Addendum applies. Of the three, Vertex is the one where the retention answer is "what did you configure," not "what did the vendor publish."
This is the axis that moved most in the last 12 months and it is the axis the older comparison articles get most wrong.
Anthropic was a single-cloud relationship (AWS) until 2023, then expanded to GCP in 2024, then expanded materially on 23 October 2025 with the Google Cloud TPU agreement. Anthropic now runs Claude on Trainium, TPU, and NVIDIA across multiple regions, and the share split has moved over time. Your DPA needs to permit at least AWS and Google Cloud as Anthropic sub-processors, and your sub-processor change-notification subscription needs to be live so you do not miss the next addition.
OpenAI's principal sub-processor is Microsoft Azure for compute, with affiliates and review-flow sub-processors named in the April 2025 update. If your DPA references the November 2024 list or earlier, the general-authorisation clock has likely run on additions. Subscribe to OpenAI's notification list and re-paper the authorisation if a new entity has been added since you signed.
Google Vertex AI runs on Google Cloud and Google Cloud's sub-processors are documented in the Cloud DPA, which covers all GCP services in one place. The cascade is shorter here because Google is not depending on a third party for the GPU.
Of the three, only Vertex AI offers native EU regional inference for ML processing through europe-west1, europe-west3, europe-west4, and other regional endpoints. The Anthropic API is US-only. The OpenAI direct API is US-only. Azure OpenAI offers EU regional inference through Azure's global infrastructure but it is operationally a different relationship: you contract with Microsoft, the DPA is the Microsoft Online Services DPA, and OpenAI is the model provider behind the Microsoft service. Treat Azure OpenAI as a separate decision from OpenAI direct.
Vertex AI's global endpoint does NOT satisfy data residency. Google's documentation is explicit: the global endpoint does not guarantee processing in any specific region. If residency is the reason you picked Vertex, configure regional endpoints (europe-west1 / europe-west3 / europe-west4) explicitly and verify that your client SDK is hitting them. The default is the global endpoint in many SDK paths.
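One way to make the residency requirement fail loudly instead of silently falling back to the global endpoint is to gate the region at startup. The sketch below is illustrative: the allowlist and helper name are mine, and the `vertexai.init` call shown in the comment is the google-cloud-aiplatform SDK pattern; check the current SDK docs for the regions your models actually support.

```python
# Sketch: pin Vertex AI calls to an approved EU region and refuse
# anything else. Helper name and region allowlist are illustrative.

EU_REGIONS = {"europe-west1", "europe-west3", "europe-west4"}

def regional_api_endpoint(region: str) -> str:
    """Return the region-pinned Vertex AI hostname, or fail loudly."""
    if region not in EU_REGIONS:
        raise ValueError(f"{region!r} is not an approved EU region")
    return f"{region}-aiplatform.googleapis.com"

# With the google-cloud-aiplatform SDK, initialisation would look
# roughly like:
#   import vertexai
#   vertexai.init(project="my-project", location="europe-west1")
# Then verify outbound traffic hits the regional hostname below, not
# aiplatform.googleapis.com (the global endpoint).

print(regional_api_endpoint("europe-west1"))
```

The point of the allowlist is to turn a misconfigured region into a hard error at startup rather than a silent global-endpoint fallback deep in an SDK default.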
OpenAI offers Copyright Shield for ChatGPT Enterprise and the API. Google offers a Generated Output Indemnity for Vertex AI customers. Anthropic offers an indemnity in its Commercial Terms, narrower than Google's and less publicly described than OpenAI's. None of the three indemnities cover all use cases (typically excluded: deliberate prompts to elicit infringing output, certain jurisdictions, certain monetary caps), and the indemnity is usually capped at what you paid the provider in the prior 12 months. Read the cap before relying on it.
Strip out the brand names and the divergence on training, retention, and consumer terms is one pattern repeated three times.
All three providers pulled commercial data out of training during 2023-2024. All three then loosened their consumer terms during 2024-2025: OpenAI kept consumer training on by default behind the "Chat history & training" opt-out toggle, Google made consumer Gemini broad by default, and Anthropic flipped consumer Claude from no-training-by-default to opt-out-by-default in August 2025. The pattern is "commercial gets stricter, consumer gets looser." The divide between the two has widened, not narrowed, in the last 18 months.
The single most useful enforcement step for an EU team in April 2026 is the one with no procurement budget. Block the consumer URLs of the same provider you have on your enterprise contract. If you pay for the OpenAI API, block chatgpt.com at the proxy. If you pay for Vertex AI Claude, block claude.ai. If you have Vertex AI for Gemini, block gemini.google.com. This stops the worst version of the divergence: your team correctly using the commercial path while a colleague pastes the same data into the consumer URL.
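The rule above is mechanical enough to encode. A minimal sketch of the mapping, with the domains taken from this paragraph; the path keys are placeholders for however your vendor registry names the contracted products:

```python
# Sketch: map each contracted commercial path to the consumer domains
# to deny at the proxy. Domains are from the article; the path keys
# are placeholders, not any provider's product identifiers.

CONSUMER_DOMAINS = {
    "openai-api": {"chatgpt.com"},
    "vertex-ai-claude": {"claude.ai"},
    "vertex-ai-gemini": {"gemini.google.com"},
}

def proxy_denylist(contracted_paths: set[str]) -> set[str]:
    """Consumer domains to block, given the commercial paths under contract."""
    blocked: set[str] = set()
    for path in contracted_paths:
        blocked |= CONSUMER_DOMAINS.get(path, set())
    return blocked

print(sorted(proxy_denylist({"openai-api", "vertex-ai-gemini"})))
```

The output of `proxy_denylist` is what you feed into the proxy's domain-deny rule (for example, a Squid `acl dstdomain` list); the useful property is that the denylist is derived from the contract register, so a new commercial contract automatically implies its consumer-side block.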
What this means for the comparison: the choice between providers matters less than the choice between the commercial and the consumer path inside each provider. If your team is on the commercial path everywhere, all three are reasonable. If your team is mixing paths, the provider you picked is not the variable that matters.
For most use cases, all three providers are workable and the differences come down to model preference. There are three situations where the choice is forced rather than chosen.
Forced toward Vertex AI: hard EU residency for ML processing. If your regulator, your customer, or your contract requires that personal data not leave the EEA during inference, Vertex AI with regional endpoints is the only direct path of the three. Azure OpenAI is the alternative, but Azure OpenAI is a Microsoft contract with OpenAI behind it, not an OpenAI contract. If you want to keep the OpenAI relationship and residency together, you have to go through Microsoft. If you want both residency and direct contractual privity with the model provider, Vertex is it.
Forced toward Anthropic: simplest commercial terms with the cleanest training exclusion. Anthropic's commercial terms are the most concise of the three and the August 2025 consumer shift made the commercial line more explicit, not less. If your procurement and legal teams want a one-document answer to "is our data being trained on" that doesn't require reading three other product-specific addenda, Anthropic is the easier sell. The trade-off is no native EU residency, multi-cloud sub-processor cascade, and a younger compliance documentation footprint than Google.
Forced toward OpenAI: GPT-4 / GPT-5 family is the load-bearing model and Azure OpenAI handles your residency. If your application is built on the OpenAI model family and you need EU residency, Azure OpenAI is the stable path. The contract counterparty is Microsoft, the DPA is Microsoft's, and the Azure regional infrastructure is what underpins the data flow. This is operationally a different relationship from the OpenAI direct API and your TIA needs to reflect that.
I am not sure how the next 12 months play out for any of the three. The DPF Latombe appeal at the CJEU could pull the rug on US transfers regardless of provider. The Garante can appeal the Court of Rome annulment. Anthropic's consumer shift may go further in the next round. The honest answer to "which provider should we pick" in April 2026 is "the one whose commercial terms map cleanest to your residency posture and whose model fits your use case, knowing you may be re-papering at least one of them within the year."
You do not have to pick one. The teams I see picking single-vendor are usually doing it for procurement simplicity, not for a privacy reason. The privacy-driven decision is usually multi-provider: Vertex AI for the path that needs residency, OpenAI or Anthropic for general API use, and the enterprise tier of one of the consumer products for internal productivity.
The risk in the multi-provider pattern is the surface area of contracts. Three DPAs, three sub-processor lists, three notification subscriptions, three TIAs, three change-detection cycles. The benefit is that no single provider's term shift, court ruling, or sub-processor change can force a re-architecture of your application overnight. Multi-provider trades contract overhead for resilience to any single provider's bad month.
I think the multi-provider pattern is the right default for any team in the EU shipping production AI features in 2026, and I think the procurement team will hate that answer.
If you have a hard EU residency requirement, you start from Vertex AI regional endpoints and bolt on the others where residency does not apply. Otherwise, pick the model that fits your application, sign the commercial-tier DPA, block the matching consumer URL at the proxy, and re-verify the sub-processor list and DPF status quarterly until the Latombe CJEU appeal lands.
The vendor terms change faster than the typical procurement cycle. Two things to do before the next renewal lands on your desk.
First, re-paper sub-processor authorisations against the current published lists. Anthropic and OpenAI both updated their sub-processor lists in 2025; if your DPA references an older list, the general-authorisation flow may have lapsed and your controller-side authorisation may be out of date. Pull the current list, name the entities you authorise, and store the snapshot with the renewed contract.
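The re-papering step is a set difference between two snapshots. A sketch, with placeholder entity names rather than any provider's actual list:

```python
# Sketch: diff the entities named in your signed DPA snapshot against
# the provider's current published sub-processor list. Entity names
# are placeholders.

def subprocessor_diff(authorised: set[str], published: set[str]) -> dict[str, set[str]]:
    """Entities added since signing need re-authorisation; removed ones can be retired."""
    return {"added": published - authorised, "removed": authorised - published}

signed_snapshot = {"Cloud Provider A"}                   # named in the signed DPA
current_list = {"Cloud Provider A", "Review Vendor B"}   # provider's current page

diff = subprocessor_diff(signed_snapshot, current_list)
print(diff["added"])   # re-paper authorisation for these entities
```

Anything in `added` is an entity your controller-side authorisation has never named; anything in `removed` can be retired from the next contract snapshot.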
Second, build a change-detection subscription. All three providers publish sub-processor and policy change notifications. Subscribe with a shared mailbox, route into your vendor management system, and treat any new sub-processor or material policy change as a 30-day re-review trigger. The honest review cadence for AI vendors in 2026 is quarterly, not annual.
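The trigger itself is simple date arithmetic, shown here as a stdlib sketch; the 30-day window is the article's suggested cadence, not a legal deadline:

```python
# Sketch: turn a change notification into the 30-day re-review
# deadline to file in the vendor management system.

from datetime import date, timedelta

REVIEW_WINDOW = timedelta(days=30)

def review_deadline(notice_received: date) -> date:
    """Date by which the re-review of the vendor must be completed."""
    return notice_received + REVIEW_WINDOW

print(review_deadline(date(2026, 4, 10)))  # -> 2026-05-10
```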
If you only do one thing before the next renewal, do the consumer-URL block. It is the cheapest step here and the one most teams miss.
A clause-by-clause read of OpenAI's DPA in April 2026: what changed in the last 12 months, what still trips deployers, and the operational decisions that follow each clause.
A 2026 decision framework for dev teams choosing between self-hosting an open-weight LLM and calling a cloud API. Refreshed with Llama 4, the Latombe DPF challenge, and Azure / Bedrock EU data zones.
A trace-walk of one OpenAI API call through every entity in the cascade, with the Article 28, CLOUD Act, Article 48, and DMA layers stacked on top.