Tribunale di Roma judgment 4153/2026 wiped out the €15M OpenAI fine on 18 March 2026. The reasoning isn't published yet. Here's what actually changes for your AI compliance work.
Three things happened in succession. Italy's Garante per la Protezione dei Dati Personali published Decision 755 on 20 December 2024, fining OpenAI €15 million for its handling of ChatGPT data. That was the only final GDPR enforcement action any European DPA ever adopted against a generative AI provider for the 2022–2023 launch period. On 21 March 2025, the Tribunale di Roma temporarily suspended the fine pending a merits ruling. On 18 March 2026, the same court issued judgment 4153/2026, annulling Decision 755 in full.
If you ship AI features in the EU, that ruling has almost certainly been cited to you by someone in product or legal as evidence that the GDPR questions around training data, transparency, and consent are "settled" in favour of providers. They are not. Below: what the court actually did, what the Garante had actually found, and what the reasoning gap means for teams trying to compliance-plan in the meantime.
The Tribunale di Roma issued judgment 4153/2026 on 18 March 2026. The court annulled Decision 755 in its entirety, setting aside both the €15M administrative fine and the six-month institutional communication campaign the Garante had ordered under Article 166(7) of the Italian Privacy Code. That campaign would have required OpenAI to buy radio, television, newspaper, and online ad inventory in Italy, informing users and non-users that their personal data was likely collected for model training. It was the first time the Garante exercised the Article 166(7) power since its introduction.
The court had temporarily suspended the fine in March 2025. What landed in March 2026 was the full merits ruling, 12 months later.
Two things the ruling did not include.
Published reasoning. The Garante received the ruling on the morning of 19 March 2026. As of mid-April 2026 the written reasoning has not been made public. The Garante has said it cannot even assess whether to lodge an appeal until it sees the reasoning.
Any finding on whether OpenAI's practices actually complied with the GDPR. The court could have agreed the Garante lacked competence, found proportionality or procedural defects, or disagreed on the substantive application of the GDPR to training data. You cannot tell which from the operative part alone.
OpenAI welcomed the decision in a statement to the press: "We've always been committed to respecting user privacy and look forward to helping more Italian people, businesses and society benefit from AI." The Garante declined further comment.
Decision 755 was not a single-issue finding. The Garante alleged breaches of GDPR Articles 33, 5(1)(a), 5(2), 6, 12, 13, 24, and 25, plus Article 83(5)(e) on the penalty side. Five substantive points sat underneath the fine.
When you read "the €15M fine was annulled," that is what got annulled. Not a single finding. A bundle of five substantive GDPR questions plus a novel remedial instrument.
The written reasoning matters more than the operative paragraph, and it is not yet out. Any commentary that calls this ruling a vindication of OpenAI's position, or a signal of softening from European regulators generally, is getting ahead of the record. A court that annuls a DPA decision can be doing any of several distinct things.
Each has different implications for your compliance posture. A proportionality ruling tells you the Garante's analysis was sound but the punishment was wrong. A competence ruling tells you nothing about the GDPR merits, only about which DPA should have run the file. A purely procedural ruling (a defect in how the Garante conducted the proceeding) likewise leaves the substantive analysis untouched. A substantive ruling on lawful basis or transparency would be the most consequential, because it would reshape how Articles 6, 13, and 14 apply to generative AI in EU law.
Teams that rewrite their lawful-basis analysis off the operative paragraph alone are speculating on what turns out to be a four-way fork.
Three GDPR questions that teams often bundle into "the ChatGPT case" were not resolved.
Lawful basis for training. The CNIL's final recommendations of 19 June 2025 and the EDPB's Opinion 28/2024 remain the most influential guidance on this question across EU DPAs: non-binding (the EDPB opinion sits under Article 70(1)(e)), but cited as the working position by the regulators most likely to open a file. Legitimate interest is generally available for AI training, subject to a rigorous three-step test of legitimate purpose, necessity, and balancing against data subjects' interests and rights. A Rome ruling that annulled one fine does not displace an EDPB opinion or CNIL guidance.
Transparency under Articles 13–14. The CJEU has moved in the other direction. In C-203/22 Dun & Bradstreet Austria, decided 27 February 2025, the court held that "meaningful information about the logic involved" is a real, enforceable obligation, and that providing a trade-secret algorithm or a marketing-brochure formula does not satisfy it. That case law was not before the Court of Rome.
Article 22 and automated decisions. C-634/21 SCHUFA (December 2023) and C-203/22 expand the set of outputs that trigger Article 22 protections. An annulled Italian fine has no effect on either CJEU ruling.
The practical shape of EU AI compliance in April 2026 still rests on: EDPB Opinion 28/2024 (17 December 2024), the CNIL 2024–2025 how-to sheets, the AI Act timeline (the Article 4 AI-literacy obligation applying from 2 February 2025, the high-risk deployer obligations from 2 August 2026), and the two CJEU automated-decision rulings. None of that was touched.
OpenAI incorporated OpenAI Ireland Limited on 15 February 2024, during the Garante's investigation. That triggered two things.
First, under GDPR Article 56, the Irish Data Protection Commission became the lead supervisory authority for OpenAI's cross-border processing in the EU. Any continuing processing investigation (that is, not one that was already closed at the time of the move) should, in principle, have transferred to Dublin.
Second, the EDPB's ChatGPT Taskforce report included a footnote acknowledging this transfer issue for "continuing or continuous" processing. Not every national DPA was comfortable with what that meant for investigations they had already opened.
If the Court of Rome's reasoning turns on competence (that the Garante should have handed the file to the Irish DPC at the moment of the Ireland restructure), the ruling is narrow. It says nothing about the underlying GDPR questions. It says the Italian authority processed a file that no longer belonged on its docket. The Irish DPC has not, to date, opened a final enforcement action on the 2022–2023 launch period.
Realistically, the competence explanation is the most plausible single reading of the Rome ruling, but until the reasoning is out it is still a guess.
Very little, in practice. Three adjustments to make.
I think the most realistic read of the ruling, from what is public, is that it falls closer to a procedural or competence reversal than a substantive one. The case law here is genuinely fuzzy, and I will probably change my mind once the reasoning drops. An article that told you otherwise would be selling you a certainty that does not exist.
Pull out any document (DPIA, LIA, vendor-risk register, internal training deck) that cites "the €15M Garante fine" as authority or benchmark. Mark it as pending re-verification. Do not rewrite the analysis. When the Court of Rome publishes the written reasoning, you will have 30 minutes of work to reconcile your docs with what the court actually held. Until then, the compliance posture that was going to carry you through 2 August 2026 is still the one to ship with.
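If those documents live in a repo or shared drive as text, the triage step can be scripted. Here is a minimal sketch, assuming your DPIAs, LIAs, and registers sit as Markdown or plain-text files under a local compliance-docs/ folder and that phrases like "Decision 755" or "€15 million" are roughly how your team cited the fine; the folder name, citation patterns, and output file are placeholders to adapt, not a real tool.

```python
# find_stale_citations.py - flag compliance docs that cite the annulled Garante decision.
# Assumptions: docs are UTF-8 Markdown/plain text under DOCS_DIR; the citation
# phrases below are guesses at how your team wrote them, so adjust to taste.
import csv
import re
from pathlib import Path

DOCS_DIR = Path("compliance-docs")           # hypothetical folder of DPIAs, LIAs, registers
OUTPUT = Path("pending-reverification.csv")  # triage list, not a legal conclusion

# Phrases that suggest a document leans on Decision 755 / the €15M fine as authority.
CITATION_PATTERNS = [
    re.compile(p, re.IGNORECASE)
    for p in (
        r"decision\s*755",
        r"provvedimento\s*755",
        r"€\s*15\s*(m\b|million)",
        r"garante.*(fine|sanction)",
    )
]


def scan(path: Path):
    """Yield (line_number, line_text) for every line matching a citation pattern."""
    text = path.read_text(encoding="utf-8", errors="ignore")
    for lineno, line in enumerate(text.splitlines(), start=1):
        if any(p.search(line) for p in CITATION_PATTERNS):
            yield lineno, line.strip()


def main() -> None:
    rows = []
    for path in sorted(DOCS_DIR.rglob("*")):
        if not path.is_file() or path.suffix.lower() not in {".md", ".txt"}:
            continue
        for lineno, snippet in scan(path):
            rows.append({
                "file": str(path),
                "line": lineno,
                "text": snippet,
                "status": "pending re-verification (await 4153/2026 reasoning)",
            })
    with OUTPUT.open("w", newline="", encoding="utf-8") as fh:
        writer = csv.DictWriter(fh, fieldnames=["file", "line", "text", "status"])
        writer.writeheader()
        writer.writerows(rows)
    print(f"{len(rows)} citation(s) flagged under {DOCS_DIR}/ -> {OUTPUT}")


if __name__ == "__main__":
    main()
```

Run it once now to build the pending list, then again after the written reasoning publishes to confirm every flagged passage got reconciled rather than quietly forgotten.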
The five GDPR articles that actually decide whether your AI feature ships in 2026: legal basis, transparency after Dun & Bradstreet, Article 22, privacy by design, and DPIA.
The 2026 state of the GDPR/AI Act interplay. What Joint Opinion 1/2026 and C-203/22 tell you about DPIAs, FRIAs, Article 22, Article 10 bias data, and fines.
Section 702 sunsets April 20. The April 2026 state of EU-US AI transfers, what the DPF actually rests on, and the contract review you should do this week.
Free tool · live
AI Data Flow Checker
Map how personal data flows through your AI integrations and spot the privacy risks before they spot you.