Downstream incident runbook for the moment your AI vendor's breach email arrives: the 72-hour clock that runs from your awareness, scoping with only your own API logs, and three real 2025-2026 cases.
On 2 April 2026, Mercor confirmed it had been hit. The hacking group TeamPCP had inserted credential-harvesting code into LiteLLM, an open-source library used by thousands of companies to connect their applications to AI providers. By the time the poisoned code was identified and pulled, Mercor's keys were among the compromised set. A few hours later, Lapsus$ claimed responsibility and began auctioning what it said was 4 TB of Mercor data on dark-web forums. Mercor's customers include OpenAI, Anthropic, and Meta. Their downstream customers include almost everyone using those models for anything sensitive.
If you were one of those downstream customers, your job over the following hours is the subject of this article. You are not running an investigation. You do not have admin access to the breached systems. You do not have packet captures or memory dumps. You have your own API logs, the email that just arrived, and whatever the vendor is willing to tell you over the next few days. The runbook for that situation is fundamentally different from the one for "we found a leak in our own systems".
Almost every breach-notification article says the GDPR clock starts when "you become aware". Almost none of them explain what that phrase means under EDPB Guidelines 9/2022, and almost every team misreads it the same way.
The EDPB's position, in the v2.0 update from 28 March 2023, is that "becoming aware" requires reasonable certainty that a personal data breach has occurred, affecting personal data you control. Three things follow.
First, being told that something might have happened is not awareness. The Guidelines distinguish between being informed of a potential breach and reaching the threshold of certainty. If your vendor sends you an alert that says "we are investigating an incident that may have affected some accounts", the clock has not started yet. What it has done is start an obligation to investigate. You must move quickly to determine whether you are in scope. You cannot sit on the alert as a delaying tactic, but you also cannot pretend the alert itself is the trigger.
Second, the clock does not wait for your full risk assessment. The EDPB is explicit that notification should not be postponed until the impact has been fully assessed. Risk assessment runs in parallel with notification. The 72 hours is for sending a notification with what you know now and supplementing later under Article 33(4), not for completing the investigation.
Third, the clock starts when reasonable certainty is achieved by a person within your organisation who is positioned to act on it. Reasonable certainty buried in a Slack thread that nobody on the security team has read does not stop the clock. The downstream consequence: have an inbox or a triage process that catches vendor notifications quickly. The corpus of CNIL and Garante decisions is full of teams whose vendor email sat unread in a generic abuse@ alias for four days.
The EDPB's "reasonable certainty" test in practice. The Guidelines 9/2022 framing, repeated in the v2.0 28 March 2023 update, is that a controller's awareness arises when the controller has "reasonable certainty" that a security incident has occurred which has led to personal data being compromised. The downstream version: when the vendor has confirmed in writing that personal data of the kind you sent has been affected, you are aware. When the vendor says "we are investigating, more details to follow", you are not yet aware but you must investigate. Document both moments separately. Both timestamps are evidence the supervisory authority will ask about.
Three actions in the first ten minutes, in order.
Save the email itself. Forward it to the incident-response distribution list with the original headers. Take screenshots of any portal alert. Note the exact time you became aware, in UTC, and write it down somewhere that is not the email thread itself. This timestamp anchors your entire response and is the first thing the supervisory authority will ask about if you ever need to explain the timeline.
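One lightweight way to anchor that timestamp outside the email thread is an append-only timeline file. A minimal sketch; the entry fields are illustrative, not a standard:

```python
from datetime import datetime, timezone
import json

def awareness_entry(kind: str, source: str, note: str = "") -> dict:
    """Build one timeline entry. `kind` distinguishes
    'vendor_alert_received' from 'reasonable_certainty_reached' --
    the EDPB treats those as separate moments, so record both."""
    return {
        "kind": kind,
        "source": source,  # e.g. the email's Message-ID header
        "note": note,
        "timestamp_utc": datetime.now(timezone.utc).isoformat(),
    }

def append_to_timeline(entry: dict, path: str = "incident_timeline.jsonl") -> None:
    """Append-only: never edit earlier entries, only add new ones."""
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")
```

An append-only JSONL file is deliberately boring: it survives tooling changes and reads as evidence, not as a document that might have been edited after the fact.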
Read the email properly, not for the headline but for the structured fields. A good vendor breach notification will tell you four things: what happened technically, what data categories were affected, which time window, and what the vendor has done so far. A bad one will say "we take security seriously" and "out of an abundance of caution". If the email is in the second category, your first reply should ask for the four fields explicitly. The phased-notification provision in Article 33(4) is how you respond to incomplete vendor notifications: you can file initial information with your supervisory authority and supplement later. Your vendor cannot use the same provision against you. Their DPA gives you the right to specifics, not platitudes.
Decide who else in your organisation needs to know in the first hour. Typically: incident-response lead, DPO or equivalent, the engineer who manages the integration with the breached vendor, legal. Do not yet involve PR, customer success, or the executive team. That comes after scoping, and pulling them in early creates noise that slows the technical work down.
This is the work that the runbook articles for first-party incidents never explain, because in a first-party incident you have your own logs, your own database, your own audit trail. In a vendor breach, you have only one half of the picture. The other half is at the vendor and may not be coming.
The four questions to answer in the first four hours:
What did you send to this vendor? Pull your API client logs for the affected window. For an LLM provider, that is prompts, system messages, file uploads, fine-tuning data, embedding inputs. For an analytics or observability sub-processor, that is event payloads. Be specific about categories of personal data. "Customer support tickets" is a category. So is "user IDs". So is "billing email addresses". Each is a separate row in your scope assessment.
When did you send it? Cross-reference the affected window the vendor named (if they did) against your own outbound logs. If the vendor said the breach window was 1–14 March, your scope is what you sent them on 1–14 March, not your entire history with them. If the vendor did not name a window, the scope is the entire life of your integration until containment.
Who did you send it about? This is the question that determines whether you have an Article 34 individual-notification obligation later. Map the user IDs in the affected window to specific data subjects. For a B2C product, that is end users. For B2B, it is the named contacts on customer accounts. The number matters for the "high risk to many people" axis of the Article 34 test.
What did the vendor's sub-processors do with it? This is the question that catches teams completely off-guard. The Mercor incident is a perfect example: you may not have known Mercor existed, you certainly did not list LiteLLM in your sub-processor register, and yet your prompts touched both. The OpenAI-Mixpanel incident in November 2025 was the same shape: customers who had bought OpenAI never thought of Mixpanel as a sub-processor, but it was, and it was the one that got breached. The sub-processor cascade article walks through the chain entity by entity.
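The first three scoping questions reduce to a filtering pass over your own client logs. A minimal sketch, assuming each outbound call was logged with a timestamp, data-category tags, and subject IDs; all field names here are illustrative, adapt them to your own logging schema:

```python
from datetime import datetime

def scope_breach(records, window_start, window_end):
    """Answer the first three scoping questions from your own logs:
    what you sent, when, and about whom."""
    in_scope = [r for r in records
                if window_start <= r["sent_at"] <= window_end]
    # Each distinct category is a separate row in the scope assessment.
    categories = sorted({c for r in in_scope for c in r["data_categories"]})
    # Distinct data subjects feed the Article 34 "high risk" analysis.
    subjects = sorted({s for r in in_scope for s in r["subject_ids"]})
    return {
        "calls_in_scope": len(in_scope),
        "data_categories": categories,
        "affected_subjects": subjects,
    }
```

If the vendor named no window, pass the start of the integration as `window_start` and the containment time as `window_end`, per the rule above.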
The sub-processor you forgot to map is probably the one that was breached. Three of the largest AI-vendor incidents of the last twelve months were at sub-processors, not at the headline vendor: Mixpanel below OpenAI (November 2025), LiteLLM-via-Mercor below all three frontier providers (April 2026), Capybara/Mythos in Anthropic's own CMS rather than the model API (March 2026). Your scope assessment cannot rely on "we use OpenAI" as the boundary. It has to enumerate every entity in the chain you reach with your data. If your sub-processor register is a one-row spreadsheet with the model provider's name, the next breach will catch you flat-footed.
While scoping is still in flight, run the containment track in parallel. Four moves you can make without any further information from the vendor.
Rotate every credential you have shared with the breached vendor. API keys, OAuth tokens, webhook signing secrets, any service-account passwords. Do this even if the vendor's email implies credentials were not affected. Credentials are cheap to rotate and the cost of being wrong about the scope is high. Both the DeepSeek January 2025 incident and the OpenAI Redis bug from March 2023 included credential exposure that the initial notifications underplayed.
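If you keep a register of which credentials are shared with which vendor, the rotation list can be generated rather than reconstructed from memory under pressure. A sketch under that assumption; the register shape and the priority ordering are illustrative:

```python
def rotation_list(secrets_register, breached_vendor):
    """Everything shared with the breached vendor, ordered so the
    highest-privilege credentials are rotated first. The type
    priorities below are an example ordering, not a standard."""
    priority = {"api_key": 0, "oauth_token": 1,
                "webhook_secret": 2, "service_password": 3}
    shared = [s for s in secrets_register
              if breached_vendor in s["shared_with"]]
    return sorted(shared, key=lambda s: priority.get(s["type"], 99))
```

The point of generating the list is completeness: the webhook signing secret nobody remembers is exactly the one an attacker with the vendor's database will try.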
Scan your own outbound API logs for the affected window for anomalies. You are not looking for the breach itself, which is at the vendor. You are looking for whether the breached credentials were used against you in the time before you rotated them. Unusual destinations, off-hours traffic, sudden spikes, queries you do not recognise. This is one of the few things you can investigate from your side, and it sometimes turns up evidence of secondary intrusion that the vendor's investigation will never catch.
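The off-hours and unknown-destination checks can be done in a single pass over the window's calls. A sketch; field names and the business-hours threshold are assumptions to tune against your own traffic:

```python
from datetime import datetime

def flag_anomalies(calls, known_hosts, business_hours=(7, 20)):
    """Flag calls that used the (possibly stolen) credentials in ways
    you don't recognise: destinations you've never contacted, or
    traffic well outside your normal working hours."""
    flags = []
    for c in calls:
        if c["host"] not in known_hosts:
            flags.append((c, "unknown destination"))
        elif not (business_hours[0] <= c["at"].hour < business_hours[1]):
            flags.append((c, "off-hours traffic"))
    return flags
```

Anything this turns up goes into the timeline file immediately: evidence of secondary intrusion changes both the Article 33 risk assessment and the containment decisions.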
Decide whether to suspend data flows. If the breach is ongoing, if the vendor cannot confirm containment, or if the affected window is still open, stop sending data to the integration until the vendor confirms it is closed. This is operationally painful and you should pre-decide who in your organisation has authority to make the call. If the answer is "the CTO and they are on a flight", the call will be made too late.
Request deletion under your DPA. Most processor DPAs include a clause obliging the processor to delete or return data on request. If the vendor cached, logged, or stored your prompts or files, and that storage was inside the breached scope, exercise the deletion clause now. Do it in writing, with a timestamp, even if you suspect the vendor will not be able to confirm completion for days. The request itself is part of your audit trail.
This is where most of the breach-notification literature lives, and where I will be brief because the GDPR text and the EDPB Guidelines 9/2022 already cover it well. The runbook version:
GDPR has a two-tier system. Article 33 requires notification to your supervisory authority unless the breach is "unlikely to result in a risk to the rights and freedoms" of data subjects. The threshold is low. If personal data was exposed and there is any realistic chance of harm, notify. Article 34 requires direct notification to affected individuals when the breach is "likely to result in a high risk". This is a higher bar and the factors are well known: special category data, financial or biometric data, plaintext at rest, large numbers of subjects, identity-theft or fraud potential, children's data.
What the runbook articles do not say strongly enough: phased notification is not a sign of weakness. Article 33(4) explicitly allows you to notify with what you know within 72 hours and supplement later. Supervisory authorities prefer a partial notification on time over a complete one filed late. The Booking.com €475,000 fine was for notifying 22 days late, not for the substance of the notification itself. The Meta €91M decision in 2024 was largely about the failure to notify in any structured form during the 2018-2019 password-storage incident.
Start drafting the Article 33 notification at hour 4, not hour 60. The most common pattern in late-notification fines is teams waiting until they have a complete picture before opening the supervisory authority form. The EDPB is explicit that this is the wrong order. Open the form at hour 4 with what you know, work through the four required fields (nature, contact, consequences, measures), and file when you can answer enough of them to be useful, supplementing the rest under Article 33(4). Treating the form as a parallel workstream from the start removes the worst failure mode of the entire response.
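One way to run the form as a parallel workstream is to hold the four Article 33(3) items in a structure whose gaps are visible at a glance. A minimal sketch; the class and its field names are this article's shorthand, not a regulator's schema:

```python
from dataclasses import dataclass

@dataclass
class Article33Draft:
    """The four items Article 33(3) requires. Open at hour 4,
    fill as scoping progresses, file when useful, supplement
    the rest under Article 33(4)."""
    nature_of_breach: str = ""     # 33(3)(a): what happened, categories, approx. numbers
    dpo_contact: str = ""          # 33(3)(b): DPO or other contact point
    likely_consequences: str = ""  # 33(3)(c)
    measures_taken: str = ""       # 33(3)(d): taken or proposed, incl. mitigation

    def missing_fields(self):
        return [k for k, v in self.__dict__.items() if not v.strip()]
```

Reviewing `missing_fields()` at each incident stand-up keeps the filing decision explicit instead of letting the form drift to hour 60.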
If your business also falls under NIS2 (broadly, essential and important entities including digital infrastructure and ICT services), you have a parallel obligation under Article 23: a 24-hour early warning to your national CSIRT, a 72-hour incident notification, and a one-month final report. A single AI vendor breach can trigger both GDPR Article 33 and NIS2 Article 23 streams, on different clocks, to different recipients. The 24-hour NIS2 deadline is the tighter one and the one most teams discover too late.
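The two regimes' clocks are easy to confuse under pressure, so it helps to derive every deadline from the single awareness timestamp. A sketch; the one-month final report is approximated here as 30 days, and the exact wording should be checked against your national NIS2 transposition:

```python
from datetime import datetime, timedelta, timezone

def regulatory_deadlines(aware_at):
    """All deadlines from one awareness timestamp. The NIS2 24-hour
    early warning is the tightest and the one most teams miss."""
    return {
        "nis2_early_warning":         aware_at + timedelta(hours=24),
        "gdpr_art33_notification":    aware_at + timedelta(hours=72),
        "nis2_incident_notification": aware_at + timedelta(hours=72),
        "nis2_final_report":          aware_at + timedelta(days=30),  # approximation
    }
```

Note that the GDPR and NIS2 72-hour notifications land at the same moment but go to different recipients: the supervisory authority in one stream, the national CSIRT in the other.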
The vendor's notification timeline is a separate compliance question with its own answer, and it is the part most teams forget to write down.
GDPR Article 33(2) requires the processor to notify the controller "without undue delay" after becoming aware of a breach, and Article 28(3)(f) obliges it to assist you in meeting your own notification duties. The phrase is vague by design, but the EDPB Guidelines 9/2022 are clear that "without undue delay" must leave the controller enough time to meet the controller's own 72-hour Article 33 obligation. In practice, anything beyond 48-72 hours from the processor's discovery is hard to defend.
The OpenAI-Mixpanel timeline is the one to keep on file. Mixpanel detected the breach on 9 November 2025. OpenAI notified its enterprise customers on 28 November 2025. That is a 19-day gap. Some of that gap is OpenAI's investigation of the second-order impact on its own systems, which is a legitimate internal step. Some of it is Mixpanel's notification to OpenAI in the first place, which is not visible from outside. From the controller's perspective at the bottom of that chain, the question is not whether the gap is forgivable but whether it is documentable. You document it, you flag it in your own Article 33 notification, and you record it as a finding against the vendor under your DPA.
A few months later, the same finding is the foundation for a vendor-replacement decision, a contract renegotiation, or in the worst case a formal complaint to the supervisory authority that triggers a separate investigation of the vendor's processor obligations. The Italian Garante's behaviour around the OpenAI Decision 755 (November 2024, partially annulled by the Court of Rome on 18 March 2026) suggests EU DPAs are willing to open their own investigations of upstream AI vendors when downstream complaints accumulate. Your record of the gap is the input to that.
The single most useful artefact this article points at is a vendor-breach playbook that names the people, the inboxes, the credentials, the suspension authority, and the supervisory authority form, before any incident happens. The Mercor case is what this kind of playbook is built for. Most teams that handled it well did not figure it out in real time. They had a playbook from 2024 that they updated after the Mixpanel incident in November 2025.
The four pre-incident decisions that change the response. Decide in writing, today, before any breach: (1) which inbox catches vendor notifications and who watches it, (2) who has authority to suspend a data flow without a meeting, (3) which credentials you would rotate first and how long that takes, and (4) which person in your organisation files the supervisory authority notification and where the form lives. If you cannot answer those four when you are calm, you will not answer them on the day a Lapsus$ ransom note shows up auctioning your prompts.
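Written down as data, the four decisions can be checked automatically, for instance in CI, rather than remembered on the day. A sketch; every name, address, and URL below is a placeholder to replace with your own:

```python
# The four pre-incident decisions as a machine-checkable record.
# All values are placeholders -- fill in your own organisation's.
PLAYBOOK = {
    "vendor_notification_inbox": {
        "address": "security-vendor-alerts@example.com",
        "watched_by": ["on-call-secops"],          # not a generic abuse@ alias
    },
    "suspension_authority": ["head-of-engineering", "dpo"],  # no meeting required
    "rotation_order": ["api_keys", "oauth_tokens", "webhook_secrets"],
    "sa_filing": {
        "owner": "dpo",
        "form_url": "https://example.org/breach-notification-form",
    },
}

def playbook_complete(pb):
    """True only if all four pre-incident decisions are recorded."""
    return all(pb.get(k) for k in (
        "vendor_notification_inbox", "suspension_authority",
        "rotation_order", "sa_filing"))
```

A completeness check that runs on every change to the playbook repository is a cheap way to notice when, say, the named suspension authority leaves the company.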