A time-anchored runbook for handling the most common AI incident in 2026: a team member pasted personal data into a consumer-tier ChatGPT account. From the first hour through to the policy that stops the next one.
The trigger looks the same every time. A support agent pastes a customer complaint into ChatGPT to draft a reply. Names, emails, order details, the lot. Or a developer drops a database query with production rows into the chat to debug a problem. Or an HR manager hands a performance review to the model to "make it sound better." Different people, different roles, the same shape.
This is not a hypothetical. Samsung lifted its earlier ChatGPT ban on 11 March 2023 to keep engineers happy with the new tooling. Within twenty days, three separate engineers pasted semiconductor source code, equipment defect detection algorithms, and confidential meeting transcripts into the consumer chat window. Samsung re-banned the entire category on 2 May 2023 and the story went global. The LayerX 2025 corporate AI usage study, logged by the OECD AI Incidents registry as a category-defining event, found that 77% of employees who use generative AI tools at work paste sensitive company data through personal accounts. The Samsung headline is the visible tip of a continuous baseline.
This is a runbook for the moment you find out. Time anchors map to GDPR Article 33's 72-hour clock. The five phases below are sequential but the work inside each phase is parallelisable, so a small team can compress the early phases significantly when needed.
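To make the clock concrete rather than a mental note, a minimal sketch in Python; the function name and structure are illustrative, not part of any standard tooling:

```python
from datetime import datetime, timedelta, timezone

ARTICLE_33_WINDOW = timedelta(hours=72)

def notification_deadline(became_aware: datetime) -> datetime:
    """Article 33(1): notify without undue delay and, where feasible,
    within 72 hours of becoming aware of the breach."""
    return became_aware + ARTICLE_33_WINDOW

# Example: awareness established at 14:30 UTC on 3 March 2026.
aware_at = datetime(2026, 3, 3, 14, 30, tzinfo=timezone.utc)
print(notification_deadline(aware_at).isoformat())  # 2026-03-06T14:30:00+00:00
```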
The Article 33 clock starts when you have a "reasonable degree of certainty that a security incident has occurred that has led to personal data being compromised." That is the EDPB's wording in Guidelines 9/2022. Being told "I think Maya pasted a customer's email into ChatGPT yesterday" is not yet certainty. It triggers an obligation to investigate, and the clock starts the moment that investigation hardens what happened into facts.
Talk to the person involved. No blame, just facts. You need answers to these questions, written down: what exactly was pasted, when it happened, which account was used (personal consumer, work-email consumer, or a company workspace seat), whether chat history and model training were on, and whether the conversation has already been deleted.
Some employees sign up for ChatGPT with their work email. That creates the impression of company authorization without any of the contractual protection of a Team or Enterprise seat. It also ties the consumer account to your domain, which means an OpenAI-side incident on those accounts shows up against your company in any future enforcement timeline. If you find work-email consumer accounts during the first-hour facts gathering, log them separately.
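A minimal sketch of the first-hour fact record, assuming nothing more elaborate than a shared incident doc; the field names are illustrative, not from any particular incident tool:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class FirstHourFacts:
    """What the first-hour conversation established, written down."""
    reported_at: datetime             # when the incident was reported internally
    pasted_at: datetime | None        # best estimate of when the paste happened
    content_summary: str              # what was pasted, described without re-pasting it
    contains_personal_data: bool      # names, emails, order details and the like
    account_type: str                 # "personal-consumer", "work-email-consumer", "workspace"
    chat_history_on: bool | None      # None until confirmed with the account holder
    conversation_deleted: bool        # has the chat already been deleted?
    work_email_consumer: bool = False # log these accounts separately, as above
    notes: list[str] = field(default_factory=list)
```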
A personal data breach is "a breach of security leading to the accidental or unlawful destruction, alteration, unauthorised disclosure of, or access to, personal data" (Article 4(12)). Pasting personal data into a consumer AI tool without authorisation lands inside this definition almost every time. Three failures stack on top of each other: the data was disclosed to a third party without a legal basis, without a Data Processing Agreement, and without the data subject's knowledge.
Severity is what determines whether you have to notify. Use the EDPB nine-criteria framework as the structured weighing exercise, working through factors such as the nature and sensitivity of the data, how easily individuals can be identified, and the severity of the likely consequences for them.
I think the consumer-vs-business tier distinction is the single most misunderstood thing about ChatGPT inside companies that have not bought a Team plan, and it is the source of most of the leaks I have read about. The Samsung pattern (three leaks in twenty days, all on personal accounts) is the median, not the outlier.
If chat history was on and the conversation contained identifiable personal data, treat the data as already gone. Deleting the conversation now creates a paper trail under Article 5(2) accountability, but it does not retract any contribution to model training. Whether "training reversal" will ever be a real remedy for this category of incident is genuinely uncertain. Machine unlearning is an active research area, no production foundation model offers a per-user retraction primitive in 2026, and the Article 17 right to erasure does not currently extend reliably to model weights. Plan for irreversibility.
Article 33(1) requires notification to your supervisory authority "without undue delay and, where feasible, not later than 72 hours" after becoming aware. The exception is when the breach is "unlikely to result in a risk to the rights and freedoms of natural persons."
In practice, the threshold for notification is lower than most teams think; "unlikely to result in a risk" is a narrow exception. Notify if the pasted content identifies real individuals and went through a consumer-tier account with chat history or model training on, if special category data was involved, if more than one or two people are affected, or if you cannot reconstruct what was pasted well enough to defend a low-risk classification.
Skip notification if you can clearly defend a low-risk classification: one or two individuals, non-sensitive data, business-tier ChatGPT with training disabled by default, and you can demonstrate the data path stayed inside the no-train tier.
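Condensed into a sketch, purely as a checklist aid rather than the legal test itself; the thresholds paraphrase the two paragraphs above:

```python
def defensible_low_risk(affected_individuals: int,
                        sensitive_data: bool,
                        no_train_tier_demonstrable: bool) -> bool:
    """Skip conditions as described above: one or two individuals,
    non-sensitive data, and a data path you can show stayed inside a
    business tier with training disabled."""
    return (affected_individuals <= 2
            and not sensitive_data
            and no_train_tier_demonstrable)

def notify_supervisory_authority(low_risk_defensible: bool) -> bool:
    """Article 33(1): notify within 72 hours unless the breach is unlikely
    to result in a risk to the rights and freedoms of natural persons."""
    return not low_risk_defensible
```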
I would always file the initial 72-hour notification with whatever you have rather than wait for completeness. The EDPB has been explicit: late notification reads worse than incomplete notification, and a national DPA that finds out about a breach from a customer complaint instead of from the controller will treat that as an aggravating factor in any follow-up.
Article 33(4) explicitly permits phased notification. Submit an initial notification within 72 hours with the facts you have, and supplement it as the investigation completes. The EDPB Guidelines 9/2022 (v2.0, March 2023) treat phased notification as the expected pattern for any incident where investigation extends beyond the window.
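Article 33(3) fixes what the notification must describe: the nature of the breach, the categories and approximate numbers of data subjects and records concerned, a contact point, the likely consequences, and the measures taken or proposed. A sketch of an initial phased submission, with illustrative field names:

```python
from dataclasses import dataclass, field

@dataclass
class InitialNotification:
    """Initial Article 33 filing, submitted within 72 hours and marked as
    phased under Article 33(4); supplemented as the investigation completes."""
    nature_of_breach: str              # Art. 33(3)(a): what happened, in plain terms
    approx_data_subjects: int | None   # unknown is acceptable in an initial filing
    approx_records: int | None
    contact_point: str                 # Art. 33(3)(b): DPO or other contact
    likely_consequences: str           # Art. 33(3)(c)
    measures_taken_or_proposed: str    # Art. 33(3)(d)
    phased: bool = True                # further information to follow
    open_questions: list[str] = field(default_factory=list)
```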
Article 34 is a separate decision: notify the affected individuals if the breach is "likely to result in a high risk" to their rights and freedoms. The bar is higher than the supervisory authority threshold. Most consumer-tier ChatGPT incidents that trigger Article 33 notification do not also trigger Article 34, but if sensitive data about identifiable individuals was exposed to model training, treat that as Article 34 territory.
In parallel, file a deletion request through OpenAI's privacy request portal (privacy.openai.com) or by email to privacy@openai.com, with the approximate date, time, and account used. For incidents on Team, Business, or Enterprise, the workspace owner has direct control over conversation deletion and audit logging via the admin console. Document everything you ask for and everything that comes back. The paper trail matters more than the outcome; it is what an Article 5(2) accountability defence looks like.
Containment for this category of incident is two things: removing the unsafe path so it cannot be used again, and making sure the team understands what changed.
The removal step: close the path that was used. Stop work use of the consumer account, retire any work-email consumer accounts logged in the first hour, and give the person an approved workspace seat so the same task has a sanctioned route.
The communication step is the part teams usually skip. Send a short note to the whole team, not just the person involved. It should say what happened (anonymised), why it was an incident, what the company is doing about it, and where the policy lives now. The point is to make the next person who is tempted to do the same thing pause and remember the message. Quiet remediation with no team-level communication trains everyone to assume nothing happened.
Document the incident in your breach register, even if you decided notification was not required. Article 33(5) requires you to keep this register for any breach, notified or not. It is the artefact a supervisory authority asks for first in any audit.
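Article 33(5) also fixes what the register has to record: the facts relating to the breach, its effects, and the remedial action taken. A minimal sketch of one entry, structure illustrative:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class BreachRegisterEntry:
    """One Article 33(5) register entry, kept whether or not the incident
    was notified."""
    became_aware: datetime
    facts: str                  # what happened, which account, what data
    effects: str                # likely consequences for the individuals
    remedial_action: str        # deletion request, containment, team notice
    notified_authority: bool
    notified_individuals: bool
    decision_rationale: str     # why notification was or was not required; the audit answer
```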
Then the two prevention layers:
The policy layer. Write a one-page AI acceptable use policy that lists the approved tools (Team / Business / Enterprise tiers, API access via the company workspace, internal LLMs), the data types that can never be pasted (customer PII, special category data, secrets, source code with credentials, anything covered by an NDA), and the consequences of violations. The acceptable use policy guide walks through the structure. Banning consumer ChatGPT without providing approved alternatives is what creates shadow AI and makes the next incident harder to discover.
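The same one-pager is worth keeping as data alongside the prose, so onboarding material and any tooling stay in sync with the policy; a sketch, entries illustrative:

```python
AI_ACCEPTABLE_USE = {
    "approved_tools": [
        "ChatGPT Team / Business / Enterprise via the company workspace",
        "API access via the company workspace",
        "internal LLMs",
    ],
    "never_paste": [
        "customer PII",
        "special category data",
        "secrets and credentials",
        "source code containing credentials",
        "anything covered by an NDA",
    ],
    "consequences": "defined in the policy document and applied consistently",
}
```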
The technical layer. Modern AI-native data loss prevention closes the gap legacy DLP misses. Nightfall's Spring 2025 launch added clipboard-paste interception for ChatGPT, Claude, Gemini, Perplexity, and Microsoft Copilot. Cyberhaven uses data lineage to track where a piece of content originated, so a Salesforce export that was downloaded, pasted into a Google Doc, then pasted into a chat window can be classified by provenance instead of pattern matching. Microsoft Purview now integrates directly with ChatGPT Enterprise for audit, eDiscovery, and Communication Compliance. Pick one. Any of them is an order of magnitude better than nothing, and the deciding factor is usually which platform your security team already operates.
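For a feel of what the pattern-matching baseline does, and why provenance-based lineage is a step up from it, a toy pre-paste check; none of this is any vendor's API, and real products go far beyond a few regexes:

```python
import re

# Toy detectors for a few "never paste" categories; illustrative only.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|pk|AKIA)[A-Za-z0-9_-]{16,}\b"),
}

def flag_paste(text: str) -> list[str]:
    """Return the categories a paste would trip before it leaves the clipboard."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(text)]

print(flag_paste("Customer jane.doe@example.com complained about order 4412"))
# ['email']
```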
The honest trade-off: technical controls catch the deliberate exfiltration and the careless paste. They do not catch the employee who screenshots a record, opens ChatGPT on their phone, and re-types the content. The policy is the only thing that addresses the screenshot-and-retype path, and it works only if people read it and believe the company will notice.
The Samsung pattern is the median, not the outlier. Treat any first-discovered incident as an indicator, not the population. The prevention layer that pays back the most is the one-page acceptable use policy plus a Team or Enterprise seat that gives people somewhere safe to paste. DLP closes the gap, but the policy is what changes behaviour. Document the incident even when you do not notify; the breach register is what an audit asks for first.
Six questions a regulator, a DPO, or an enterprise customer will ask you about AI and customer data. Grounded in 2025-2026 enforcement, CNIL guidance, and the Court of Rome OpenAI annulment.
A guide-tier walkthrough of writing an AI acceptable use policy that survives contact with reality. Includes the full template, the four sections that matter, the rollout playbook, and the EU AI Act Article 4 connection most teams miss.
Three tiers of shadow AI in 2026: the browser tab, the in-SaaS toggle, the OAuth-scoped agent. IBM puts the breach delta at $670K, Article 4 enforcement starts 2 August 2026, and a register beats a ban.