Articles cross-indexed by topic. Tags cut across the four clusters — the same tag can show up under AI Privacy, GDPR + AI, AI Act, or AI Security.
What goes into the model: scraped corpora, customer data, fine-tuning sets, and the rights that follow.
AI tools your team is already using without telling you — the procurement gap, the privacy gap, and how to surface those tools before an incident does.
Patterns where personal data leaves your trust boundary through an AI feature, and what to do in the first 72 hours.
Privacy and compliance issues specific to ChatGPT and the OpenAI consumer surface — including the moments employees paste things they should not.
When customer data flows into an AI feature: the consent question, the contractual question, and the breach-notification question.
Data processing agreements with AI vendors: what to look for, what to push back on, and the clauses most teams miss.
Internal AI acceptable-use policies — the rules you write so your team knows what they can paste and where.
The cascade of vendors sitting under your AI provider, and the disclosure obligations that follow them down the stack.
Due diligence checklists for picking an AI provider — what to ask, what to compare, and what to refuse to sign.
Logging, observability, and audit trails for AI systems — without turning your monitoring into the next privacy incident.
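The core tension here can be shown in a few lines: you want a durable audit trail of prompts and responses, but the raw text is often full of personal data. A minimal sketch, assuming hypothetical `redact` and `audit_record` helpers and a deliberately crude pattern list — a real deployment needs a fuller PII taxonomy and a keyed pseudonymisation scheme:

```python
import hashlib
import re

# Illustrative patterns only; real PII detection needs far broader coverage.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def redact(text: str) -> str:
    """Mask direct identifiers before a prompt/response pair is logged."""
    return PHONE.sub("[PHONE]", EMAIL.sub("[EMAIL]", text))

def audit_record(user_id: str, prompt: str, response: str) -> dict:
    """Audit-trail entry: pseudonymous user key, redacted content.

    The log stays useful for debugging and audit, but no longer holds
    the raw identifiers that would make it the next privacy incident.
    """
    return {
        "user": hashlib.sha256(user_id.encode()).hexdigest()[:12],
        "prompt": redact(prompt),
        "response": redact(response),
    }
```

The design choice worth noting: redaction and pseudonymisation happen at write time, not at query time — once raw identifiers land in log storage, every downstream copy inherits the problem.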
Cross-border data flows after Schrems II and the 2023 EU–US Adequacy Decision — transfer impact assessments (TIAs), standard contractual clauses (SCCs), and where the AI providers actually sit.
Vector embeddings and the GDPR question of whether they qualify as personal data once you can re-identify someone from them.
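Why the question even arises is easy to demonstrate. A toy sketch with invented three-dimensional vectors (real stores hold high-dimensional model embeddings): if anyone holds a mapping from embeddings back to source records, nearest-neighbour search over a stray vector recovers the person — which is exactly the identifiability test of GDPR Article 4(1).

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

# Toy vector store keyed by the records the embeddings were computed from.
# Vectors are invented for illustration.
store = {
    "alice@example.com": [0.9, 0.1, 0.0],
    "bob@example.com":   [0.1, 0.8, 0.2],
}

def reidentify(query_vec, threshold=0.95):
    """Return the stored record a 'loose' embedding most likely came from.

    If the best match clears the threshold, the embedding points to an
    identifiable person — the argument for treating it as personal data.
    """
    best = max(store, key=lambda k: cosine(query_vec, store[k]))
    return best if cosine(query_vec, store[best]) >= threshold else None
```

The same mechanism is why deleting the source record without deleting (or rotating) the embedding may not complete an erasure request.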
Autonomous AI agents that read, write, and act on production data — and the access-control problems they create.
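One common mitigation is a deny-by-default policy gate in front of every tool call, so the agent holds an explicit grant list rather than inheriting a service account's full access. A minimal sketch — `AGENT_GRANTS`, the agent names, and the `invoke` gate are all hypothetical:

```python
# Hypothetical policy table: each agent gets explicit (tool, action) grants.
AGENT_GRANTS = {
    "support-bot": {("tickets", "read"), ("tickets", "write")},
    "report-bot":  {("tickets", "read"), ("billing", "read")},
}

class ToolDenied(PermissionError):
    """Raised when an agent attempts a tool call outside its grants."""

def invoke(agent: str, tool: str, action: str, payload: dict):
    """Deny-by-default gate: unknown agents and ungranted calls both fail."""
    if (tool, action) not in AGENT_GRANTS.get(agent, set()):
        raise ToolDenied(f"{agent} may not {action} {tool}")
    # Dispatch stub; a real gate would also log the call for audit.
    return {"tool": tool, "action": action, "payload": payload}
```

The point of putting the check in the call path, rather than in the prompt, is that a jailbroken or confused agent still cannot exceed its grants.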
AI in HR, recruitment, monitoring, and performance — where employee data meets automated decision-making.
Data subject access requests against AI stacks — what you have to find, how to explain it, and how to meet the one-month clock.
GDPR Article 17 against trained models: vector stores, fine-tuned weights, and what 'erasure' actually means in an AI context.
When an AI feature triggers a Data Protection Impact Assessment, and what a defensible DPIA looks like in practice.
The 72-hour clock under GDPR Article 33 and how it interacts with breaches that happen at your AI provider rather than on your own systems.
Adversarial inputs that turn a helpful LLM into a confused deputy — and the data exfiltration paths they open.
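One of those exfiltration paths is concrete enough to sketch: an injected prompt tells the model to emit a link or image whose URL carries stolen data in the query string, and the client dutifully fetches it. A minimal output-side check, assuming a hypothetical `ALLOWED_HOSTS` allowlist — a sketch of one control, not a complete injection defence:

```python
import re
from urllib.parse import urlparse

ALLOWED_HOSTS = {"example.com"}  # placeholder allowlist of trusted hosts

URL = re.compile(r"https?://[^\s)\"']+")

def exfil_suspects(model_output: str):
    """Flag URLs in model output that point off-allowlist and carry a
    query string -- the shape of the classic injection exfil channel."""
    suspects = []
    for url in URL.findall(model_output):
        parts = urlparse(url)
        if parts.hostname not in ALLOWED_HOSTS and parts.query:
            suspects.append(url)
    return suspects
```

Flagged outputs can be blocked or have their links defanged before rendering; the filter runs on the model's output precisely because the confused deputy is the model itself.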
Where your AI provider actually stores and processes data, and how that interacts with sovereignty and adequacy regimes.