A curated reference for developers: the GDPR and AI Act terms you will trip over when you ship an AI feature, one tight line each, grouped by role, legal concept, document, and AI Act specifics.
Regulation text is precise but dense. Developer documentation is clear but skips the legal context. This page is the middle layer: one tight definition per term, with pointers to the deeper write-ups where they exist. Use Ctrl-F.
Data controller (GDPR Art. 4(7)) — You decide why and how personal data gets used. If you build an AI feature on someone else's API, you are almost certainly the controller. GDPR obligations for transparency, legal basis, data subject rights, and DPIAs land on you, not on the API provider.
Data processor (GDPR Art. 4(8)) — Someone who handles personal data on your instructions. OpenAI, Anthropic, and Google are typically processors when you call their APIs. You stay responsible for what happens to the data.
Sub-processor — Your processor's processor. When OpenAI runs on Microsoft Azure, Azure is a sub-processor. When Anthropic expanded its use of Google Cloud TPUs in October 2025, Google became a more prominent sub-processor of Anthropic, and, transitively, of any team whose application runs on Claude. Your DPA should list sub-processors or commit to change notifications. See the AI sub-processor cascade.
Provider vs deployer (AI Act Art. 3(3) and 3(4)) — The provider built the AI system (OpenAI built GPT, Anthropic built Claude). The deployer uses someone else's AI system in a product or business. Calling an API and showing the output to your users makes you a deployer. Substantially modifying the system and re-branding it can push you into provider status, with heavier obligations. See deployer obligations.
Personal data (GDPR Art. 4(1)) — Any information relating to an identified or identifiable person. Names, emails, IP addresses, and device IDs are obvious. Less obvious: customer IDs in a database seed, developer emails in a git commit, vector embeddings derived from personal data, AI memory that persists across sessions.
Special category data (GDPR Art. 9) — Extra-sensitive personal data: health, biometrics, ethnicity, political opinions, religion, trade-union membership, sex life, and sexual orientation. Processing is prohibited except under specific conditions. AI Act Article 10(5) opens a narrow carve-out: you may process special category data strictly for bias detection in high-risk AI systems, with safeguards.
Legal basis (GDPR Art. 6) — One of six grounds that makes processing personal data lawful. The three that matter for AI features: legitimate interest (the most practical, but requires a documented three-step balancing test), consent (must be freely given, specific, informed, and revocable — hard to do well when AI training is involved), and contract performance (works when the AI functionality is the thing the customer signed up for). Document the basis before the processing starts.
Profiling (GDPR Art. 4(4)) — Automated processing that evaluates personal aspects: preferences, behavior, economic situation, work performance, location. Recommendation engines, persistent AI memory, and behavioral analytics all count. Under the AI Act, profiling in an Annex III domain (employment, credit, education, essential services) automatically pushes the system to high-risk with no exemption available.
Automated individual decision-making (GDPR Art. 22) — A decision based solely on automated processing that produces legal or similarly significant effects. The 2023 SCHUFA ruling (CJEU Case C-634/21) held that even an AI-generated score can qualify as an Article 22 decision when a third party draws heavily on it to decide a contract. "A human clicks the button" does not necessarily take you out of Article 22 — the question is how much weight the decision actually places on the AI output.
DPA — the two meanings — "DPA" is overloaded. Data Processing Agreement (GDPR Art. 28) is the contract between controller and processor specifying data scope, purpose, retention, sub-processors, and security; required whenever you send personal data to a third party that processes it for you. Data Protection Authority is the national regulator — the Italian Garante, French CNIL, Dutch AP, Irish DPC. Context makes clear which is meant. The Italian Garante's December 2024 €15 million fine against OpenAI was annulled by the Court of Rome on 18 March 2026, with the full reasoning not yet published as of April 2026.
DPIA — Data Protection Impact Assessment (GDPR Art. 35) — A structured document that describes your processing, assesses the risks to individuals' rights, and documents the mitigations. Required before processing begins when the risk is high. Most AI systems trip at least two of the EDPB's nine criteria — innovative technology, large-scale processing, automated decisions, evaluation or scoring — so a DPIA is the practical default. See do you need a DPIA for your AI feature.
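The EDPB screen is mechanical enough to express in code. A minimal sketch: the nine criterion names are paraphrased from the EDPB guidelines (WP248 rev.01), and "two or more criteria means do a DPIA" is the EDPB's rule of thumb, not a hard legal line:

```python
# Minimal DPIA screening sketch based on the EDPB's nine criteria
# (WP248 rev.01). Criterion names are paraphrased; the "two or more"
# threshold is the EDPB's rule of thumb -- when in doubt, do the DPIA.

EDPB_CRITERIA = [
    "evaluation_or_scoring",
    "automated_decisions_with_significant_effect",
    "systematic_monitoring",
    "sensitive_or_highly_personal_data",
    "large_scale_processing",
    "matching_or_combining_datasets",
    "vulnerable_data_subjects",
    "innovative_technology",
    "prevents_exercise_of_rights_or_contracts",
]


def dpia_screen(flags: set[str]) -> tuple[bool, list[str]]:
    """Return (dpia_recommended, criteria_hit) for a processing activity."""
    hit = [c for c in EDPB_CRITERIA if c in flags]
    return len(hit) >= 2, hit


# A typical AI chat feature with persistent memory trips three criteria:
needed, hit = dpia_screen({
    "innovative_technology",   # LLM-based feature
    "large_scale_processing",  # all users' conversations
    "evaluation_or_scoring",   # personalization on stored memory
})
```

Running the screen on a real feature usually confirms the point in the entry above: most AI features land at two or more criteria almost immediately.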
FRIA — Fundamental Rights Impact Assessment (AI Act Art. 27) — The DPIA's AI Act sibling. Covers all fundamental rights, not just data protection. Required for public bodies, private entities providing public services, and deployers using AI for creditworthiness or insurance risk pricing. Article 27(4) says the FRIA complements an existing DPIA — do the DPIA first, then layer the FRIA on top.
SCCs — Standard Contractual Clauses — European Commission pre-approved contract templates for transferring personal data outside the EEA. Required when sending data to a country without an adequacy decision, or as a belt-and-suspenders backstop to the Data Privacy Framework. The modular clauses in Commission Implementing Decision (EU) 2021/914 are the current text.
DPF — Data Privacy Framework — The successor to the invalidated Privacy Shield, in force since July 2023. US companies that self-certify under the DPF can receive EU personal data without needing SCCs. The framework was challenged by MEP Philippe Latombe at the EU General Court in 2025; the appeal is pending at the CJEU. Many teams belt-and-suspender with SCCs anyway. See the 2026 state of EU-US AI transfers.
High-risk AI system — An AI system classified under Annex III (employment, credit, education, essential services, biometrics, law enforcement, justice, migration) or as a safety component of a regulated product under Annex I. Subject to conformity assessments, technical documentation, human oversight, logging, and monitoring obligations. See the AI Act plain-English overview.
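The classification logic from the profiling and high-risk entries above can be sketched as a lookup. Domain names are abbreviated from the list in this entry; the real analysis requires reading Annex III and the Article 6(3) exemption conditions in full, which this sketch reduces to a single flag:

```python
# Rough AI Act risk-tier lookup sketch. Domain names paraphrase the Annex III
# list above; the Art. 6(3) exemption analysis is reduced to one rule here:
# profiling of natural persons removes the exemption entirely.

ANNEX_III_DOMAINS = {
    "employment", "credit", "education", "essential_services",
    "biometrics", "law_enforcement", "justice", "migration",
}


def risk_tier(domain: str, does_profiling: bool = False) -> str:
    if domain in ANNEX_III_DOMAINS:
        if does_profiling:
            return "high-risk (no Art. 6(3) exemption: profiling)"
        return "high-risk unless an Art. 6(3) exemption applies"
    return "not Annex III (check Annex I and the transparency rules)"
```

Note the asymmetry: a CV-screening tool that merely formats résumés might argue an exemption, but the moment it scores candidates it is profiling in an Annex III domain, and the exemption door closes.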
AI literacy (AI Act Art. 4) — Providers and deployers must ensure that staff dealing with AI systems have "a sufficient level of AI literacy." No prescribed curriculum — the standard is calibrated to the role and the system. In force since 2 February 2025. The cheapest item on the deployer compliance list, and the one most teams have not yet addressed.
GPAI model — General-Purpose AI — Foundation models that can perform a wide range of tasks: GPT, Claude, Gemini, Llama, Mistral. Subject to transparency, documentation, and copyright obligations since 2 August 2025. Models trained above the 10²⁵ FLOPs threshold are classified as presenting systemic risk and attract additional obligations.
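The 10²⁵ FLOPs line can be sanity-checked with the common "6 × parameters × training tokens" compute approximation. A back-of-envelope sketch: the 6ND formula is a rule of thumb from the scaling-law literature, not the Act's own counting method, which measures cumulative training compute:

```python
# Back-of-envelope check against the AI Act's 10^25 FLOPs systemic-risk
# presumption (Art. 51(2)), using the common approximation
#   FLOPs ~= 6 * N_parameters * N_training_tokens
# -- a scaling-law rule of thumb, not the Act's counting method.

SYSTEMIC_RISK_THRESHOLD = 1e25


def approx_training_flops(params: float, tokens: float) -> float:
    return 6.0 * params * tokens


def presumed_systemic_risk(params: float, tokens: float) -> bool:
    return approx_training_flops(params, tokens) >= SYSTEMIC_RISK_THRESHOLD


# A 70B-parameter model trained on 15T tokens lands at ~6.3e24: just under.
small = approx_training_flops(70e9, 15e12)
# A 700B-parameter model on the same data lands at ~6.3e25: over the line.
large = approx_training_flops(700e9, 15e12)
```

The takeaway for deployers: today's strongest frontier models sit above the threshold, so building on them means building on a systemic-risk GPAI model, with the extra provider obligations that implies upstream.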
The five GDPR articles that actually decide whether your AI feature ships in 2026: legal basis, transparency after Dun & Bradstreet, Article 22, privacy by design, and DPIA.
The EU AI Act's structure, risk tiers, timeline, and penalties in one place — a reference for developers and small teams. Updated April 2026 with the Digital Omnibus trilogue state.
The 2026 state of the GDPR/AI Act interplay. What Joint Opinion 1/2026 and C-203/22 tell you about DPIAs, FRIAs, Article 22, Article 10 bias data, and fines.