A code-review walkthrough of the seven questions a senior reviewer should ask before an AI API integration ships, with the EU regulatory anchors that make each one load-bearing in 2026.
The integration is in code review. The PR adds a few hundred lines: an OpenAI client, two or three prompts, a small retry wrapper, a logging line, an entry in the privacy policy. The senior reviewer skims it, asks a handful of questions, and the PR either ships or it does not.
The questions the reviewer should ask are not the ones in most "AI API privacy" articles. The conventional list (legal basis, DPA, transparency, risk) is right but generic. The useful version is concrete enough to be answered by pointing at a specific file or settings page. If the answer is "let me check" or "I think we have one somewhere", the answer is no.
This piece is the seven-question version: seven things a reviewer should be able to see in the PR or in a linked artefact within sixty seconds. Each one is anchored in something that has shifted in EU AI privacy law in the last 12 months. The shape is "show me X" because that is what code review actually sounds like, and because "show me" is harder to fake than "have you considered".
The first thing the reviewer should see is not the prompt template. It is the populated request body. The actual JSON that goes over the wire on a real call, with real or realistic data. Print it. Read it. Count the personal data fields.
This is the question most teams skip because the prompt template looks innocent. A line like f"Summarise this support ticket: {ticket.body}" reads as a one-line abstraction, not as a payload that, in production, contains a customer's full name, two email addresses, a payment dispute summary, and a paragraph about a medical reason for the cancellation. The template hides what the call actually sends.
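A minimal sketch of the "print the populated body" check. The model name, ticket text, and field shapes are illustrative, and the regexes are a crude starting point for counting personal-data fields, not a classifier:

```python
import json
import re

def build_request_body(ticket_body: str) -> dict:
    # Illustrative chat-completion request shape; field names vary by provider.
    return {
        "model": "gpt-4o-mini",
        "messages": [
            {"role": "user", "content": f"Summarise this support ticket: {ticket_body}"},
        ],
    }

# A realistic ticket body -- this is what the innocent-looking template hides.
ticket = (
    "Jane Doe (jane.doe@example.com, billing contact j.doe@corp.example) disputes "
    "invoice #4411 and cancelled for a medical reason. Call +44 7700 900123."
)

body = build_request_body(ticket)
wire = json.dumps(body, indent=2)  # the actual bytes that go over the wire

# Crude counters for two easy-to-spot categories of personal data.
emails = re.findall(r"[\w.+-]+@[\w-]+\.[\w.]+", wire)
phones = re.findall(r"\+\d[\d\s]{7,}\d", wire)

print(wire)
print(f"email addresses in payload: {len(emails)}")
print(f"phone numbers in payload:   {len(phones)}")
```

Running this against a handful of real tickets is usually enough to make the minimisation conversation concrete: the reviewer reads the payload, not the template.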
The four blind spots, in order of frequency:
OWASP's 2025 LLM Top 10 moved "Sensitive Information Disclosure" from #6 to #2, the largest single jump on the list. Read the OWASP entry: the vulnerability is in the application that constructs the prompt, not in the model. Most of the documented incidents are not model leaks. They are over-stuffed prompts.
Article 6 of the GDPR demands a lawful basis for every processing activity. Sending personal data to an AI API is a processing activity. The reviewer should be able to point at a one-paragraph statement of which basis applies and why.
In practice, three bases cover most AI API integrations:
Contract performance (Article 6(1)(b)). The user signed up for a product that includes the AI feature. The processing is necessary to deliver what the user signed up for. This works when the feature is core to the service and the user understood it was there at signup. It does not work for AI processing that goes beyond what the user expected, or for using their data to improve the model for other users.
Legitimate interest (Article 6(1)(f)). You have a clearly articulated business interest, the processing is necessary to that interest, and the balance against the user's rights comes out in your favour. This is the basis EDPB Opinion 28/2024 and CNIL's June 2025 recommendations both spend the most time on, because it is the basis most providers and most deployers want to use.
The catch with legitimate interest is the LIA. A Legitimate Interest Assessment is a document. It identifies the interest, demonstrates necessity, and runs the balancing test. The CNIL's 19 June 2025 recommendations on legitimate interest for AI are explicit: the LIA and the mitigation plan must exist before training (or, by extension, deployment) starts. Drafting it retroactively after a complaint is not compliance. It is paperwork.
Consent (Article 6(1)(a)). The user explicitly opts in. Useful for optional AI features or processing that goes beyond the core service. Consent must be freely given, specific, informed, and revocable. "By using this app you agree to AI processing" buried in the terms of service does not meet the standard. A clear toggle with a plain explanation does.
GDPR Article 28 requires a written contract whenever a third party processes personal data on your behalf. The reviewer should be able to see, in under thirty seconds, that the DPA exists and that it covers the specific product the integration calls.
The five things to verify, in the order they trip people up:
If any of these takes longer than thirty seconds to verify, the DPA layer of the integration is not in production-ready shape.
Articles 13 and 14 of the GDPR require the controller to inform users about how their data is processed. If the PR adds an AI API call, the privacy notice has to reflect the change. The reviewer should be able to point at the diff in the privacy policy and the in-product notice.
What the user has to be told:
The legal baseline is the privacy policy. The actually-useful disclosure is a short in-product line near the AI feature: "Responses are generated by an AI service. We do not train the model on your data." One sentence. It does more for trust than three paragraphs of legal text in a policy nobody reads.
The trap is retroactive disclosure. If the AI capability is being added to a feature that already existed, users who consented to the original processing did not consent to the AI-powered version. Article 13(3) requires re-informing the user when the purpose of processing changes. A change-log entry buried in the policy timestamp page does not satisfy the obligation. An in-app banner the next time the user opens the feature does.
CJEU C-203/22 (Dun & Bradstreet, 27 February 2025) widened the Article 22 framework substantially: a decision that "draws strongly" on automated logic falls within the scope even if a human formally signs off. If your AI feature surfaces a recommendation that humans almost always accept, you are in Article 22 territory and the privacy notice has to say so.
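The "humans almost always accept" claim is measurable. A minimal sketch, assuming your audit log can yield (recommended, final) pairs -- the schema is hypothetical, adapt it to your own:

```python
def override_rate(decisions):
    """Fraction of AI recommendations a human actually changed."""
    decisions = list(decisions)
    if not decisions:
        return None  # no evidence either way
    overridden = sum(1 for recommended, final in decisions if recommended != final)
    return overridden / len(decisions)

# Illustrative audit-log sample: 97 rubber-stamps, 3 genuine overrides.
sample = [("approve", "approve")] * 97 + [("approve", "reject")] * 3
rate = override_rate(sample)
print(f"override rate: {rate:.1%}")
# A near-zero override rate is evidence the human sign-off is formal --
# i.e. the decision "draws strongly" on the automated logic.
```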
This is the question that catches the most teams in audit, because the leak is invisible from the AI feature itself.
Your application logs API requests for debugging. Those logs contain the populated prompt body, the same body that the data minimisation question above already covered. The retention policy on the logging system is now the retention policy on every personal data field in those prompts. If the AI integration is configured for Zero Data Retention at the OpenAI side and your Datadog or Sentry pipeline keeps the prompt for 90 days, the ZDR setting is theatre.
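One mitigation at the logging layer is a redaction filter that runs before any handler sees the record. A minimal sketch using Python's stdlib logging; the email pattern and the `<email>` placeholder are illustrative, and real prompts need more patterns than this:

```python
import logging
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

class RedactPII(logging.Filter):
    """Masks email addresses in the formatted message before it reaches any sink."""
    def filter(self, record: logging.LogRecord) -> bool:
        record.msg = EMAIL.sub("<email>", record.getMessage())
        record.args = ()  # already interpolated above
        return True

logger = logging.getLogger("ai.calls")
logger.addHandler(logging.StreamHandler())
logger.addFilter(RedactPII())
logger.setLevel(logging.INFO)

logger.info("Sending prompt for jane.doe@example.com")
# Emitted line reads: Sending prompt for <email>
```

A filter like this does not excuse over-stuffed prompts, but it stops the observability pipeline from silently extending the retention of whatever the prompts do carry.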
The places to check:
- Sentry's send_default_pii=True. This flag captures user identifiers and request bodies for non-AI debugging, and most teams flip it on without realising it also captures LLM prompts in the OpenAI integration unless OpenAIIntegration(include_prompts=False) is set.
- Any log.info(f"Sending prompt: {prompt}") line that someone added during local development and never removed.

A user exercises Article 17. Their data is in your database. It is also in the populated prompts your application sent to the AI API two weeks ago, and in the observability logs that captured those prompts, and in the abuse-monitoring buffer at the AI provider. Show the reviewer the path that erases all of it.
For most API integrations, the pieces look like this:
The reviewer's check: open the runbook or the inline code and trace what happens when the user clicks "delete my account". Every storage location that the AI integration writes to should appear in that trace. If one is missing, the deletion is incomplete and the Article 17 obligation is not satisfied.
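That trace is easier to keep complete when every storage location registers its own eraser. A hypothetical sketch -- the registry, decorator, and location names are illustrative, not a real deletion framework:

```python
# Every store the AI integration writes to registers an eraser here,
# and account deletion walks the whole map.
DELETION_TARGETS: dict[str, callable] = {}

def erases(location: str):
    def register(fn):
        DELETION_TARGETS[location] = fn
        return fn
    return register

@erases("primary database")
def erase_db(user_id: str) -> str:
    return f"db rows for {user_id} deleted"

@erases("prompt log archive")
def erase_prompt_logs(user_id: str) -> str:
    return f"archived prompts for {user_id} purged"

@erases("provider-side copy")
def erase_provider_copy(user_id: str) -> str:
    return f"deletion request filed with AI provider for {user_id}"

def delete_user(user_id: str) -> list[str]:
    # The reviewer's trace: one line per storage location, none missing.
    return [fn(user_id) for fn in DELETION_TARGETS.values()]

for line in delete_user("user-123"):
    print(line)
```

The structural benefit: a new storage location that forgets to register shows up as a missing line in the trace, which is exactly what the reviewer is looking for.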
Article 35 of the GDPR requires a DPIA when processing is "likely to result in a high risk" to data subjects. AI features that score, evaluate, or make decisions about individuals trip this threshold. So do features that process special-category data or operate at scale. (See the DPIA practical check for the test.)
The reviewer's check is binary: either there is a DPIA, or there is a one-paragraph documented reason why this integration does not need one. Both are acceptable. "We didn't do one and didn't think about why" is not.
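The binary check can be encoded so that "we didn't think about it" is unrepresentable. A hypothetical sketch; the record shape and wording are illustrative:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class DPIADecision:
    """Either a DPIA exists, or a documented reason why none is needed."""
    feature: str
    dpia_link: Optional[str] = None
    no_dpia_rationale: Optional[str] = None

    def __post_init__(self):
        if not (self.dpia_link or self.no_dpia_rationale):
            raise ValueError(f"{self.feature}: no DPIA and no documented reason")

ok = DPIADecision(
    feature="ticket_summary",
    no_dpia_rationale=("No scoring or evaluation of individuals, no "
                       "special-category data, limited scale; Article 35 "
                       "threshold not met. Reviewed 2026-04."),
)
print(ok.feature, "-> documented decision present")
```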
The AI Act layer: from 2 August 2026, deployers of high-risk AI systems under Annex III have to complete a Fundamental Rights Impact Assessment (FRIA) under Article 27, and the EDPS Joint Opinion 1/2026 has been clear that the DPIA and the FRIA "shall complement" each other. If your integration is a high-risk system, you need both, and the DPIA conversation is how the FRIA conversation starts.
Article 4 of the AI Act is also already in force (since 2 February 2025) and enforced from 2 August 2026: the deployer has to ensure "sufficient AI literacy" of staff using the system. The reviewer's question here is not about the integration code. It is about whether the rest of the team understands the integration well enough to operate it safely. (See the AI use policy guide for what "sufficient" looks like in practice.)
I think the framing that gets these questions answered is the one that makes them small. "Have you done a DPIA" is large and intimidating and easy to defer; "show me the DPIA decision in one paragraph" is small and answerable in sixty seconds. The same is true of the rest of the list. The questions are not bureaucratic. They are the integration moments where the law shows up in code.
Three things to do this week:
First, pick the next AI API integration that is in flight in your codebase and run the seven-question walk yourself. Time how long each answer takes. The ones that take more than sixty seconds are your backlog.
Second, add the seven questions to your PR template for any change touching an AI API. The discipline that gets them answered consistently is the same discipline that gets test coverage and security review consistently: make the question structural, not optional.
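A sketch of what that PR-template section might look like, assuming a GitHub-style markdown checklist; the seven items condense the questions walked through above:

```markdown
## AI API change checklist
- [ ] Populated request body printed and reviewed (paste or link)
- [ ] Lawful basis stated in one paragraph (link)
- [ ] DPA covers this specific product (link to clause)
- [ ] Privacy notice / in-product disclosure diff included
- [ ] Log pipelines redact or expire prompt bodies (link to config)
- [ ] Article 17 deletion trace covers every new storage location
- [ ] DPIA, or a one-paragraph reason why none is needed (link)
```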
Third, put a quarterly read of the provider's sub-processor list on the calendar, with a named owner. The Mixpanel incident is proof that the list moves and that the movement matters. The objection window only protects you if someone is watching it.
The five-question framing in the original version of this article was right. The seven-question code-review framing is the version that survives a real EU regulatory audit, because each item is concrete enough to be checked at a glance. That is the test the article is now built around.