GDPR, the DSA, and the AI Act apply different rules to different recommender systems. The three stakes tiers, the case law that reshaped them in 2025, and what each tier actually requires.
Three regulations apply to AI recommender systems in the EU: GDPR, the Digital Services Act, and the AI Act. They overlap in places and conflict in others, and the practical question for a team building a recommender is rarely "which one applies" but "which combination applies to this recommender". The honest answer sorts by stakes. Three tiers, each with its own regulatory regime, each triggered by what the recommender actually decides.
This article walks the three tiers, the case law that reshaped them in 2025, and the four checks worth running before launch.
GDPR Article 4(4) defines profiling as "any form of automated processing of personal data" that evaluates personal aspects (preferences, interests, behaviour, location, economic situation). If your recommender uses individual user data to personalise output, it is profiling. If it ranks the same way for everyone (popularity, editorial curation, content-only signals like genre or topic), it is not.
The distinction matters because every downstream rule walks through this gate. A "most popular this week" feed is not profiling and most of the rules in this article do not apply to it. A "recommended for you" feed driven by clicks, dwell time, and similar-user collaborative filtering is profiling and almost all of them do.
Almost every commercial recommender in 2026 is on the profiling side. The genuinely non-profiling ones are usually labelled "trending" or "editor's picks" and they exist as the regulatory escape hatch for the personalised feed sitting next to them.
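To make the gate concrete, here is a minimal sketch of the classification as code. The signal names are hypothetical; the legal test is whether any ranking input evaluates personal aspects of the individual user.

```python
# Hypothetical signal names; the legal test is GDPR Article 4(4):
# does any ranking input evaluate personal aspects of an individual user?

# Signals that personalise output for an individual user (profiling).
INDIVIDUAL_SIGNALS = {"clicks", "dwell_time", "watch_history", "similar_users"}

# Signals that rank the same way for everyone (not profiling).
GLOBAL_SIGNALS = {"popularity", "editorial_rank", "genre", "topic", "recency"}

def is_profiling(ranking_inputs: set[str]) -> bool:
    """True if any input evaluates personal aspects of an individual user."""
    return bool(ranking_inputs & INDIVIDUAL_SIGNALS)

print(is_profiling({"popularity", "recency"}))     # False: a "trending" feed
print(is_profiling({"popularity", "dwell_time"}))  # True: one personal signal trips the gate
```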
Tier 1 covers the bulk of deployed recommenders: Netflix-style content suggestions, Amazon-style product recommendations, Spotify-style music feeds, Pinterest-style image grids. The recommender profiles users and the output influences what they see, but the output does not produce a "legal effect" or "similarly significantly affect" them in the way GDPR Article 22 means.
For Tier 1, the load-bearing rule is DSA Article 27. Every "online platform" operating in the EU, not only the very large ones, must set out in plain language, in the terms and conditions, "the main parameters used in their recommender systems" and explain why the parameters matter and how a user can modify them. The disclosure must include the criteria most significant in determining what is suggested and the relative importance of each.
In practice, that means a paragraph in the terms of service that names the inputs (clicks, watch time, items added to cart, similar-user behaviour), explains what each one optimises for, and points to where the user can change the defaults. Most platforms still do this badly. The DSA Observatory's May 2025 review of major platforms argued that current implementations look closer to "transparency washing" than actual disclosure, and the European Commission has made it clear in opening statements on multiple investigations that "main parameters" means more than a list of bullet points without weights.
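What the disclosure needs to carry can be sketched as structured data before it is rendered into the terms-of-service prose. The parameter names, weights, and settings path below are all hypothetical; the shape is what Article 27 asks for: the most significant criteria, their relative importance, and a pointer to the controls.

```python
# A hypothetical disclosure structure; render this into the ToS section
# rather than writing "various signals" by hand.

RECOMMENDER_DISCLOSURE = {
    "parameters": [
        {"name": "watch_time",    "weight": 0.4, "optimises_for": "content you finish"},
        {"name": "clicks",        "weight": 0.3, "optimises_for": "content you open"},
        {"name": "similar_users", "weight": 0.2, "optimises_for": "content liked by users like you"},
        {"name": "recency",       "weight": 0.1, "optimises_for": "newer content"},
    ],
    "controls_url": "/settings/recommendations",  # hypothetical path to the user-facing toggles
}
```

A structure like this also makes the Tier 1 check below mechanical: the named parameters, their weights, and the settings link are either present in the data or they are not.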
The parameter disclosure is paired with a user-control obligation. Where multiple recommender options exist, the user must be able to select and modify them at any time, and the control must be directly and easily accessible from the section where the content is presented. Burying the toggle in a settings page two levels deep is not "directly accessible".
Tier 1 also picks up GDPR's transparency obligations under Articles 13-14 and the right-to-object framework under Article 21. If your legal basis for the profiling is legitimate interest under Article 6(1)(f), every user can object and you must stop the profiling-based recommendations for that user unless you can demonstrate compelling legitimate grounds that override their interests. For most consumer-facing recommenders, the honest answer is that you cannot, and the right course is to honour the objection by serving non-personalised results.
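A minimal sketch of what honouring the objection looks like in the ranking path, assuming a hypothetical user model and a stand-in scoring function: the objection flag routes the user to a non-profiling ranking instead of degrading or blocking the service.

```python
from dataclasses import dataclass

@dataclass
class User:
    objected_to_profiling: bool
    interests: set[str]

@dataclass
class Item:
    topic: str
    popularity: float

def personal_score(user: User, item: Item) -> float:
    # Stand-in for the real model: boost items matching stored interests.
    return item.popularity + (1.0 if item.topic in user.interests else 0.0)

def rank_feed(user: User, candidates: list[Item]) -> list[Item]:
    if user.objected_to_profiling:
        # Objection honoured: rank on popularity alone, no personal data used.
        return sorted(candidates, key=lambda i: i.popularity, reverse=True)
    # Default path: profiling-based ranking under Article 6(1)(f).
    return sorted(candidates, key=lambda i: personal_score(user, i), reverse=True)
```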
The check that fits Tier 1: does the terms-of-service section on recommender systems name the actual parameters (not "various signals"), explain the relative weight of each, and link to a settings panel where the user can change them? If you cannot answer yes, the disclosure does not satisfy Article 27 as it is being read by the Commission in 2026.
Tier 2 is the same recommender shape as Tier 1, but with three structural differences that move it into a higher regulatory band. First, it operates at platform scale (not "store with personalised suggestions" but "feed where the recommender is the product"). Second, it is optimised for engagement metrics (time on site, sessions per day, scroll depth) rather than "did the user find what they were looking for". Third, the optimisation has knock-on effects on user wellbeing or the spread of content with systemic significance.
For Tier 2, three rules layer on top of the Tier 1 baseline.
DSA Article 25 (dark patterns). Online platforms must not design their interfaces in a way that deceives or manipulates users or otherwise materially distorts their ability to make free and informed decisions. The Commission's December 2025 €120M fine against X cited Article 25(1) in the context of the blue-checkmark verification design. The regulator's read is that visual signals that deceive users about the trustworthiness of an account are dark patterns even when no recommender is involved. Recommenders that exploit psychological vulnerabilities (infinite scroll combined with attention-maximising ranking) sit in the same line of fire.
DSA Articles 34-35 (systemic-risk assessment for VLOPs). If your platform is above the 45M EU monthly active users threshold, you must conduct an annual systemic-risk assessment that specifically considers the design of your recommender system, including risks to "the protection of public health and minors and serious negative consequences to the person's physical and mental well-being". The Commission's 6 February 2026 preliminary findings against TikTok argued that the combination of infinite scroll, autoplay, push notifications, and the personalised recommender system creates exactly this kind of systemic risk and that TikTok had not adequately assessed it. The proposed remedies included disabling infinite scroll, implementing screen-time breaks, and adapting the recommender. The exposure if the preliminary findings become a non-compliance decision is up to 6 percent of worldwide annual turnover.
AI Act Article 50 transparency for AI interaction. Where the recommender sits behind a system that interacts directly with users, the provider must design it so that users are informed they are interacting with AI unless that is obvious from the context. Most engagement feeds will not trigger this on their own, but adjacent features (AI summaries of recommended content, AI-driven prompts, AI chat overlays) usually do.
The check that fits Tier 2: is your engagement-optimisation function explicit and assessed against well-being risks, or is it the implicit consequence of optimising for "session length" without a written-down tradeoff? If your team cannot show a documented decision about what the recommender is optimising for and what risks that creates, you do not have an assessment a regulator will accept.
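One way to make that decision auditable is to keep it as a reviewable artefact in version control. Everything below is hypothetical; the point is that the objective, the assessed risks, and the mitigations are written down rather than implicit.

```python
# A hypothetical documented-optimisation record; field names and values
# are illustrative, not a prescribed format.

OPTIMISATION_DECISION = {
    "objective": "session_length",
    "decided_by": "product + legal review, 2026-01-15",  # hypothetical date
    "assessed_risks": [
        "compulsive-use patterns from infinite scroll plus attention-maximising ranking",
        "over-exposure of minors to engagement-maximising content",
    ],
    "mitigations": [
        "screen-time break prompts",
        "non-profiling feed option, directly accessible from the feed",
    ],
    "review_cadence": "annual, aligned with the DSA Article 34 assessment",
}
```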
Tier 3 is where Article 22 lives and where the case law shifted in 2025.
GDPR Article 22 restricts decisions "based solely on automated processing, including profiling" that produce "legal effects" or "similarly significantly affect" the data subject. For a long time the operative reading was narrow: the recommender had to make the final decision, with no human in the loop, before Article 22 applied. A human reviewer at the end of the chain was the standard work-around.
The CJEU's 7 December 2023 ruling in C-634/21 SCHUFA was the first crack in that reading. The Court held that a credit-scoring activity by a credit reference agency could itself constitute automated decision-making under Article 22 when the score significantly influences a downstream decision by the contracting party. The 27 February 2025 judgment in C-203/22 Dun & Bradstreet Austria walked the rest of the way: a scoring system that "decisively influences" the human decision is within Article 22 even when the human nominally makes the call. A mobile-phone contract was denied to "CK" on the basis of a Dun & Bradstreet creditworthiness score; the human at the operator nominally made the rejection, but the Court read the scoring activity itself as the decision under Article 22 because the human had no real room to depart from it.
Dun & Bradstreet did two further things that matter for any Tier 3 recommender. First, it said that the right to "meaningful information about the logic involved" under Article 15(1)(h) requires the controller to explain the procedure and principles actually applied. The data subject has to be able to understand which personal data were used and how they shaped the result, but the controller does not have to publish the full algorithm or the mathematical formulas. Bare algorithmic disclosure is not intelligible enough; raw weights and parameters are not what the Article requires. Second, it rejected the blanket trade-secret exemption: a controller claiming trade secrets must disclose the protected information to the supervisory authority or court for an independent balancing exercise, not refuse the request outright.
What this means in practice for a recommender system that determines access to credit, employment, insurance, housing, or any essential service: assume Article 22 applies. Build the human-in-the-loop with genuine room to depart from the recommendation (a rubber-stamp reviewer is not "meaningful human involvement" under either SCHUFA or Dun & Bradstreet). Design the explanation so it covers the procedure and principles, not the weights. Document the override rate as the key audit metric. If the human reviewer ratifies the recommendation more than ~95 percent of the time, the recommendation has effectively become the decision and Article 22 is in play.
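The override-rate metric itself is a few lines. The log format below is hypothetical; the point is that ratification above ~95 percent is a measurable signal that the human review has stopped being meaningful.

```python
# Hypothetical decision log: one entry per case, recording what the system
# recommended and what the human reviewer actually decided.

def override_rate(decisions: list[dict]) -> float:
    """Fraction of recommendations the human reviewer rejected or modified."""
    overridden = sum(1 for d in decisions if d["human_outcome"] != d["recommendation"])
    return overridden / len(decisions)

log = [
    {"recommendation": "deny",    "human_outcome": "deny"},
    {"recommendation": "deny",    "human_outcome": "approve"},  # an override
    {"recommendation": "approve", "human_outcome": "approve"},
]

rate = override_rate(log)
if rate < 0.05:  # reviewer ratifies more than ~95% of the time
    print(f"override rate {rate:.1%}: human review looks like a rubber stamp")
```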
The AI Act layer sits on top. Annex III lists eight categories of high-risk use, including "AI systems intended to be used to evaluate the creditworthiness of natural persons or establish their credit score" (Annex III §5(b)), employment decisions (§4), access to essential public and private services (§5(a)), and education and vocational training (§3). Article 6 sets the classification rule. Article 6(3) provides a derogation that lets an Annex III system escape high-risk classification where it does not pose a significant risk because it only performs a narrow procedural task, improves the result of a previously completed human activity, detects decision-making patterns without replacing the human assessment, or carries out a preparatory task. But the same Article 6(3) contains an override clause: an AI system referred to in Annex III is always considered high-risk where it performs profiling of natural persons. The derogation does not apply.
For a recommender system that profiles users and operates in any Annex III domain, this means there is no escape from high-risk classification. The full set of high-risk obligations (conformity assessment before deployment, technical documentation, data quality and governance, human oversight, post-market monitoring, registration in the EU database) takes effect on 2 August 2026. The deadline is now.
Tier 3 also picks up the Fundamental Rights Impact Assessment (FRIA) under AI Act Article 27 for deployers in scope (public bodies and certain private entities). The FRIA must be in place before first use of the high-risk system and must "complement" any DPIA you have already done. In practice, the FRIA and the DPIA share most of their inputs and are best built together; treating them as separate documents is the failure mode.
The check that fits Tier 3: measure the override rate (how often the human reviewer rejects or modifies the recommendation), document the explanation that satisfies Dun & Bradstreet, and confirm that your Annex III classification accounts for the profiling override in Article 6(3). If any of those three is missing, the system is not ready for August 2026.
If minors can access the service, the rules tighten regardless of which tier the recommender sits in. Two anchors matter.
DSA Article 28(2) prohibits online platforms from presenting targeted advertising based on profiling when they are aware "with reasonable certainty" that the user is a minor. The threshold is reasonable certainty, not proof: the obligation is triggered as soon as the platform holds information from which it can conclude that the user is under 18.
The European Commission's Guidelines on the protection of minors under Article 28(1) DSA of 14 July 2025 are the operational document. The guidelines specify that platforms must apply the most restrictive default settings for minors, must not enable behavioural profiling by default, must prioritise explicit user preferences (selected interests, direct feedback) over behavioural profiling, and should only use minors' activity across or beyond the platform when it serves the best interests of the minor. The guidelines also push toward making non-profiling recommender options the default for minors where appropriate, which effectively extends DSA Article 38 (a VLOP-only obligation in the legal text) into a baseline for any platform that has minor users.
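A sketch of those defaults applied at session start, assuming a hypothetical settings model. The anchors are Article 28(2) (no profiling-based advertising for known minors) and the July 2025 guidelines (most restrictive defaults, behavioural profiling off by default).

```python
# Hypothetical settings model; the field names are illustrative.

def apply_defaults(user_is_minor: bool) -> dict:
    if user_is_minor:
        return {
            "feed": "non_profiling",         # explicit user preferences only
            "profiling_ads": False,          # Article 28(2): hard prohibition
            "behavioural_profiling": False,  # off by default per the guidelines
            "autoplay": False,               # most restrictive default settings
        }
    return {
        "feed": "personalised",
        "profiling_ads": True,
        "behavioural_profiling": True,
        "autoplay": True,
    }
```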
For Tier 2 platforms with engagement-optimised feeds and minor user populations, the TikTok preliminary findings of 6 February 2026 are the case to watch. The Commission's argument is that infinite scroll combined with personalised recommendations creates a systemic risk to the mental and physical well-being of minors that the platform did not adequately assess. The remedies the Commission proposed (disabling infinite scroll, screen-time breaks, recommender adaptation) are the operational template for any platform in the same shape. The check that fits the minors layer: when the service identifies a user as a minor with reasonable certainty, does it default to a non-profiling feed, switch off profiling-based advertising, and apply the most restrictive settings without the minor having to find them?
The four checks above are not a substitute for the tier walk, but they are the practical artefact that comes out of it.
The August 2026 AI Act deadline pulls a Tier 3 system across the line whether or not the team is ready. The DSA enforcement timeline (X €120M on 5 December 2025, TikTok preliminary findings on 6 February 2026) shows that the Commission is treating recommender-system obligations as live, not aspirational. For any team building or deploying a recommender that touches EU users in 2026, the tier walk above is the conversation worth having before the next sprint plan, not after the first complaint.