What the DSA, GDPR Article 8, the AI Act, and COPPA 2.0 require when your AI feature is accessible to minors. Walks the four regimes one at a time, with 2025-2026 enforcement and the AI-training-consent rule most builders missed.
Most AI products are not designed for children. Most AI products are accessible to children anyway. The test that matters in 2026 is not whether you intended children to use the service. It is whether children can reach it.
Two recent enforcement events make the point sharper than any regulatory text.
On 19 May 2025, Italy's Garante fined the developer of the Replika AI chatbot €5 million. The company had declared that minors were excluded from the service. But until February 2023 the only "verification" was a name and an email address, which any 11-year-old can provide. The Garante's investigation found that minors had been creating accounts and engaging with a chatbot designed for emotional companionship. The fine was not for marketing to children. It was for failing to keep them out of a service that was foreseeably going to attract them. In parallel, the Garante opened a new investigation into how Replika trained its underlying model, which is likely to feed into a follow-on action.
On 29 January 2026, the security researchers Joseph Thacker and Joel Margolis disclosed that bondu, a company selling an interactive AI plush toy aimed at children aged three to nine, had a web console open to anyone with a Google account. About 50,000 transcripts of children's conversations with the toy were accessible, along with names and dates of birth. WIRED published the story the same day. On 3 February 2026, US Senator Maggie Hassan sent the company's CEO a formal letter demanding answers and gave the company until 23 February to respond. The CEO took the console offline within minutes of disclosure. The transcripts had still been there. The data model that retained them had been the design, not the bug.
Both incidents share a common shape. A product builder thought "we don't target children" was a defence. The regulator, the legislator, and the press said it was not. The four regimes that follow are the rules that turn that intuition into a duty.
Article 28(2) of the Digital Services Act prohibits showing minors advertising "based on profiling" once an online platform is "aware with reasonable certainty" that the recipient is a minor. There is no balancing test. There is no legitimate-interest override. The duty falls on every online platform under the DSA's scope, which is most consumer-facing services with users in the EU.
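In implementation terms, the rule is a hard gate in the ad-serving path, not a policy document. A minimal sketch, assuming a hypothetical `minorFlag` set by whatever age-assurance signal the platform trusts; the types and helper names are illustrative, not from any real ad stack:

```typescript
// Illustrative only: types and function names are assumptions, not a real ad SDK.
interface User {
  id: string;
  // Set once the platform is "aware with reasonable certainty" the user is a minor.
  minorFlag: boolean;
}

// Contextual targeting uses only the surrounding content, no personal data.
function pickContextualAds(context: string[]): string[] {
  return context.map((topic) => `contextual:${topic}`);
}

// Profiling-based targeting: only ever reachable for users not flagged as minors.
function pickProfiledAds(userId: string, context: string[]): string[] {
  return [`profiled:${userId}`, ...context.map((topic) => `profiled:${topic}`)];
}

// DSA Art. 28(2): a known minor never reaches the profiling path. No override.
function selectAds(user: User, context: string[]): string[] {
  return user.minorFlag
    ? pickContextualAds(context)
    : pickProfiledAds(user.id, context);
}
```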
On 14 July 2025 the European Commission published its long-awaited Guidelines on Article 28(1) DSA, the broader child-protection clause. The guidelines are not legally binding but the Commission will use them to assess compliance, which in practice makes them the operational test.
The guidelines translate into practical duties a developer can act on, and ignoring them is expensive. The penalty band for DSA violations runs up to 6% of global annual turnover, and for very large online platforms (VLOPs) the Commission enforces it directly. The Commission has been willing to open formal proceedings against major platforms on protection-of-minors grounds, and the 2025 guidelines give national digital services coordinators a clearer playbook to do the same.
GDPR Article 8 says that an information society service offered directly to a child can rely on the child's consent only if the child is at least 16. Below 16, the processing is lawful only if the parental responsibility holder consents or authorises it. The complication, and this is the part that breaks every cross-EU launch plan, is that Article 8(1) lets each member state lower the threshold to as low as 13.
In practice, that means twenty-seven national choices spread across four thresholds. Belgium, Denmark, Estonia, Finland, Latvia, Malta, Portugal, Sweden, and the UK (under UK GDPR) sit at 13. Italy, Austria, Bulgaria, Cyprus, Spain, and a few others sit at 14. France and Greece use 15. Germany, Ireland, Luxembourg, the Netherlands, Romania, and Slovakia stay at the GDPR default of 16.
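The boring, correct implementation is a per-country lookup with the GDPR default as the fallback. A sketch using the thresholds listed above; the map is illustrative, not exhaustive, and worth verifying against current national law before relying on it:

```typescript
// GDPR Art. 8 digital-consent ages as listed above; anything not mapped falls back to 16.
const DIGITAL_CONSENT_AGE: Record<string, number> = {
  BE: 13, DK: 13, EE: 13, FI: 13, LV: 13, MT: 13, PT: 13, SE: 13, GB: 13, // UK GDPR
  IT: 14, AT: 14, BG: 14, CY: 14, ES: 14, // plus a few others at 14
  FR: 15, GR: 15,
  DE: 16, IE: 16, LU: 16, NL: 16, RO: 16, SK: 16,
};

function needsParentalConsent(countryCode: string, age: number): boolean {
  const threshold = DIGITAL_CONSENT_AGE[countryCode] ?? 16; // unknown country: assume the strictest default
  return age < threshold;
}

console.log(needsParentalConsent("FR", 14)); // true  (France: threshold 15)
console.log(needsParentalConsent("ES", 14)); // false (Spain: threshold 14)
```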
Two practical points that often get missed.
The parental consent has to be "verifiable" in the meaningful sense. A checkbox saying "I am the parent" does not qualify. Email-loop confirmation, payment-card verification, an SMS challenge to a parent's number, or a credit-bureau-style identity check are the methods most national authorities have endorsed in their guidance. The choice is yours, the proportionality has to be defended, and the audit trail has to exist.
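What "the audit trail has to exist" means in practice is a record you could hand to a supervisory authority: which method was used, why it was proportionate, when consent was given, and what evidence backs it. A minimal sketch of such a record; the shape and field names are assumptions, not a prescribed format:

```typescript
// Illustrative consent-evidence record; field names are assumptions, not a mandated schema.
type ConsentMethod = "email_loop" | "payment_card" | "sms_challenge" | "identity_check";

interface ParentalConsentRecord {
  childUserId: string;
  parentContactHash: string;   // hashed contact point, so the record itself stays minimal
  method: ConsentMethod;       // which verification method was actually used
  proportionalityNote: string; // one sentence defending why this method fits this service's risk
  grantedAt: string;           // ISO 8601 timestamp of the consent
  evidenceRef: string;         // pointer to stored proof, e.g. the verification transaction id
  withdrawnAt?: string;        // withdrawals have to be recorded too
}

const example: ParentalConsentRecord = {
  childUserId: "child-123",
  parentContactHash: "sha256:...",
  method: "email_loop",
  proportionalityNote: "Low-risk feature; email-loop confirmation is proportionate.",
  grantedAt: new Date().toISOString(),
  evidenceRef: "consent-event-456",
};
```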
Article 8 is not the only clause that bites for children. Article 22 (automated decision-making with legal or similarly significant effect) is heavier when the data subject is a child, because the standard of "necessity" is harder to meet. The CJEU's SCHUFA ruling (C-634/21, 7 December 2023) and the Dun & Bradstreet ruling (C-203/22, 27 February 2025) raise the bar on what counts as "meaningful information about the logic" of automated decisions, and the bar is highest where the affected user is a minor.
Article 5(1)(b) of the EU AI Act prohibits the placing on the market, putting into service, or use of an AI system that "exploits any of the vulnerabilities of a natural person or a specific group of persons due to their age, disability or a specific social or economic situation, with the objective, or the effect, of materially distorting the behaviour of that person or a person pertaining to that group in a manner that causes or is reasonably likely to cause that person or another person significant harm."
The clause has been in force since 2 February 2025. The penalty band for prohibited practices under Article 99(3) is the highest in the regulation: up to €35 million or 7% of total worldwide annual turnover, whichever is higher. There is no notified-body workaround and no good-faith carve-out.
For builders, three sub-questions matter.
First, what counts as a "vulnerability due to age." The provision is not limited to under-18s but the Commission's commentary, the AI Office's FAQ, and the academic reading all converge on the view that minors are the central case. Limited capacity to recognise manipulative AI practices is a cited example. So is inability to assess long-term consequences of in-app choices.
Second, what counts as "materially distorting behaviour." A recommender that nudges a 12-year-old towards in-app purchases through artificial urgency is a clear case. A chatbot that escalates emotional engagement to increase session length with a 14-year-old is a less clear case but has been flagged by the European Commission's Joint Research Centre in its 2025 work on minors and generative AI.
Third, what counts as "significant harm." The threshold here is contested. Financial harm is the easiest to argue. Psychological harm is the hardest. The intermediate ground (well-being, social isolation, sleep disruption, exposure to age-inappropriate content) is where most enforcement is likely to land in the next 18 months. (I think the regulators will be more aggressive on Article 5(1)(b) than the AI Act's two-tier "high-risk" framework would suggest, because the prohibited-practices band is the only place the AI Act lets DPAs and market surveillance authorities act without waiting for the August 2026 obligations to bite. There is no clean test case for it yet, which is itself the point.)
Education is a separate, additional consideration. AI systems used in admissions, learning-outcome evaluation, education-level assessment, or test-behaviour monitoring fall under Annex III category 3 of the AI Act and become high-risk on 2 August 2026. High-risk triggers conformity assessments, technical documentation, human oversight, and Article 12 logging. If your AI feature is in EdTech, you are on both tracks at once.
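For the EdTech case, the Article 12 logging duty comes down to recording, automatically, the events you would need to reconstruct a decision later. A sketch of what one log entry might capture; the fields are an assumption about what an audit would want to see, not the text of Article 12:

```typescript
// Illustrative event record for a high-risk EdTech AI system; the fields are assumptions.
interface InferenceLogEntry {
  timestamp: string;      // when the system was used (ISO 8601)
  systemVersion: string;  // which model/system version produced the output
  useCase: "admissions" | "learning_outcome" | "level_assessment" | "test_monitoring";
  inputRef: string;       // reference to the input data, not the raw data itself
  outputSummary: string;  // what the system recommended or decided
  humanReviewer?: string; // who exercised human oversight, if anyone did
  overridden: boolean;    // whether the human overrode the system's output
}

// In production this would write to an append-only, tamper-evident store.
function appendLog(log: InferenceLogEntry[], entry: InferenceLogEntry): void {
  log.push(entry);
}
```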
The FTC finalised the most significant amendments to the Children's Online Privacy Protection Rule since 2013 on 16 January 2025. The Final Rule was published in the Federal Register on 22 April 2025 and became effective on 23 June 2025. The compliance deadline for the substantive obligations is 22 April 2026. Operators of websites and online services directed to children under 13, or with actual knowledge that they are processing the data of children under 13, have to be ready by then.
Three changes matter for AI builders specifically.
The first is the expanded definition of "personal information." It now explicitly includes biometric identifiers used for automated recognition: voiceprints, facial templates, retina and iris patterns, fingerprints and handprints, genetic data including DNA sequences, gait patterns and faceprints. If your AI feature processes a child's voice for any purpose (the bondu plush toy is the obvious case, but think also of in-app voice assistants, AI tutors, accessibility features, voice-cloning toys), that voiceprint is now COPPA-covered personal information. The voice is the data.
The second is the retention rule. COPPA 2.0 prohibits the indefinite retention of children's personal information and requires operators to maintain a published, written data retention policy that specifies what is collected, why, and the time-frame for deletion. Personal information from a child cannot be kept longer than reasonably necessary for the documented purpose. This is closer to GDPR's storage-limitation principle than COPPA used to be, and it lands on operators that may never have built a deletion job at all.
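A retention policy only counts if something executes it. A sketch of a scheduled deletion job keyed to documented purposes, folding in the voice artifacts from the previous point; the categories and periods are placeholders, not guidance on what counts as "reasonably necessary":

```typescript
// Illustrative retention schedule; periods are placeholders, not legal advice.
const RETENTION_DAYS: Record<string, number> = {
  voice_audio: 1,          // raw audio of a child's voice interaction
  voice_transcript: 30,    // transcripts kept only as long as the documented purpose needs
  voiceprint_embedding: 0, // biometric identifier: delete as soon as the purpose is served
  account_profile: 365,
};

interface StoredItem {
  id: string;
  category: keyof typeof RETENTION_DAYS;
  collectedAt: Date;
}

function isExpired(item: StoredItem, now: Date): boolean {
  const ageDays = (now.getTime() - item.collectedAt.getTime()) / 86_400_000;
  return ageDays > RETENTION_DAYS[item.category];
}

// Run on a schedule; keep a log of what was deleted, not the data itself.
function runDeletionJob(items: StoredItem[], now: Date = new Date()): StoredItem[] {
  const kept = items.filter((item) => !isExpired(item, now));
  console.log(`deletion job removed ${items.length - kept.length} expired items`);
  return kept;
}
```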
The third is the separate opt-in for third-party disclosure. The amended Rule requires verifiable parental consent for disclosing a child's personal information to third parties, obtained separately from the consent to collect, unless the disclosure is integral to the nature of the service. If your AI feature ships a child's conversation data to an outside model provider, for training or anything else, that separate consent is the rule most builders missed.

The fourth practical point, less new but worth restating: parental consent under COPPA still has to be "verifiable" in the FTC sense. A checkbox is not enough. Acceptable methods include credit-card or debit-card verification, government-ID matching, video conference with trained personnel, and the "email-plus" method (parental email confirmation plus a follow-up confirmation step). The FTC has not loosened these methods in 2025.
US federal action sits on top of a fast-moving state layer. The piece a builder cannot ignore right now is California Senate Bill 243, which became effective on 1 January 2026. SB 243 is the first state law in the US that directly regulates AI companion chatbots when used by minors, imposing disclosure and safety-protocol obligations on operators and, critically, creating a private right of action.
The private right of action is the part most companies underweight. It means a plaintiff's lawyer in California can bring a class action without waiting for the state attorney general to act. The Replika fine was a regulator decision. The California regime gives any affected family a direct enforcement path.
Federal bills moving through Congress as of April 2026 include the KIDS Act, the SAFEBOTs Act, and the Youth AI Privacy Act. None has passed at the time of writing, and the timing on any of them is genuinely unclear, so the right move is to track them rather than build to them. The harder fact is that the Texas, New York, Connecticut, and Colorado state legislatures are all working on parallel bills, several of which would impose obligations broader than COPPA's. The set of compliance obligations for US AI products with under-18 users is going to look very different by the end of 2026 than it did at the end of 2025.
Every regulation in this set requires you to know, with some confidence, whether a user is a minor. Knowing requires asking. Asking requires collecting. Collecting requires lawful basis, minimisation, and retention rules. The control that protects children's data is itself a processing operation that has to be designed.
The EDPB's Statement 1/2025 on Age Assurance, published in February 2025, is the document that sets the principle. The headline rule: collect only whether a threshold is met, never the exact age. A binary "user is over 16" answer is enough for almost every Article 8 use case. Storing a date of birth is not minimisation, it is excess.
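A sketch of the minimisation point: compute the flag at the moment of the check, store the flag, and let the date of birth go. The threshold would come from the per-country lookup sketched earlier; the field names here are assumptions:

```typescript
// Derive a binary age-assurance result; the date of birth itself is never persisted.
interface AgeAssuranceResult {
  overThreshold: boolean; // the only value that gets stored
  threshold: number;      // which threshold was checked (13-16 depending on the country)
  checkedAt: string;      // when the check happened, for the audit trail
}

function checkAge(dateOfBirth: Date, threshold: number): AgeAssuranceResult {
  const now = new Date();
  let age = now.getFullYear() - dateOfBirth.getFullYear();
  const birthdayPassed =
    now.getMonth() > dateOfBirth.getMonth() ||
    (now.getMonth() === dateOfBirth.getMonth() && now.getDate() >= dateOfBirth.getDate());
  if (!birthdayPassed) age -= 1;
  // dateOfBirth goes out of scope here; only the boolean result is written anywhere.
  return { overThreshold: age >= threshold, threshold, checkedAt: now.toISOString() };
}
```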
The statement runs through the methods in roughly increasing order of intrusiveness: self-declaration, capacity-based assessment, age-estimation by AI on biometric inputs, age-verification with hard identity. The principle is that you use the least intrusive method that is proportionate to the risk. For a low-risk service (a recipe app with optional AI chat), self-declaration plus a friction layer is usually defensible. For a high-risk service (a companion chatbot that escalates emotional engagement), the bar is higher, but "build a biometric database to estimate age" is the answer the EDPB explicitly rejects.
Three concrete things to avoid.
Do not store a date of birth when a binary is enough. The privacy impact compounds quickly because date of birth is a quasi-identifier in its own right.
Do not estimate age by collecting selfie biometrics through a third-party vendor without a Data Protection Impact Assessment that names the vendor and the cross-border transfer. The EDPB Statement 1/2025 specifically warned against this pattern.
Do not build the age check on the user-facing side without also building the consequences (the non-profiling default, the parental consent flow, the deletion job, the audit log) on the back end. The check produces a flag. The flag has to do something.
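A sketch of that fan-out, wiring the flag to the four back-end consequences named above; the handler names are assumptions about systems you would already have:

```typescript
// Illustrative: the minor flag has to trigger real back-end behaviour, not just sit in a column.
interface MinorFlagConsequences {
  disableProfiledAds: (userId: string) => void;           // the non-profiling default (DSA Art. 28(2))
  startParentalConsentFlow: (userId: string) => void;     // GDPR Art. 8 / COPPA consent path
  applyChildRetentionSchedule: (userId: string) => void;  // shorter deletion timelines
  writeAuditLog: (userId: string, event: string) => void;
}

function onMinorFlagSet(userId: string, consequences: MinorFlagConsequences): void {
  consequences.disableProfiledAds(userId);
  consequences.startParentalConsentFlow(userId);
  consequences.applyChildRetentionSchedule(userId);
  consequences.writeAuditLog(userId, "minor_flag_set");
}
```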
Block out a focused half-day on the children's-data layer of any AI feature you are about to ship to users in the EU, the UK, or the US. Walk the four regimes in order, one Slack thread per regime, each with a named owner whose job it is to answer whether the feature meets that regime's duties before launch.