Between January 2025 and February 2026, 20 documented AI app breaches exposed hundreds of millions of records. Four configuration mistakes explain nearly all of them.
Over that span, security researchers documented 20 data breaches at AI-powered applications. Chat & Ask AI exposed 406 million records because its Firebase rules were set to public read. McDonald's AI hiring chatbot left 64 million job applicant records accessible behind the password "123456." An AI children's toy exposed 50,000 chat transcripts because any Gmail account was granted admin access.
Nobody bypassed encryption. Nobody exploited a zero-day. In almost every case, the data was sitting in the open. The attacker's most complex tool was a web browser.
Three independent research projects converged on the same picture. CovertLabs scanned 198 iOS AI apps and found 98.9% had Firebase misconfigurations. Cybernews analyzed 38,630 Android AI apps and found 72% contained hardcoded secrets. Escape.tech audited 5,600 apps built with AI coding tools and found over 2,000 vulnerabilities. IBM's 2025 Cost of a Data Breach Report puts the AI-specific average at $4.80 million per incident, affecting 73% of companies surveyed. The breaches are not isolated incidents. They are the visible part of a configuration-layer collapse across AI apps shipped in 2025.
Four root causes explain nearly all of them. Each one has a fix that takes minutes.
Firebase ships with test-mode security rules: allow read, write: if true. These rules are intended for the first hour of development. They mean anyone on the internet can read and write your entire database.
Chat & Ask AI (406 million records) had exactly this configuration in production. The Tea dating app leaked 72,000 government IDs and 1.1 million private messages through the same misconfiguration. When researchers scanned 198 iOS AI apps, 196 had Firebase misconfigurations. 42% of the exposed Firebase databases showed evidence that attackers had already found them.
The fix is two minutes in the Firebase Console.
rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    match /users/{userId}/{document=**} {
      allow read, write: if request.auth != null
                         && request.auth.uid == userId;
    }
  }
}
Apply equivalent rules to Realtime Database, Storage, and Functions separately. Each has its own ruleset. Each defaults to permissive.
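For Storage, an equivalent owner-only ruleset looks like the sketch below (the /users/{userId} path layout is an assumption; mirror your actual bucket structure):

```
rules_version = '2';
service firebase.storage {
  match /b/{bucket}/o {
    // Only the authenticated owner can read or write under their own prefix
    match /users/{userId}/{allPaths=**} {
      allow read, write: if request.auth != null
                         && request.auth.uid == userId;
    }
  }
}
```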
Run the open-source Firehound scanner against your own app bundle before launch. It enumerates the Firebase configuration an attacker could pull from your client and tries the obvious read attacks. If it finds anything, fix it before you ship. The tool exists because the failure mode is so common, and the cost of running it once is roughly the cost of opening the GitHub page.
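A cruder spot check you can run yourself is an unauthenticated REST read against your Realtime Database. With locked-down rules this returns a permission error; with open rules it dumps your data. The project ID below is a placeholder:

```shell
# Placeholder project ID -- substitute your own Firebase project
PROJECT_ID="your-project-id"
DB_URL="https://${PROJECT_ID}-default-rtdb.firebaseio.com/.json"

# Unauthenticated read of the database root via the REST API.
# Secure rules answer with a permission error; open rules return the data.
curl -s --max-time 10 "$DB_URL" || true   # tolerate offline environments
```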
Supabase tables have RLS disabled by default. Without RLS, the Supabase anon key (which is in your client-side JavaScript) grants read and write access to every row in every table.
Lovable, the AI app builder, had 303 Supabase endpoints across 170+ apps with no RLS, exposing PII, payment data, and developer API keys (CVE-2025-48757). Moltbook, an AI agent platform whose founder publicly said he wrote zero lines of code, exposed 4.75 million records, including 1.5 million API tokens, within three days of launch. The fix is a few SQL statements.
-- Enable RLS
ALTER TABLE your_table ENABLE ROW LEVEL SECURITY;
-- Users can only read their own rows
CREATE POLICY "Users read own data"
ON your_table FOR SELECT
USING (auth.uid() = user_id);
-- Users can only insert their own rows
CREATE POLICY "Users insert own data"
ON your_table FOR INSERT
WITH CHECK (auth.uid() = user_id);
Verify that the only Supabase key in your client-side JavaScript is the anon key. The service role key should never leave your server. Test your policies by simulating unauthenticated and cross-user queries. Supabase's production checklist walks through each step.
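One way to run that simulation from the outside is against the auto-generated REST API. The project ref, key, and table name below are placeholders:

```shell
SUPABASE_URL="https://your-project-ref.supabase.co"   # placeholder
ANON_KEY="your-anon-key"                              # placeholder

# Unauthenticated read through the auto-generated REST API.
# With RLS on and owner-only policies this returns [], never other users' rows.
REST_URL="${SUPABASE_URL}/rest/v1/your_table?select=*"
curl -s --max-time 10 "$REST_URL" \
  -H "apikey: ${ANON_KEY}" \
  -H "Authorization: Bearer ${ANON_KEY}" || true   # tolerate offline environments
```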
The third root cause is embedding credentials in client-side code or application binaries. A Cybernews audit of 38,630 Android AI apps found 72% contained hardcoded secrets, averaging 5.1 per app. That is 197,092 unique secrets in the wild: Google Cloud credentials, Stripe payment keys, AWS access keys, communication platform tokens. Wondershare RepairIt had hardcoded cloud SAS tokens (CVSS 9.1 and 9.4) that allowed attackers to modify the AI models the app auto-downloads and executes.
Search your codebase for hardcoded credentials. Run an automated scanner on every commit.
# Install and run trufflehog
trufflehog filesystem . --only-verified
# Or gitleaks
gitleaks detect --source=.
Move all secrets to environment variables or a secrets manager. Never commit .env files. Store only .env.example with placeholder values. Rotate any key that has ever appeared in version control. The Cybernews audit found that many exposed keys in Android apps were still active. Assume leaked keys are compromised.
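If a .env file has already been committed, the minimal cleanup (sketched here in a throwaway git repo, with placeholder values) looks like this. Note that removing the file from tracking does not remove it from history, which is why rotation is mandatory:

```shell
# Demo in a throwaway repo; the key value is a placeholder
tmp=$(mktemp -d) && cd "$tmp"
git init -q .
printf 'OPENAI_API_KEY=sk-live-secret\n' > .env
git add .env
git -c user.email=demo@example.com -c user.name=demo commit -qm "oops: committed secrets"

# The fix: stop tracking .env, ignore it, keep a placeholder template
git rm -q --cached .env
printf '.env\n' > .gitignore
printf 'OPENAI_API_KEY=replace-me\n' > .env.example
git add .gitignore .env.example
git -c user.email=demo@example.com -c user.name=demo commit -qm "Move secrets out of version control"

# .env is no longer tracked -- but it is still in history, so rotate the key
git ls-files
```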
The fourth root cause is services exposed with no authentication at all. DeepSeek left a ClickHouse database on the internet with no password. McDonald's McHire had a test admin account with the credentials 123456/123456, active since 2019. Bondu's AI children's toy granted admin access to any Gmail account. The Chattee AI companion app ran an unauthenticated Kafka broker exposing 43 million intimate messages.
The pattern is uniform: a service intended for internal use was deployed to a public network without an authentication layer, and stayed there long enough for someone to find it. The fix is the same in every case. Default-deny on the network boundary. Real authentication on the application boundary. Test accounts get deleted before launch, not after the breach.
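The DeepSeek case, for example, could have been caught with a one-line probe: ClickHouse's HTTP interface answers queries on port 8123 when authentication is off. The hostname below is a placeholder:

```shell
HOST="clickhouse.internal.example.com"   # placeholder -- your database host

# ClickHouse's HTTP interface listens on 8123. If this returns a list of
# databases instead of an authentication error, the service is wide open.
curl -s --max-time 10 "http://${HOST}:8123/" --data-binary "SHOW DATABASES" || true
```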
Before shipping any AI app, verify these ten things:
None of these is hard. None requires security expertise. They just require someone to check before shipping. If your team uses an AI coding assistant, the last item on the checklist is the new one: the AI optimized for code that works, not code that is secure. Verify the security configuration manually.
I think vibe coding is going to be the dominant security failure mode of 2026.
Escape.tech's audit of 5,600 apps built with AI coding tools found over 2,000 vulnerabilities. Moltbook is the clearest single case: zero lines of hand-written code, compromised within 3 days of launch. The pattern is consistent across the 20 breaches: the developer evaluates by running the app, not by reading the code. If the UI works, it ships. Nobody checks whether RLS is enabled or Firebase rules are locked down.
Whether AI coding tools will start defaulting to secure configurations under regulatory pressure, or whether they will keep optimizing for speed-to-demo, is genuinely uncertain. The current trajectory points to the latter. Lovable, Cursor, and Replit Agent each ship with permissive defaults that get committed unchanged. Until the defaults change, the configuration check has to be human.
A second thing the data shows: speed of fix matters. 42% of the exposed Firebase databases in the CovertLabs scan showed evidence of prior compromise. The breach window for many of these apps was weeks or months. The same root causes will repeat as long as AI tools generate permissive configurations and developers ship without reviewing them.
A third thing, looking up the AI supply chain rather than at the apps themselves: model providers are not exempt from this pattern. On 26 November 2025, OpenAI confirmed a vendor compromise that exposed names, emails, locations, and technical details of OpenAI's business customers. The breach was at the vendor, not at OpenAI. The lesson is that the "AI app" security boundary now includes every sub-processor in the chain. If you ship an AI feature on a third-party model, your blast radius includes that provider's vendors.
Children's data carries extra consequences. Bondu exposed 50,000 chat transcripts from children aged 3 to 9 with an AI toy. A US Senator sent a formal letter demanding answers. The FTC finalized major COPPA Rule amendments in April 2025 with a compliance deadline of April 2026, prohibiting indefinite retention of children's data and requiring separate parental consent for AI training. If your app is accessible to minors, verify COPPA compliance before launch — and the GDPR equivalent if any of your users are in the EU.
If you have an AI app in production, check one thing first. Are your database security rules still in development mode? For Firebase, check the rules in the console. For Supabase, check RLS status on every table. This is a five-minute check. If you find test-mode rules in production, fix them before your next deployment. Every breach on the 20-incident list was preventable with checks that take less time than the breach takes to detect.
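For the Supabase half of that check, Postgres exposes RLS status directly in the pg_tables catalog view; a query like this (the connection string is a placeholder) lists every public table with RLS still off:

```shell
# Connection string is a placeholder -- use your Supabase database URL
DB_URL="postgresql://postgres:password@db.your-project-ref.supabase.co:5432/postgres"

# List every public table with row level security still disabled
psql "$DB_URL" -c \
  "SELECT tablename FROM pg_tables
   WHERE schemaname = 'public' AND NOT rowsecurity;" || true   # psql may be absent
```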