Four named confidentiality failure modes for AI meeting notetakers, anchored in the Brewer v Otter and Cruz v Fireflies 2025 cases and the EU consent stack.
In February 2025, a man named Justin Brewer was on a sales call in San Jacinto, California. The other side of the call was using Otter. Brewer was not an Otter customer, had never agreed to Otter's terms of service, and had no idea his voice was being recorded and uploaded to a third-party SaaS account in the United States. By August 2025 he was the lead plaintiff in a federal class action.
In November 2025, a woman named Katelin Cruz joined a virtual meeting where the host was using Fireflies.AI. On 18 December 2025, the United States District Court for the Central District of Illinois received Cruz's complaint, brought under the Illinois Biometric Information Privacy Act (BIPA). It alleged that Fireflies recorded, analysed, transcribed, and stored the voiceprints of all meeting participants, including the ones who had never created an account, never agreed to terms of service, and never given written consent for biometric data collection.
These two cases are not the whole story, but they are the way to read the meeting-transcription problem in April 2026. The pattern they describe is not new. AI notetakers have been auto-joining client meetings across consultants, lawyers, sales teams, and small businesses for at least three years. What is new is that US courts are starting to ask hard questions, and the EU framework (GDPR plus the ePrivacy Directive) has actually been strict on this all along. Most teams just have not applied it.
This piece is the failure-mode version. Four named confidentiality breaks that AI meeting notetakers produce, each anchored in either a 2025 lawsuit or an EU regulation, with the workable default at the end.
Before the failure modes, the vendor table. Read the contract for any tool you actually use; the marketing pages move faster than the contracts and sometimes contradict them.
| Tool | Trains on user data? | All-party consent flow | EU residency | 2025 legal exposure |
|---|---|---|---|---|
| Otter | Yes by default (uses "de-identified" recordings); enterprise tier excludes imported documents | Weak. Bot joins meetings; in-meeting banner only | US default | Brewer v Otter, federal class action filed August 2025 in California |
| Fireflies | No (claims 0-day retention; not used for AI training) | Stronger. Pre-meeting opt-out email; bot does not join if anyone opts out | DPF certified | Cruz v Fireflies.AI Corp, BIPA class action filed 18 December 2025 in Illinois |
| Zoom AI Companion | No across all deployment options | In-meeting prompt; account-level controls | Yes (ZM+ default for European customers; ZMO fully managed by Zoom) | None of note as of April 2026 |
| Microsoft Teams Copilot | No (Microsoft commercial DPA terms) | Tenant-controlled; admin can require explicit consent | EU Data Boundary applies | Standard commercial terms |
| Google Meet "take notes for me" | No under Workspace DPA | In-meeting indicator | Workspace regional terms | Standard Workspace terms |
Three observations the table makes obvious. First, the strongest no-training position belongs to Zoom AI Companion, with Microsoft Teams Copilot a close second. Both rely on the platform vendor's broader enterprise contract, which is usually an asset rather than a risk. Second, Otter is the outlier on training, and the de-identification claim is exactly the kind of statement EDPB Opinion 28/2024 (17 December 2024) treats as case-by-case and demanding. Third, Fireflies has the strongest consent-flow design and is still being sued over voiceprints, which tells you something about the gap between consent flows and biometric law.
The first failure mode is the non-user recording: a bot captures someone who never agreed to the tool's terms. It is the failure mode at the heart of the August 2025 Otter complaint, and it is the one the EU framework treats most strictly.
The complaint, filed in the Northern District of California, names plaintiff Justin Brewer and alleges violations of the federal Electronic Communications Privacy Act (ECPA), the federal Computer Fraud and Abuse Act (CFAA), the California Invasion of Privacy Act (CIPA), and California's Unfair Competition Law, plus common-law privacy torts. The factual core is narrow: a sales call in February 2025, an Otter bot joining the call because the other participant used Otter, and a recording uploaded without Brewer's consent. The legal novelty is that Brewer was a non-user. He had no contractual relationship with Otter, no exposure to its terms of service, and no opportunity to opt out before the recording happened.
For the EU read, this is not a hard case. Audio of a person speaking is personal data under GDPR. ePrivacy Directive Article 5(1), implemented in every member state, protects the confidentiality of communications and requires consent from all parties to the recording or interception of a communication. National implementations vary in the details, but the floor is the same: a bot joining a call and showing a banner is not affirmative consent from the other side of the line.
An EU-defensible recording has to clear four conditions, listed here in the order a bot-joins flow breaks them: every participant gets notice before the recording starts; every participant gives affirmative consent, not silence in front of a banner; the consent covers participants who are not users of the tool; and the consent is documented. The Brewer flow fails the first condition, and nothing downstream can repair that.
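The gate these conditions describe can be sketched as a pre-flight check. This is a toy model, not any vendor's API; the `Participant` fields and function names are hypothetical. The key design point is that the check runs over everyone on the call, account holder or not, so non-users cannot fall through.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch of the four-condition gate. Field names are illustrative.
@dataclass
class Participant:
    name: str
    notified_before_recording: bool   # condition 1: prior notice
    gave_affirmative_consent: bool    # condition 2: affirmative act, not a banner
    consent_recorded_at: Optional[str]  # condition 4: documented consent

def recording_is_defensible(participants: list) -> tuple:
    """Return (ok, reasons). The caller must pass EVERY participant on the
    call, including non-users of the tool -- that is condition 3."""
    reasons = []
    for p in participants:
        if not p.notified_before_recording:
            reasons.append(f"{p.name}: no prior notice")
        elif not p.gave_affirmative_consent:
            reasons.append(f"{p.name}: no affirmative consent")
        elif p.consent_recorded_at is None:
            reasons.append(f"{p.name}: consent not documented")
    return (not reasons, reasons)
```

Run against the Brewer fact pattern, the check fails on the non-user at the first condition, which is exactly where the real flow failed.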
The Brewer case's value for an EU compliance read is not the legal theory. It is the operational specificity. Read the complaint, picture the February 2025 sales call, and ask whether your team's recording flow would have caught the non-user problem. For most teams in 2026, the answer is no, because the consent flow lives inside the host's account and the non-user is invisible to it.
The second failure mode is the voiceprint capture. It is at the heart of the December 2025 Fireflies complaint, and it is the one most teams do not realise is governed by biometric law rather than ordinary recording law.
The Cruz complaint was filed on 18 December 2025 in the United States District Court for the Central District of Illinois. The triggering meeting took place on 18 November 2025. The legal hook is the Illinois Biometric Information Privacy Act (BIPA), which has three operational requirements for capturing biometric identifiers: written notice of the collection, written informed consent before the collection, and a published retention schedule with destruction guidelines. The complaint alleges Fireflies satisfied none of the three for non-account-holding meeting participants, and that voiceprints were captured for "all meeting participants, including individuals who never created Fireflies accounts, never agreed to Fireflies' Terms of Service, and never executed any written consent authorising biometric data collection".
For the EU read, this is the harder case for the vendor and the easier case for the controller. A voiceprint is biometric data under GDPR Article 4(14). When used to uniquely identify a person, it becomes special category data under Article 9(1). Article 9(2) lists the limited grounds on which special-category processing is permitted; for a commercial meeting context, the only realistic ground is explicit consent under Article 9(2)(a). "Explicit" is a higher bar than ordinary consent. It requires an affirmative statement, specific to the biometric processing, given before the processing starts. A pre-meeting banner does not meet the standard.
The corollary for the controller (you, not the vendor) is that even if the vendor handles the notice and consent perfectly, you still have to verify that your contract carries the Article 9 commitment through. Most vendor terms address Article 6 personal data processing in the standard DPA section and stay quiet on Article 9. If the contract does not mention biometric data, the vendor's compliance posture for ordinary recording does not extend to voiceprints, and the gap is the controller's problem to close.
The third failure mode is the cross-tenant transcript. The lawsuits do not address it, but it bites hardest in professional services. It is also the failure mode where the vendor table above is misleading, because it happens regardless of which vendor is involved.
A common pattern: your firm is a consultant, your client invites you to a Zoom meeting, the client has Otter (or Fireflies, or Read.AI) configured to auto-join their meetings, the bot joins, the recording happens, the transcript lands in the client's account. Your firm's confidentiality posture is now structurally broken because the recording sits outside your control. You do not own the storage decision. You do not own the retention period. You do not own the access list. If the client's account has a security incident, the recording of your confidential conversation is in scope for the incident. If the client is later subpoenaed in unrelated US litigation, the recording is in scope for discovery.
For lawyers, the cross-tenant transcript problem is the most acute. The duty of confidentiality is older than GDPR, older than ePrivacy, and stricter than both. Most EU bar associations have issued explicit guidance against using unmonitored AI notetakers in client matters, and the reasoning is exactly the cross-tenant problem: privilege does not survive a third-party processor relationship that the lawyer cannot control. The German DAV, the French CNB, the English Law Society, and the Belgian Ordre have all published cautionary notes in 2024 and 2025. The exact wording differs; the practical effect is the same.
For consultants, the analysis is contract law rather than professional code, but the answer is similar. Most consultancy NDAs require the consultant to maintain confidentiality through "appropriate technical and organisational measures", and a tool that is in the client's control rather than the consultant's does not satisfy that. The consultant who lets a client-controlled bot record a sensitive engagement is breaching the NDA whether or not the client clicks consent.
The operational fix for the cross-tenant problem is not a consent flow. It is a contractual carve-out. Either the engagement letter says "no recording in any meeting under this engagement, by either party", or it says "recording only via [named tool] under [named contract]". The third option ("recording allowed at the client's discretion") is the one most teams default to and the one that produces the breach.
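Alongside the contractual carve-out, a lightweight detection step helps: flag invites whose attendee list already contains a known notetaker bot, before the meeting starts. The sketch below is illustrative — the domain list is an assumption that needs maintaining, and how you fetch attendee addresses depends on your calendar API.

```python
# Hypothetical sketch: flag attendees from known notetaker-bot domains.
# The domain list is illustrative, not exhaustive, and will go stale.
KNOWN_BOT_DOMAINS = {
    "otter.ai",
    "fireflies.ai",
    "read.ai",
}

def flag_notetaker_bots(attendee_emails: list) -> list:
    """Return attendee addresses that belong to a known notetaker service,
    matching both the bare domain and any subdomain of it."""
    flagged = []
    for email in attendee_emails:
        domain = email.rsplit("@", 1)[-1].lower()
        if domain in KNOWN_BOT_DOMAINS or any(
            domain.endswith("." + d) for d in KNOWN_BOT_DOMAINS
        ):
            flagged.append(email)
    return flagged
```

A calendar hook that runs this an hour before each external meeting gives you the one thing the contractual carve-out needs to work: advance warning that the other side's bot is coming.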
The fourth failure mode is the training claim: the vendor's assertion that "de-identified" recordings can feed its models without consent. It runs through the Otter case and through every vendor in the meeting category that trains on user data, and it is the failure mode EDPB Opinion 28/2024 spent the most pages on.
The pattern is: a vendor uses recordings to improve its model. The recordings contain personal data. The vendor claims that the data has been "de-identified" or "anonymised" before training, and that the resulting model is therefore not personal data. The Opinion's response is that anonymity claims are case-by-case and that the burden is on the controller to demonstrate the model is not personal data, with reference to the means reasonably likely to be used to re-identify. For voice data, this is an extremely high bar, because voiceprints are themselves identifiers and the re-identification risk from a voice model is structural rather than incidental.
The Opinion is framed around model developers. The deployer-side reading is the same: if a vendor's de-identification claim is the only thing standing between your client conversation and the vendor's model, the claim has to be tested under the Opinion's three-step framework, and the test is hostile. (See the vector embeddings piece for the parallel case on text embeddings; the voice case is similar but with a higher bar because of Article 9.)
For the Otter case specifically, the de-identification claim is one of the points the August 2025 complaint contests directly. The complaint's argument is that Otter's "de-identified" recordings remain identifiable because the speaker's voice is itself the identifier, so the training use of those recordings requires explicit consent under both California law and (by extension) any EU reading. The case will not produce a binding EU precedent regardless of outcome, but the underlying analysis travels.
For most professional services and B2B teams in 2026, three default policies are viable. Two of them work. One of them does not.
Ban. No auto-notetakers in any client meeting. Manual notes only. Boring, low risk, fully consistent with bar association guidance for lawyers, and easy to enforce. The cost is the productivity loss of taking notes by hand. The benefit is the elimination of all four failure modes simultaneously.
Whitelist. The AI feature inside your existing meeting-platform contract is allowed. Everything else is disabled at the IT level. For Zoom shops this means Zoom AI Companion only. For Microsoft shops this means Teams Copilot only. The recording and the transcript live inside the platform contract you already signed and the consent flow is the platform's, not a third-party bot's.
Consent gate. Any approved tool is allowed but only after documented all-party consent for the specific meeting, only for non-confidential content, and only with the cross-tenant question answered in advance. This is the option most teams default to and the option most prone to drift. It is operationally hard, audit-hostile, and almost never survives a real bar association complaint.
The practical default for almost everyone is ban-plus-whitelist. Pick the AI feature already inside your meeting-platform enterprise contract. Disable everything else through your MDM and your meeting platform admin console. (Both Zoom and Microsoft Teams support blocking third-party meeting bots at the admin level, which is the single largest IT control for this category.) For the meetings where ban-plus-whitelist still leaves the cross-tenant problem (the client has Otter; you cannot stop them), add the contractual carve-out to the engagement letter.
Three things to do this week, in order.
First, open the admin console for your meeting platform and verify that third-party meeting bots are blocked at the tenant level. For Zoom, this is in Account Management → Account Settings → Meeting → "Allow meeting participants to use third-party meeting tools". For Microsoft Teams, it is in the Teams admin centre under Meeting Policies → "Allow third-party apps". This is a one-click change and it removes the largest single class of meeting-transcription failure mode without a single conversation about consent.
Second, write a half-page meeting recording policy and add the two-line consent script to your meeting templates. The half-page is what survives a regulator audit; the two lines are what gets used in practice.
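The documented consent that the half-page policy promises has to live somewhere auditable. A minimal sketch, assuming an append-only JSONL file is acceptable for your audit posture — the schema and file name are hypothetical; the point is that the record exists before the recording starts:

```python
import json
from datetime import datetime, timezone
from pathlib import Path

# Hypothetical sketch: append-only consent log, one JSON line per meeting.
LOG_PATH = Path("meeting_consent_log.jsonl")

def log_consent(meeting_id: str, tool: str, consenting_parties: list) -> dict:
    """Write one consent record before the recording starts and return it."""
    entry = {
        "meeting_id": meeting_id,
        "tool": tool,
        "consenting_parties": consenting_parties,
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }
    with LOG_PATH.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return entry
```

An append-only line-per-meeting file is deliberately boring: no database to misconfigure, trivially greppable in an audit, and the timestamp proves the consent preceded the recording.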
Third, calendar a quarterly read of the named tools your team uses for meeting transcription. The Otter case and the Fireflies case both moved fast (Brewer was filed in August 2025 from a February 2025 incident; Cruz was filed in December 2025 from a November 2025 incident) and the next case in this category will move at similar speed. The vendor table at the top of this piece will be out of date within six months. Re-verify it.
Confidentiality is older than GDPR and older than ePrivacy. The reason the AI meeting transcription category breaks it is not that the law is unclear. It is that the consent flow lives inside one party's account while the conversation happens across the line. The fix is structural: the bot does not get to join in the first place.