When AI Listens Too Closely

The Otter.ai Lawsuit and the Future of Compliance

Image by iStock/peterschreiber.media

Bryan Driscoll

November 10, 2025 12:00 PM

In August 2025, Otter.ai, one of the most widely used AI transcription tools, was hit with a federal class action in California alleging that it secretly recorded private conversations and repurposed them to train its machine learning models. The plaintiff, who wasn’t even an Otter user, claims his conversations were captured during meetings simply because the tool was running.

The case, Brewer v. Otter.ai, turns on familiar statutes suddenly made unfamiliar by automation—the Electronic Communications Privacy Act, the Computer Fraud and Abuse Act, and California’s Invasion of Privacy Act, which requires all-party consent to record.

Yet this dispute extends far beyond statutory interpretation. It exposes a deeper rift in U.S. privacy law between states that treat consent as an opt-in right and those that still see it as a technicality.

The Lawsuit

The complaint, brought by California resident Justin Brewer, alleges that Otter’s meeting software joined virtual calls and transmitted the audio back to company servers without consent. Brewer’s conversations, captured simply because another participant used Otter’s service, became part of a dataset used to improve the company’s transcription models.

That distinction—that nonusers were recorded—marks the legal novelty. It extends traditional wiretap principles into the realm of automated intermediaries. The core question is whether a machine, acting autonomously on behalf of a user, can lawfully intercept communications without the knowledge or consent of other participants.

Privacy law was never designed for this configuration. Courts are being asked to determine not just whether Otter violated statutory recording laws, but whether an AI can serve as an uninvited party to a private exchange.

Otter’s terms of service instruct customers to ensure they have necessary permissions, effectively outsourcing compliance to users. That strategy, familiar in the tech sector, may not survive judicial scrutiny when the vendor builds, controls, and profits from the recording infrastructure itself. Brewer’s lawyers argue that this structure creates a system of plausible deniability: Otter reaps the benefit of user data while shifting the risk to its customers.

When ‘Listening’ Becomes Eavesdropping

In traditional wiretap cases, the act of interception is deliberate. Someone chooses to record, and someone else can refuse. Otter’s model removes both steps. Its software automatically joins meetings and begins transcribing, often without visible notice or the opportunity for participants to object.

That automation collapses the concept of informed consent. Consent requires awareness, yet Otter’s design leaves participants unaware that their words are being transmitted to a third-party server. Courts and regulators have long treated undisclosed recording as inherently deceptive—a principle rooted in decades of wiretap and privacy rulings.

The absence of an audible or visual cue matters. It undermines the expectation that conversations, particularly in private or professional contexts, remain confined to those in the room, even if that room is virtual.

The problem becomes even thornier when viewed through the lens of privilege and confidentiality. AI transcription tools don’t distinguish between a casual team meeting and a legal strategy session. Once a conversation is captured and stored externally, privilege may be jeopardized, and sensitive material can be exposed to unauthorized access or later reuse.

Why Silence Equals Exposure

In the Otter case, recordings weren’t just stored—they were reportedly used to refine the company’s AI models, creating yet another layer of unconsented use.

This evolving definition of listening has profound implications for lawyers and their clients. Every lawyer, firm, or company that uses AI-driven transcription now operates in a fluctuating risk environment. A single conversation could traverse multiple jurisdictions with different consent thresholds.

In that reality, silence is not neutrality—it's exposure. Whether or not a court ultimately decides that Otter's system crossed the line, the case already signals that future compliance will hinge less on written policies and more on whether every person in every conversation actually knows an AI is in the room.
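To make that expectation concrete, here is a minimal sketch of what a consent-first notetaker gate might look like. It is a hypothetical illustration, not Otter's actual design: the class and field names are invented for this example, and the rule it encodes, treating silence as refusal, mirrors the article's point rather than any particular statute.

```python
from __future__ import annotations
from dataclasses import dataclass, field


@dataclass
class Participant:
    """One person on a call, with an explicit consent flag."""
    name: str
    acknowledged_ai_recording: bool = False  # silence defaults to "no"


@dataclass
class Meeting:
    participants: list[Participant] = field(default_factory=list)

    def may_start_transcription(self) -> bool:
        """Allow recording only if every participant has affirmatively opted in.

        A missing or False flag is treated as refusal, so an AI notetaker
        never starts capturing audio on the strength of one user's consent.
        """
        return bool(self.participants) and all(
            p.acknowledged_ai_recording for p in self.participants
        )


if __name__ == "__main__":
    meeting = Meeting(participants=[
        Participant("Host", acknowledged_ai_recording=True),
        Participant("Guest"),  # never asked, never consented
    ])
    # The guest's silence blocks transcription for the entire call.
    print(meeting.may_start_transcription())  # False
```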

Beyond the Recording

If a company collects voice data to provide a service, can it then repurpose that data to improve its technology? Otter’s privacy policy says yes, describing the use of de-identified recordings for model development. But courts and regulators have increasingly rejected the idea that removing names or identifiers renders data truly anonymous.

Voice data adds another wrinkle: tone, cadence, and accent can all re-identify a speaker, and the substance of a conversation can point directly back to a business, client, or case. Once that information is used to train an AI model, it’s effectively impossible to extract. What began as a transcription becomes a permanent contribution to a private company’s algorithmic memory.

This secondary use also blurs the boundary between service provider and data controller. Otter’s customers believed they were buying a productivity tool, not supplying training data.

Crossing the Line

Yet the company’s system allegedly converted every word captured into raw material for future commercial gain. That practice echoes prior controversies in the AI space, where companies like OpenAI and Stability AI have faced lawsuits for scraping creative works without consent. The legal question is the same: does technological innovation excuse unconsented reuse?

The answer, increasingly, is no. Regulators are tightening their scrutiny of data reuse across industries. The FTC’s 2021 settlement with photo app Everalbum, which required deletion of all facial-recognition models built on improperly obtained data, signaled a new willingness to treat training as a form of processing subject to consent obligations. The logic applies here: if recordings are used to develop new commercial capabilities, that’s not incidental—it’s exploitation.

Any client deploying AI systems that collect or analyze communications must now evaluate whether those systems create derivative datasets beyond their intended purpose. Data reuse transforms legal exposure from a compliance oversight into a potential act of conversion or misappropriation.
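One way to operationalize that evaluation, sketched hypothetically below with invented field names, is to tag every stored recording with the purpose it was collected for and to treat model training as a separate use that requires its own explicit consent, rather than something inferred from the original service relationship.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class StoredRecording:
    """A captured conversation plus the consent attached to it."""
    recording_id: str
    collected_for: str              # e.g. "transcription"
    training_consent_given: bool    # explicit and purpose-specific, not inferred


def may_use_for_training(rec: StoredRecording) -> bool:
    """Permit secondary use only with explicit, purpose-specific consent.

    Collecting audio to deliver a transcript does not, by itself,
    authorize folding that audio into a training dataset.
    """
    return rec.training_consent_given


rec = StoredRecording("call-0042", collected_for="transcription",
                      training_consent_given=False)
print(may_use_for_training(rec))  # False: transcription consent is not training consent
```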

A Patchwork of Privacy

The Otter.ai case isn’t unfolding in a legal vacuum—it’s playing out in a country where privacy protections depend largely on geography. California’s consent laws are among the nation’s strictest, requiring every participant in a conversation to approve recording. Cross into Arizona or Texas and one-party consent suffices. The result is a compliance landscape where the legality of a single AI notetaker can change mid-meeting depending on who joins the call and where they sit.
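The mechanics of that patchwork can be sketched in a few lines. The example below is purely illustrative: the state list is abbreviated and not legal advice, the function name is invented for this sketch, and treating the strictest rule present on the call as controlling is a conservative compliance heuristic rather than settled choice-of-law doctrine.

```python
# Hypothetical sketch: resolving the consent rule for a multi-state call.
# The classifications below are illustrative and abbreviated, not an
# authoritative survey of state wiretap law.

ALL_PARTY_CONSENT_STATES = {"CA", "FL", "IL", "MD", "MA", "PA", "WA"}


def recording_requires_everyones_consent(participant_states: set[str]) -> bool:
    """Apply the strictest rule present on the call.

    If even one participant sits in an all-party-consent state, the
    conservative position is to obtain consent from every participant
    before any AI notetaker begins recording.
    """
    return bool(participant_states & ALL_PARTY_CONSENT_STATES)


# A Texas host dialing a California client: California's rule controls.
print(recording_requires_everyones_consent({"TX", "CA"}))  # True
print(recording_requires_everyones_consent({"TX", "AZ"}))  # False
```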

This fragmented reality isn’t limited to consent. It extends to how states define personal data, regulate AI training, and enforce consumer rights. The California Consumer Privacy Act (CCPA) grants individuals the right to know, delete, and opt out of the sale or sharing of their information. Colorado, Connecticut, Utah, and Oregon have passed similar frameworks, each with its own notice and consent standards. Congress, meanwhile, has passed no comprehensive federal privacy law to harmonize them.

That vacuum is why litigation like Brewer is becoming de facto policymaking. Courts, not Congress, are determining how privacy statutes drafted decades ago apply to AI. Each case sets precedent in isolation, creating a patchwork not only across states but within them.

Complex Compliance

Employers that operate nationally must now assume that every AI tool they deploy could implicate multiple, conflicting laws—wiretap, consumer protection, employment, and data privacy—all at once. The complexity of compliance is outpacing the ability of existing frameworks to adapt.

For in-house counsel and outside advisors, this uncertainty is its own liability. Many companies mistakenly assume vendors manage compliance through click-through agreements or privacy disclosures.

But as Otter’s contract language shows, those documents often push responsibility back to users. Without federal harmonization, even sophisticated organizations face regulatory whiplash: full compliance with one state’s standard may still leave them exposed in another.

A Precedent for the Age of Listening Machines

Brewer v. Otter.ai may be one lawsuit, but it represents a generational moment in technology law, the first serious test of what happens when artificial intelligence listens in. The case’s power lies not in its novelty, but in its inevitability.

Tools like Otter.ai weren’t built to break the law; they were built for efficiency. Yet efficiency has always been where privacy erodes first. Once a system records and learns without permission, the old rules of notice and consent stop working, and the legal frameworks built on those rules start to crumble.

The allegations against Otter expose the friction between technological design and legal design. Engineers prioritize usability; the law prioritizes accountability.

Between those goals sits a void that AI has widened into a chasm. In the absence of federal clarity, companies have learned to operate in the gray, relying on end-user agreements that shift risk downward while the data flows upward.

A Clear Message

That model worked when violations meant spam emails or misplaced cookies. It will not survive when violations involve privileged communications, biometric data, or the training of self-learning systems that can’t unlearn what they’ve seen.

If courts side with Brewer, the ruling could redefine liability across the AI ecosystem. A decision finding that automatic transcription and data reuse constitute unlawful interception or unfair competition would send shockwaves through every industry using generative or analytical AI.

The question won’t be limited to notetakers—it will apply to any platform that records, summarizes, or optimizes human input. From HR analytics to client intake, the expectation that AI can quietly observe will no longer hold.

Even if Otter prevails, the message is clear: privacy law can no longer afford to lag behind innovation. Each new case is functioning as a substitute for national policy, establishing precedent one complaint at a time. And each ruling, in turn, forces lawyers to interpret 20th-century statutes through 21st-century conduct.
