Court Rejects Attorney-Client Privilege for AI-Generated Documents

What firms need to know now.


David L. Brown

February 27, 2026 05:00 AM

Law firms may be adopting artificial intelligence tools at a rapid clip, but as two recent court rulings show, integrating AI into sensitive legal work continues to generate legal risk for lawyers and clients alike.

In one of the decisions, a judge on the influential U.S. District Court for the Southern District of New York ruled that a criminal defendant could not shield a set of AI-generated documents from prosecutors even though the material had been created to assist his legal defense.

The other ruling, from the U.S. Court of Appeals for the Fifth Circuit, excoriated a lawyer for misusing AI and expressed significant doubts about the future of generative AI as an effective method for drafting court-ready documents.

A Question of First Impression

On Feb. 10, U.S. District Judge Jed Rakoff in the Southern District of New York ruled from the bench that neither attorney-client privilege nor the work product doctrine applied to AI-generated material that a criminal defendant produced and used to help prepare his defense.

“The court’s ruling in this case appears to answer a question of first impression nationwide: whether, when a user communicates with a publicly available AI platform in connection with a pending criminal investigation, are the AI user’s communications protected by attorney-client privilege or the work product doctrine?” Rakoff wrote in a Feb. 17 memorandum memorializing his ruling. “For the reasons that follow, the answer is no.”

The ruling stems from a case against Bradley Heppner, the former CEO and board chairman at the publicly traded GWG Holdings, a Dallas-based financial services firm. Heppner, who was arrested at his Texas home in November 2025, is accused of defrauding investors of more than $150 million.

According to court documents, as part of the FBI’s search of the home, agents seized 31 documents showing Heppner’s communications with Claude, a generative AI platform. Heppner, who had previously received a grand jury subpoena, had asked Claude to outline a defense strategy in anticipation of a potential indictment.

Claiming Privilege

Prosecutors filed a motion for a ruling that Heppner’s exchanges with AI were not protected from their inspection by either attorney-client privilege or the work product doctrine.

Heppner argued that the documents were privileged because they: included information that he had learned from counsel; were created to help him speak with counsel to obtain legal advice; and had been shared with counsel.

Rakoff was not convinced. “The AI documents lack at least two, if not all three, elements of the attorney-client privilege. First, the AI documents are not communications between Heppner and his counsel,” he said. “Because Claude is not an attorney…that alone disposes of Heppner’s claim of privilege.”

The judge also noted that the documents were not confidential because Claude’s written privacy policy provides that it can collect data on user inputs and AI outputs. Anthropic, the company that owns Claude, “uses such data to ‘train’ Claude” and “reserves the right to disclose such data to a host of ‘third parties,’” Rakoff said, quoting the privacy policy.

‘I’m Not a Lawyer’

Rakoff also concluded that Heppner “did not communicate with Claude for the purpose of obtaining legal advice” even though he was using the platform to prepare for communications with his lawyers.

While the judge acknowledged this was a “closer call,” Rakoff said Heppner acted without the “suggestion or direction” of counsel. “Had counsel directed Heppner to use Claude, Claude might arguably be said to have functioned in a manner akin to a highly trained professional who may act as a lawyer’s agent within the protection of the attorney-client privilege,” Rakoff said.

He added that Claude itself, when queried by prosecutors about whether it could provide legal advice, responded “I’m not a lawyer and can’t provide formal legal advice or recommendations.”

The fact that the documents were not prepared by or at the behest of counsel also helped to undermine arguments that the AI-generated material merited protection under the attorney work-product doctrine. So, too, did their seizure during the search of Heppner’s home. “The government did not request them, and Heppner did not produce them in pretrial discovery,” the judge said.

“AI’s novelty does not mean that its use is not subject to longstanding legal principles, such as those governing the attorney-client privilege and the work-product doctrine,” Rakoff said. “Because Heppner’s use of Claude fails to satisfy either of these rules, the AI documents do not merit the protection Heppner has claimed.”

Additional Risk

Rakoff’s ruling has triggered a flurry of commentary from practitioners, with many warning corporate clients to pay close attention to the potential legal risks they face in using publicly available AI platforms.

In an article published on Feb. 20, lawyers from Arnold & Porter wrote that although “artificial intelligence (AI) has moved from novelty to necessity in record time...assuming AI is the same as any other productivity tool in the context of legal advice and counsel now carries additional risk.” Rakoff’s decision, they added, “as a question of first impression nationwide…carries significant implications for how companies, individuals, and their attorneys use AI.”

Lawyers from BakerHostetler, in a Feb. 17 post, called the decision “an important development” that “highlights the risk of using public-facing AI tools in legal proceedings.” This is particularly true “when such tools are used outside the direction of counsel,” they said. “Rakoff’s ruling highlights that the independent use of public-facing generative AI in legal matters may forfeit the protections afforded under the attorney-client privilege and the work-product doctrine, leaving such materials vulnerable to disclosure.”

Gibson Dunn’s AI and white-collar defense practices noted a potential silver lining for lawyers and clients in Rakoff’s ruling. The judge applied longstanding attorney-client privilege and work product principles rather than carving out new rules designed to target AI technology.

“Judge Rakoff’s ruling does not represent a categorical rejection of privilege and work product in the AI context,” Gibson’s lawyers said in a Feb. 20 client alert. “The central takeaway from Judge Rakoff’s ruling is not that AI adoption is incompatible with privilege and work product protections, but that unexamined use of AI tools can create avoidable legal risk.”

The Problem Is ‘Getting Worse’

On the same day Rakoff issued his memorandum, the Fifth Circuit delivered its own less-than-optimistic take on lawyers and their use of AI tools. In a ruling authored by Chief Judge Jennifer Walker Elrod, the court sanctioned an Arkansas lawyer for submitting an AI-generated brief that contained 21 fictional case citations and other errors.

Elrod’s opinion lacerated the lawyer, Heather Hersh of FCRA attorneys, for failing to adequately review the brief for accuracy and for not being “forthcoming in her response” when asked by the court about the errors. The court ordered Hersh to pay a $2,500 sanction.

The opinion further bemoaned the continuing troubles attorneys are having with AI hallucinations in their filings, saying the issue “has become central to the ongoing discussions of the relationship between law and technology.” Elrod cited numbers compiled by Damien Charlotin, a French lawyer and data scientist, which show that in more than 260 cases, U.S. courts have found AI hallucinations in filings by lawyers, judges, and paralegals since April 2023.

“Regrettably, despite numerous news stories, CLE presentations, scholarly articles, and judicial entreaties, AI-hallucinated case citations have increasingly become an even greater problem in our courts, and the problem shows no sign of abating,” the court said, adding later in the opinion that “it is a problem that is getting worse—not better.”

The Lawyer’s Responsibility

While lawyers may blame AI tools for making such mistakes, Elrod’s opinion placed the responsibility for errors in briefs squarely on attorneys’ shoulders. “If it were ever an excuse to plead ignorance of the risks of using generative AI to draft a brief without verifying its output, it is certainly no longer so,” the court said.

“To ethically use generative AI in the practice of law—which we do not dispute can be helpful if done properly and carefully—a lawyer must ‘ensure that the legal propositions and authority generated are trustworthy,’” Elrod wrote, quoting a recent AI-related opinion from the Southern District of Florida. “Failure to do so ‘abdicates one’s duty, wastes legal resources, and lowers the public’s respect for the legal profession and judicial proceedings.’”

Elrod also expressed pessimism that AI-related hallucination problems will fade over time. “The hallucination problem has no end in sight, as AI’s tendency to fabricate results arises from the training and structures of AI programs,” Elrod wrote. Citing recent research, Elrod said that hallucinations are becoming harder to spot. Hallucinations “now often manifest as false quotes or statements of law attributed to real cases, rather than the more easily recognizable fake cases,” she said.

In 2024, the Fifth Circuit considered a rule that would have required counsel and pro se litigants to certify that no generative AI program was used to prepare any submitted document or, if an AI program was used, that a human had checked the AI-generated text for accuracy. Elrod noted that the court declined to adopt the proposed rule because existing rules already obligate counsel to submit accurate information to courts.

“Modern generative AI may be a new technology, but the same sanctions rules apply, and the rules we have are well equipped to handle these types of cases,” Elrod said.

Preserving Confidentiality

Law firms hoping to avoid sanctions for hallucinatory flubs and to avoid losing attorney-client privilege when using AI tools should consider putting a few guardrails in place. Among them:

• Assess AI tools for potential privilege problems. AI, like any other technology tool, should be assessed to determine whether issues like data retention and sharing, training, and third-party access could create discoverable material and weaken privilege claims.

• Avoid public tools. For sensitive legal matters, using consumer-oriented AI platforms increases the odds of hallucinations and pierced privilege. Enterprise or bespoke AI tools may reduce risk by providing clear contractual terms around confidentiality, limits on third-party access, and a model trained on firm-specific data.

• Create a formal AI policy. Last year, the International Legal Technology Association found that half of law firms do not have an official generative AI policy. With a policy in place, firms can make clear which tools are approved for use and which are prohibited.

• Set limits. Along with a formal AI policy, firms should set clear limits on what should and should not be added to certain AI tools. Lawyers and staff should know when and where they can input client facts, legal advice, strategy, investigation materials, and other sensitive content.

• Speak to clients. Clients should be aware that certain AI communications may be revealed in court. In a sensitive legal matter, the client should be able to demonstrate that AI-assisted work was done under a lawyer’s direction and in a confidential context. This may help support attorney-client privilege if it is challenged.

---

David L. Brown is a legal affairs writer and consultant who has served as head of editorial at ALM Media, editor-in-chief of The National Law Journal and Legal Times, and executive editor of The American Lawyer. He consults on thought leadership strategy, creates in-depth content for legal industry clients, and works closely with Best Law Firms as senior content consultant.