In a First, Court Finds Using AI Tools Ends Attorney-Client Privilege

Take heed, in-house counsel: even if a document was intended to be legal in nature, using consumer AI converts it into discoverable material.


Brendan Palfreyman

February 17, 2026 03:04 PM

On Feb. 10, 2026, Judge Jed S. Rakoff of the Southern District of New York delivered a ruling from the bench in United States v. Heppner that dismantled a central pillar of the defendant’s legal shield. The court held that the use of a consumer-grade artificial intelligence tool — Anthropic’s Claude — waived both attorney-client privilege and work-product protection. This is the first decision of which I am aware holding that inputting potentially privileged information into a free generative AI tool (such as Claude or ChatGPT) results in the loss of that privilege.

For in-house counsel, this case is a cautionary tale and serves as the definitive sequel to my previous alert about employees using ChatGPT and creating discoverable ESI. While that earlier discussion focused on the existence of discoverable AI data, Heppner takes the risk to its ultimate conclusion: even if a document was intended to be legal in nature, using consumer AI converts it into discoverable material.

The Prosecution’s Narrative: A High-Stakes Financial Scheme

To appreciate the gravity of Judge Rakoff’s decision, one must first understand the high-stakes environment in which Bradley Heppner, the former CEO and Board Chairman of GWG Holdings, Inc., found himself. The government’s case against Heppner centers on an alleged scheme to misappropriate more than $150 million through Highland Consolidated Limited Partnership (HCLP), a shell company he controlled.

The seeds of the privilege waiver were sown long before Heppner’s eventual arrest. In December 2024, Heppner and his defense team entered into a Statute of Limitations Tolling Agreement with the U.S. Attorney’s Office for the Southern District of New York. This agreement was intended to provide Heppner a window to present information to the government and engage in discussions that might forestall a grand jury indictment. It was during this period of intense pressure that Heppner turned to Anthropic’s Claude to help synthesize a defense strategy and draft responses to the mounting evidence against him.

The Discovery of the AI-Generated Analysis

The government’s discovery of this activity stemmed from a standard discovery challenge. During the pretrial phase, the defense produced a privilege log to the government. One entry served as a red flag for the prosecution: a description of documents as “artificial intelligence-generated analysis conveying facts to counsel for the purpose of obtaining legal advice.” An excerpt of that privilege log is depicted below:

The government moved for a ruling that these documents were not privileged, arguing that the moment Heppner input confidential case facts and defense theories into Claude, he disclosed those secrets to a third party. This disclosure, the government contended, was a voluntary act that forfeited the protections of attorney-client privilege.

Why the Privilege Was Waived: The Third-Party Trap

In ruling for the government on Feb. 10, 2026, Judge Rakoff focused on the fundamental nature of the relationship between a user and a consumer AI provider. Under long-standing legal principles, the attorney-client privilege only protects communications made in confidence between a client and their lawyer. By inputting privileged information into a third-party platform like Claude, Heppner effectively “published” his secrets to an outside entity.

The Court’s logic centered on the lack of a “reasonable expectation of privacy.” Anthropic’s Terms of Service, much like those of its competitors, explicitly state that the company reserves the right to:

  • Review prompts for safety and training purposes.

  • Retain data on its servers.

  • Disclose that data to government authorities in response to legal processes.

Thus, the argument could be made that sharing legal strategy with an AI tool whose terms of service permit third-party review is no different from discussing that strategy in a crowded public elevator.

The Failure of the Work-Product Doctrine

Work-product protection typically shields documents prepared in anticipation of litigation, but it generally requires the work be performed by, or at the direction of, an attorney.

In Heppner, the defendant appeared to have used the tool on his own initiative, seeking to draft defense strategies without the direct supervision of his lawyers. Because the documents reflected Heppner’s own independent research and the AI’s generated suggestions, rather than the specific legal strategy of his counsel, they did not qualify as protected work product. The government successfully argued the doctrine does not protect a layperson’s independent internet research.

AI-Usage Guardrails and the “Shadow AI” Risk

For in-house counsel, the mandate is no longer just about policy, but about institutional discipline. The Heppner case highlights the dangers of “Shadow AI” – the unauthorized use of AI tools by employees – which is currently happening at most companies without official oversight, as discussed in my previous alert.

Internal AI policies should explicitly preclude employees from using consumer-grade AI and personal AI accounts for work purposes. This usage risks not only privilege waiver but also significant security and data privacy issues. A boilerplate policy may be insufficient; true risk management requires:

  • Technical Blocks: Working with IT to block unapproved consumer AI URLs on company devices.

  • Provisioned Access: Providing employees with enterprise-grade tools that have contractually guaranteed privacy settings.

If a tool is accessible, it will be used; if it is used for legal work, the privilege is at risk.

Establishing a “Lawyer-in-the-Loop”

To preserve work-product protection, legal departments must establish a “Lawyer-in-the-Loop” requirement. Work-product protection failed in Heppner because the client acted alone. The corporate analogue is a non-attorney employee using ChatGPT or Claude on his or her own to research a question or input facts before reaching out to in-house counsel. Under Judge Rakoff’s ruling, which comports with well-established precedent and my previous alert, the inputs and outputs from these employee-directed AI sessions would be discoverable.

In-house counsel must mandate that any use of AI for legal-adjacent work – such as summarizing meeting notes from a strategy session or analyzing a contract – be supervised by the legal department. If an executive wants to use AI to think through a problem, they should do so only under the aegis of in-house counsel so that, at a minimum, there is a colorable argument for work-product protection.

The New Standard for Discovery Requests

The government’s success in this motion will likely embolden litigators to routinely request “AI prompts and outputs” in discovery. They will look for any indication that a client or employee used a chatbot to “think through” a legal problem. Adversaries will now scour privilege logs for any mention of AI assistance, viewing it as a potential back door into otherwise protected communications and thought processes.

Conclusion

Judge Rakoff’s ruling in United States v. Heppner is an unsurprising, but clarifying, moment for the legal profession. It strips away the novelty of AI and applies the foundational principles of evidence: disclosure to a third party kills attorney-client privilege. Now that this issue has finally been addressed by a court, in-house counsel should move quickly to put procedures in place to make sure this situation does not happen to their companies.

Our Artificial Intelligence Industry Team tracks AI issues and court decisions throughout the country. If you have questions or concerns about AI-related matters, please reach out to attorney Brendan M. Palfreyman at (315) 214-2161 and bpalfreyman@harrisbeachmurtha.com, or the Harris Beach Murtha attorney with whom you most frequently work.

This alert is not a substitute for advice of counsel on specific legal issues.

Harris Beach Murtha’s lawyers and consultants practice from offices throughout Connecticut in Bantam, Hartford, New Haven and Stamford; New York State in Albany, Binghamton, Buffalo, Ithaca, New York City, Niagara Falls, Rochester, Saratoga Springs, Syracuse, Long Island and White Plains; as well as in Boston, Massachusetts, and Newark, New Jersey.
