US judge rules fraud defendant's conversations with AI chatbot Claude are not privileged
In a February ruling considered the first of its kind in the United States, Judge Jed S. Rakoff held that Bradley Heppner's discussions of his legal exposure with Anthropic's AI chatbot Claude were protected by neither attorney-client privilege nor the work-product doctrine: an AI cannot serve as a lawyer, and public AI platforms carry no confidentiality obligations. Since then, more than a dozen leading law firms have issued client advisories.
The groundbreaking federal-court decision has set off a wave of legal warnings nationwide: using a publicly accessible AI chatbot to explore or discuss legal issues could result in those conversations being seized, disclosed to opposing parties, and used against you. The case that triggered the alerts is United States v. Heppner, in which Judge Rakoff of the Southern District of New York ruled in February 2026 that a criminal defendant's private conversations with Claude were not covered by attorney-client privilege or the work-product doctrine.
The ruling, delivered orally on February 10 and followed by a written opinion on February 17, is regarded by legal experts as the first U.S. decision on the privilege status of AI chatbot interactions. The defendant, Bradley Heppner, was the former chairman of bankrupt financial services company GWG Holdings and founder of the alternative asset firm Beneficent. Federal prosecutors charged him in November 2025 with securities and wire fraud; he pleaded not guilty. After receiving a grand jury subpoena but before formally consulting legal counsel, Heppner used Claude to assess his legal exposure, brainstorm potential defenses, and draft legal arguments, all without guidance from attorneys.
When the FBI searched his residence, agents seized some 31 documents memorializing these AI conversations. The government moved for their production; Heppner claimed they were privileged. Rakoff rejected the claim on three grounds. First, attorney-client privilege protects communications between a client and an attorney. Because Claude is not a licensed attorney and owes no duty of loyalty, Heppner could not form a privileged relationship with it. As Rakoff put it, Heppner had effectively "disclosed it to a third party, namely, AI, which held no obligation of confidentiality."
Second, there was no reasonable expectation of confidentiality: the judge reviewed Anthropic's terms of service and privacy policy, which expressly permit data collection, use of inputs and outputs for model training, and sharing with third parties, including government regulators. By accepting those terms, Heppner consented to a disclosure regime incompatible with privilege. Third, work-product protection did not apply because Heppner was not consulting Claude at his lawyers' direction, and the documents did not reflect his attorneys' strategies at the time they were created.
On the same day as Rakoff's ruling, a federal magistrate judge in Michigan reached what appeared to be the opposite conclusion. In Warner v. Gilbarco, Inc., Magistrate Judge Anthony Patti held that a pro se plaintiff's conversations with ChatGPT about her employment discrimination case were protected work product, reasoning that AI tools are "tools, not persons" and that waiving work-product protection requires disclosure to an adversary, not merely to a software platform. Another court reached a similar result for a different self-represented litigant in Morgan v. V2X (D. Colo., March 2026). Legal analysts note that these cases are factually distinguishable from Heppner's: the plaintiffs in Warner and Morgan were self-represented and shielded by a civil procedure rule favorable to work product, while Heppner was represented by counsel in a criminal matter and acted without attorney guidance. The courts themselves cautioned that they were not announcing broad rules for all situations.
The practical fallout has been swift. Reuters reports that more than a dozen prominent U.S. law firms have issued client advisories warning against using public AI platforms for legal matters. New York firm Sher Tremonte has gone further, adding language to client engagement agreements stating that sharing a lawyer's advice or communications with a chatbot could waive attorney-client privilege. The consensus from firms such as Orrick, Crowell & Moring, and Fisher Phillips is clear: treat public AI platforms as fundamentally non-confidential and assume anything entered could be disclosed; use only private, secure AI systems whose terms of service prohibit training on inputs or sharing with third parties; and obtain explicit attorney direction before engaging any AI system on a legal matter.
