Some U.S. lawyers are warning clients to avoid treating artificial intelligence chatbots as confidants when their freedom or legal liability is on the line. These warnings have become especially urgent following a federal judge's ruling that the former CEO of a bankrupt financial services company could not shield his AI chats from prosecutors pursuing securities fraud charges against him.
Attorneys warn that conversations with chatbots like Anthropic’s Claude and OpenAI’s ChatGPT could be demanded by prosecutors and litigation opponents in legal proceedings. “We are telling our clients: You should proceed with caution here,” said Alexandria Gutiérrez Swette, a lawyer at New York-based law firm Kobre & Kim.
While clients’ conversations with their lawyers are almost always considered confidential under U.S. law, chatbots are not lawyers, and attorneys are telling clients to take steps that could keep their communications with AI tools more private.
More than a dozen major law firms in the U.S. have shared such advice via emails to clients and advisories posted on their websites. Similar warnings have also appeared in hiring agreements. New York-based firm Sher Tremonte, for instance, stated in a recent client contract that sharing a lawyer’s advice or communications with a chatbot could erase the legal protection known as attorney-client privilege, which usually shields communications between lawyers and their clients.
This development follows a ruling involving Bradley Heppner, the former chair of bankrupt financial services company GWG Holdings and founder of alternative asset firm Beneficient. Heppner was charged by federal prosecutors last November with securities and wire fraud, and pleaded not guilty. Heppner had used Anthropic’s Claude to prepare notes about the case, and he argued that his AI exchanges should be withheld because they contained details from his lawyers related to his defense.
Prosecutors argued that they had a right to demand material that Heppner created with Claude because his defense lawyers were not directly involved, and because attorney-client privilege does not apply to chatbots. Under U.S. law, information voluntarily revealed to a third party can jeopardize the customary confidentiality of attorney-client communications.
Manhattan-based U.S. District Judge Jed Rakoff ruled in February that Heppner must hand over 31 documents generated by Anthropic’s chatbot Claude related to the case. No attorney-client relationship exists, “or could exist, between an AI user and a platform such as Claude,” Rakoff wrote.
Advice from lawyers has ranged from telling clients to select their AI platforms carefully to suggesting specific language to use in chatbot prompts. Los Angeles-based O’Melveny & Myers and other firms said in client advisories that “closed” AI systems designed for corporate use could provide stronger protections for legal communications, though they cautioned that even this approach remains largely untested. Some lawyers have argued that AI legal research is more likely to be protected by attorney-client privilege when it is conducted at the direction of a lawyer.
New York-headquartered law firm Debevoise & Plimpton said in a notice on its website that if a lawyer advises the use of AI, the client should say so in the chatbot prompt.