A Court Just Confirmed What Employers Need to Hear: Your AI Conversations Are Not Privileged
Connecticut Employment Law Blog
February 16, 2026
If you’ve been following this blog, you know I’ve been writing about the intersection of generative AI and employment litigation for a while now. I’ve talked about updating litigation hold policies to account for GenAI data, and I’ve urged employers to start requesting plaintiffs’ AI conversation histories in discovery.
Well, a ruling this past week from the Southern District of New York just added an exclamation point to all of it.
On February 10, 2026, Judge Jed S. Rakoff ruled in United States v. Heppner that documents a defendant created using Anthropic’s Claude AI tool and later shared with his attorneys are not protected by attorney-client privilege or the work product doctrine. The defendant, a financial services executive accused of fraud, had used Claude to prepare approximately thirty-one documents related to his legal situation and then sent them to his defense counsel at Quinn Emanuel. When the government seized his devices and found the AI-generated materials, the defense tried to shield them as privileged.
Judge Rakoff wasn’t having it. “I’m not seeing remotely any basis for any claim of attorney-client privilege,” he said from the bench.
The reasoning is straightforward and grounded in well-established privilege principles: An AI tool is not an attorney. It holds no law license and owes no professional duties. The tool’s own terms of service disclaim any attorney-client relationship and state that user inputs are not confidential. And sending pre-existing, unprivileged documents to your lawyer after the fact doesn’t retroactively cloak them with privilege.
So what does this mean for employers? Two things.
First, exercise extreme caution when using AI tools in connection with any legal matter. If your HR team is using ChatGPT to analyze a termination decision, or a manager is asking Claude about potential liability for a workplace complaint, those conversations are almost certainly not privileged. They are discoverable. And they could end up as exhibits in a lawsuit against your company.
The only way to potentially maintain privilege over AI-generated analysis is to have it created by or at the specific direction of legal counsel, using enterprise AI tools with stronger confidentiality protections. Even then, caution is warranted. The safer practice is to let your attorneys handle the legal analysis and keep sensitive legal matters out of commercial AI platforms entirely.
Second, if your company is facing an employment claim, you should be proactively requesting the plaintiff’s generative AI materials in discovery. As I discussed in a prior post, employees are increasingly using AI tools to research their legal rights, draft complaints, and strategize about claims. Those conversations are discoverable, and Judge Rakoff’s ruling makes clear they are not privileged.
The prompts an employee feeds into ChatGPT or Claude often reveal more than the polished complaint that lands on your desk.
Think about it: an employee’s AI conversation history might show that they were already planning to leave before the alleged adverse action, that they acknowledged performance issues were legitimate, or that they were shopping for legal theories to maximize a settlement. That is powerful impeachment material, and you should be asking for it.
The Heppner ruling is an oral decision with a written opinion expected to follow, but the reasoning is sound and consistent with longstanding privilege law. Employers and their counsel should take note now.
The bottom line: be careful what you put into AI, and be aggressive about finding out what the other side put into it. Generative AI is a powerful tool, but as this ruling confirms, it comes with no expectation of privacy and no privilege protection.
