A federal judge just ruled your ChatGPT conversations aren't privileged. If you're a lawyer paying $20/month, your settings are wrong.
A 60-second fix, a federal preservation order most attorneys don't know about, and the intake bot on your homepage that may be a malpractice trap.
If you practice law in any jurisdiction in the United States, this is the most important newsletter I’ll write this year. Read it. Forward it to every attorney you know. And if you’re paying $20 a month for ChatGPT Plus or Claude Pro, stop reading and scroll down to the 60-second fix right now. Your settings are almost certainly wrong.
I’ll wait.
Word of the Day: Digital Confession
A digital confession is anything you type into an AI tool that you didn’t intend for anyone else to read, but that could end up in front of a judge, a regulator, or an opposing attorney anyway.
Picture every conversation you’ve ever had with ChatGPT or Claude printed out and placed in a single binder. Now imagine that binder sitting on opposing counsel’s desk in your next deposition, in a regulator’s enforcement file, or in a prosecutor’s discovery production.
That binder exists. It’s just not on your desk.
What just happened in court
In February, Judge Jed Rakoff of the Southern District of New York ruled in United States v. Heppner that conversations with consumer AI tools are not protected by attorney-client privilege.
Bradley Heppner, a former CEO facing federal fraud charges, had used Claude to prepare 31 documents related to his legal defense. Defense strategy. Anticipated charges. What he might argue. He fed information his lawyers at Quinn Emanuel had told him directly into the chatbot. He later shared those documents with his counsel.
Judge Rakoff held that none of it was privileged. None of it was work product. The reasoning:
Claude is not an attorney. There’s no professional relationship, no fiduciary duty, no bar discipline.
Anthropic’s privacy policy reserves the right to use inputs for training and to disclose data to regulators. That alone defeats any reasonable expectation of confidentiality.
Heppner used Claude on his own initiative, not at the direction of counsel. So no work product protection either.
Sharing the AI documents with his lawyer afterward did not retroactively make them privileged.
Now the kicker that should keep every defense attorney up at night. Several of the major firms analyzing the ruling have flagged that by feeding what his lawyers told him into the AI, Heppner may have waived privilege over the underlying attorney-client communications themselves. He didn’t just lose protection on the AI conversation. He may have torched protection on the original legal advice that started everything.
That risk isn’t limited to criminal defendants. It extends to any client who ever pastes “here’s what my lawyer said” into ChatGPT.
Your clients are doing this right now
Let me say what every attorney reading this already suspects.
Your clients are talking to AI about their cases. They’re typing injury details, accident facts, what they told their doctor, what they’re worried about. They’re asking ChatGPT to help them respond to demand letters. They’re pasting your engagement letter into Claude and asking what it means. They’re doing it before they call you. They’re doing it the night before depositions. They’re doing it because the interface feels calm and private and they don’t know any better.
They should know better. And as their lawyer, you’re now the one who has to tell them.
Texas attorney Virginia Hammerle put it bluntly to CNN: “In my firm, we’re treating it as: Anything that somebody’s typing into ChatGPT is something that could be discoverable.”
Or as Nils Gilman, a senior adviser at the Berggruen Institute, said in the same article: “ChatGPT is not your friend, is not your lawyer, is not your doctor, is not your spouse. Stop talking to them as if they are.”
The wrinkle nobody is talking about
Heppner is the headline. But there’s a second case that may matter even more for the average user.
In May 2025, a federal magistrate judge in The New York Times v. OpenAI ordered OpenAI to preserve every single ChatGPT conversation indefinitely, including ones users had already deleted. In November, the same judge ordered OpenAI to produce 20 million de-identified ChatGPT logs to the news plaintiffs.
This applies to ChatGPT Free, Plus, Pro, and Business (formerly Team). Even chats you thought were gone. Even Temporary Chats. Only ChatGPT Enterprise, Edu, and qualifying API customers with Zero Data Retention agreements were exempt.
So even if you opted out of training on your Plus account, OpenAI is currently sitting on a comprehensive archive of your conversations because a federal court told them they have to. That archive can be subpoenaed. That archive can be ordered produced.
This is not theoretical. It is happening right now.
Right now, today: the 60-second fix for your personal AI account
If you’re on a personal AI subscription, here’s exactly what to do in the next minute. This doesn’t fix everything (the NYT preservation order still applies to ChatGPT consumer plans), but it stops your future conversations from being used to train the next model. These instructions apply to ChatGPT Free, Go, Plus, and Pro, plus Claude Free, Pro, and Max. The toggle is in the same place on every consumer plan.
ChatGPT Free, Go, Plus, or Pro:
Go to https://chatgpt.com and sign in.
Click your profile icon (top right corner).
Click Settings.
Click Data Controls.
Find Improve the model for everyone. Toggle it OFF.
That’s it. Future chats will no longer be used to train OpenAI’s models. (Direct link to the help article: https://help.openai.com/en/articles/8983130)
Claude Free, Pro, or Max:
Go to https://claude.ai and sign in.
Click your profile icon, then Settings.
Open the Privacy tab.
Find Help improve Claude. Toggle it OFF.
Important context for Claude users: in October 2025, Anthropic flipped the default. If you clicked through the “Accept” pop-up quickly, you opted IN to a 5-year retention window with your data being used for training. If you opted out, you stayed at the standard 30-day retention. Many people clicked “Accept” without realizing what they agreed to. Go check your setting today. (Reference: https://www.anthropic.com/news/updates-to-our-consumer-terms)
Google Gemini:
Go to https://gemini.google.com and sign in.
Open Settings & help, then Activity (or go directly to https://myactivity.google.com/product/gemini).
Click Turn off next to Gemini Apps Activity.
Confirm.
(Reference: https://support.google.com/gemini/answer/13594961)
For all three: deleting old chats does not pull them out of any model that’s already been trained. It only stops your future conversations from being used. So flip the switch now and don’t wait.
The practical answer for attorneys: what plan should you actually use?
This is where it matters whether you’re paying $20/month or running an enterprise contract. The differences are massive and most attorneys have no idea.
Consumer plans (ChatGPT Free, Go, Plus, Pro / Claude Free, Pro, Max). This is the Heppner danger zone. By default, your conversations train the model. Conversations sit on company servers. ChatGPT Plus and Pro are also currently subject to the NYT preservation order. If you are using these for any client matter, you have a problem.
Team plans. ChatGPT Business (the plan formerly known as Team) and Claude Team. Significantly better. Your data isn’t used for training by default, and you get proper data handling agreements. Still the recommended baseline for most small and mid-sized firms.
Enterprise plans (ChatGPT Enterprise, ChatGPT Edu, Claude Enterprise). Strong protections. No training. Admin controls. Audit logs. Available BAA for healthcare. Not subject to the NYT preservation order. This is what AmLaw 200 firms are buying.
Enterprise API with Zero Data Retention. The gold standard for cloud AI. The conversation is processed and immediately discarded. There is nothing stored to subpoena.
Self-hosted open-source LLMs (Llama, Mixtral, etc. running on your own servers). This is the most private option that exists. The conversation never leaves your network. There is no third-party vendor in the picture. Privacy by design. The catch: it’s expensive, technically demanding, and most firms under 50 attorneys can’t realistically pull it off.
But even on the safer plans, two things commonly trip up firms that think ZDR is the finish line.
Two things ZDR does not do
ZDR is not privilege
Zero Data Retention is a vendor-side commitment that no data is stored at rest on the AI provider’s servers. That is a real and meaningful protection. It is not the same thing as attorney-client privilege.
Privilege has specific elements: a communication between a client and an attorney, intended to be confidential, for the purpose of obtaining or providing legal advice. ZDR does not satisfy any of those elements on its own. An AI is not an attorney. A prospective website visitor is not yet a client. A casual chatbot conversation is rarely framed as obtaining legal advice.
For AI work to qualify as privileged or as attorney work product, the attorney has to be in the picture. The attorney has to direct the use, supervise the output, and incorporate it into the representation. ZDR can support that posture. It cannot create it.
Anything that still exists can be subpoenaed
ZDR works because nothing exists at the vendor to hand over. But your own internal logs, audit trails, chat transcripts, call recordings, CRM entries, and backups can still be discoverable, even when they live on a private firm server. And because AI intake conversations between a chatbot and a prospective client usually fail the privilege elements above, nothing shields those records once they’re demanded.
That’s why it’s extremely important to have a written policy along the lines of: AI intake is for initial screening only; substantive discussions occur only with attorneys; intake records are retained for 14 days for the purpose of conflicts and follow-up; here is our purge schedule. That policy itself becomes the firm’s defensive posture in any future discovery dispute. It establishes what was confidential, what was intended for legal advice, and what was just lead-capture metadata.
The policy alone is not enough. IT has to actually execute it. Automated purges on schedule, no rogue backups sitting in a forgotten S3 bucket, no associate’s local copy on a personal laptop. A written policy that the firm doesn’t follow is worse than no policy at all in front of a judge.
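To make the purge schedule concrete, here is a minimal sketch of the kind of automated job IT would run. Everything here is illustrative: the record structure, the field names, and the 14-day window are assumptions standing in for whatever your firm’s written policy actually specifies, and a real deployment would run this against every store that holds intake data (database, backups, object storage), not an in-memory list.

```python
from datetime import datetime, timedelta, timezone

RETENTION_DAYS = 14  # assumed to match the written policy; set to your firm's schedule

def purge_expired(records, now=None):
    """Return only intake records still inside the retention window.

    `records` is a list of dicts with a timezone-aware `created_at`
    timestamp. Anything older than RETENTION_DAYS is dropped.
    """
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=RETENTION_DAYS)
    return [r for r in records if r["created_at"] >= cutoff]

# Illustrative run with fake records
now = datetime(2026, 3, 1, tzinfo=timezone.utc)
records = [
    {"id": "lead-1", "created_at": now - timedelta(days=3)},   # inside the window: kept
    {"id": "lead-2", "created_at": now - timedelta(days=30)},  # outside the window: purged
]
kept = purge_expired(records, now=now)
print([r["id"] for r in kept])  # → ['lead-1']
```

The point of writing it this way is that the retention window lives in one named constant that can be shown to match the written policy, which is exactly the kind of alignment a judge will look for.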
The firms that get this wrong will spend years in privilege fights they didn’t know they were going to have. The firms that get it right will have answers ready.
The law firm intake problem nobody is talking about
If you run a firm, here’s the bigger exposure most managing partners haven’t thought through.
You almost certainly have an AI somewhere on your website. A chatbot in the corner. An AI receptionist answering after-hours calls. An intake bot collecting case details before a paralegal ever sees the lead.
If that system was built on top of consumer ChatGPT, or any AI vendor whose terms of service let them use the data for training or share it with third parties, then every prospective client conversation is potentially discoverable evidence sitting on someone else’s server.
The caller describes their situation. They share sensitive details. The AI asks follow-ups. All of that gets stored by a vendor whose terms of service let them share it with regulators. Or, in the case of consumer ChatGPT right now, by a vendor under a court order to preserve every conversation indefinitely.
That’s not a chatbot. That’s a liability sitting on your homepage.
The fix isn’t complicated, but it has to be intentional. AI systems handling intake conversations need to be built specifically with Zero Data Retention from the start. The conversation ends when the session ends. Nothing stored. Nothing to subpoena.
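Here is a toy sketch of what “the conversation ends when the session ends” means in code: the transcript lives only in memory for the life of the call and is destroyed on hang-up, with only policy-approved metadata surviving. This is an illustrative pattern, not any vendor’s actual implementation, and in production the same discipline has to extend to the model provider (via a ZDR agreement) and to every log line the system emits.

```python
class ZdrIntakeSession:
    """An intake conversation that exists only in memory for the session's lifetime.

    Nothing is written to disk and message content is never logged. The only
    thing that survives end() is minimal lead-capture metadata (here, just a
    message count, as an assumed example of what a firm's policy might allow).
    """

    def __init__(self):
        self._transcript = []

    def add_message(self, role, text):
        self._transcript.append((role, text))

    def end(self):
        # Keep only policy-approved metadata, then destroy the transcript.
        lead = {"messages_exchanged": len(self._transcript)}
        self._transcript.clear()
        return lead

session = ZdrIntakeSession()
session.add_message("caller", "I was rear-ended last week and I'm not sure what to do.")
session.add_message("assistant", "I'm sorry to hear that. Can I take a callback number?")
summary = session.end()
print(summary)  # → {'messages_exchanged': 2}
```

After `end()` there is simply no transcript left to produce in discovery, which is the whole design goal.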
This is exactly why we’re building AttorneyInbound.ai at DigitalTreehouse for law firms specifically. It runs on AWS Bedrock with Zero Data Retention. Anthropic never sees the conversations. AWS doesn’t use them for training. Nothing gets stored in service logs. Caller information stays inside the firm’s controlled environment, end to end.
We didn’t bolt this on at the end. It’s the foundation. Because in 2026, an AI receptionist that doesn’t have ZDR baked in is a malpractice risk waiting to happen.
Even Sam Altman is worried about this
OpenAI’s CEO said on the Theo Von podcast in July 2025 that he was “very afraid” the government would use AI chat logs to surveil people.
His exact words: “I think we really have to defend rights to privacy. I don’t think those are absolute. I’m totally willing to compromise some privacy for collective safety, but history is that the government takes that way too far, and I’m really nervous about that.”
That’s the CEO of ChatGPT saying he’s nervous about how ChatGPT conversations could be used. Take that however you want.
Some serious thinkers are pushing for new laws. Nils Gilman argued in a New York Times op-ed for legal privilege protections for AI conversations, similar to what exists for therapists and lawyers. The logic: if millions of people use AI the way they once used a therapist, the law should catch up.
The courts aren’t waiting for new legislation, and neither should you.
What you should actually do, in order
For yourself, today:
Flip the training opt-out toggle on every personal AI account you have. Steps above. Takes 60 seconds total.
Stop using consumer AI tools for active legal matters. Period.
If you’re going to keep using AI for legal work, upgrade your firm to an Enterprise or Team plan with proper data handling agreements.
For your firm:
Audit every AI tool deployed on your firm’s website, phones, or intake systems. Ask the vendor one question in writing: “Do you have Zero Data Retention?” If they don’t know what that means, replace the system.
Update your engagement letters with a clause warning clients about AI use and the privilege risk.
Tell every staff member: no client information in any consumer AI account, ever. Not personal ChatGPT, not personal Claude, not personal Gemini. Not even for “just a quick summary.”
If you’re ready to deploy AI safely (intake, scheduling, document review, client communication), do it on infrastructure designed for legal work from day one.
For your clients:
If they ask whether they should use ChatGPT or Claude to think through their case, the answer is no.
The bottom line
The rules just changed. Quietly. Without a press release.
Your AI conversations are now legally equivalent to a credit card swipe or a phone call record. They exist. They can be demanded. They can be used against you and against your clients.
The technology moved faster than the law. The courts aren’t waiting for the law to catch up, and neither should you.
If you took 60 seconds to flip those toggles, your time reading this newsletter just paid for itself for the rest of the year.
If you forwarded this to one other attorney, you might have just saved their license.
SmartOwner is published (almost) daily by the team at DigitalTreehouse. If you’re a law firm or professional services business and want to talk about building AI receptionists, intake bots, or workflows the right way with Zero Data Retention from day one, reply to this email and ask about AttorneyInbound.ai.


