AI and Attorney-Client Privilege: What Heppner Actually Says (and What It Doesn't)

By: Christina L. Geraci, Esq.

If you've been anywhere near legal Twitter or LinkedIn in the last two months, you've probably seen the headlines: "Federal Judge Rules AI Use Waives Attorney-Client Privilege."

As an attorney who uses AI tools in my practice daily, I read that headline and thought the same thing many of you probably did — that can't possibly be right. And after digging into the actual opinion, I can tell you: it isn't. The ruling is real and it matters, but it is far narrower than the headlines suggest.

Here is what United States v. Heppner actually holds, why it should not make you afraid to use secure legal AI, and what every attorney and business owner should be doing right now.

The Case

On February 17, 2026, Judge Jed Rakoff of the Southern District of New York issued an opinion in United States v. Heppner, a securities fraud case. The defendant, Bradley Heppner, had been using Anthropic's consumer-grade Claude platform — the same publicly available chatbot anyone can sign up for — to generate legal analyses before his arrest. He prepared these documents on his own, without direction from his attorneys, and later shared the outputs with counsel.

When the government seized his devices and found these AI-generated documents, Heppner's defense team claimed attorney-client privilege and work product protection.

Judge Rakoff said no.

Why the Court Ruled the Way It Did

The court's reasoning came down to three points, each rooted in decades of settled privilege law:

First, the AI is not a lawyer. Attorney-client privilege requires a communication between a client and an attorney. Claude is not licensed to practice law, cannot form an attorney-client relationship, and owes no fiduciary duties. That alone was dispositive.

Second, there was no expectation of confidentiality. The consumer Claude platform's privacy policy permits data collection, model training, and disclosure to third parties under certain circumstances. The moment Heppner put privileged information into that tool, the court reasoned, he effectively disclosed it to a third party — the same way he would have if he'd emailed it to a stranger.

Third, he was not obtaining legal advice. Heppner used Claude on his own initiative, not at the direction of his counsel. When the government asked Claude directly whether it provides legal advice, Claude responded that it does not and is not a lawyer.

On the work product question, the court similarly found that because Heppner acted independently rather than as his attorney's agent, the protection never attached in the first place.

Why the Headlines Got It Wrong

Read the opinion carefully, and you'll see that Judge Rakoff did not hold that "using AI waives privilege." He held something much more specific: a non-lawyer's self-directed use of a consumer AI platform that trains on user data does not satisfy the foundational requirements for privilege.

Those are two very different holdings.

Every major law firm that has analyzed the case — Gibson Dunn, Perkins Coie, BakerHostetler, Duane Morris — has emphasized this narrow reading. The facts that drove the outcome were the consumer-grade tool, the lack of counsel direction, and the platform's terms of service permitting data use. Change any of those facts and the analysis could come out differently.

In fact, Judge Rakoff himself acknowledged this. He noted that if counsel had directed Heppner to use the platform, Claude might have qualified as a lawyer's agent under the Kovel doctrine — the same framework that has extended privilege to accountants, investigators, and translators assisting attorneys for over sixty years.

What This Means for Enterprise Legal AI

I use LexisNexis Protégé in my practice. Many of you do too. And the first question I asked myself after reading Heppner was: does this ruling threaten the privilege of work done inside Protégé?

The answer, based on conventional privilege analysis, is almost certainly no — for several reasons:

Enterprise legal AI platforms like Protégé operate under contractual confidentiality terms that consumer tools don't have. LexisNexis commits in writing that customer data is never used to train models, is encrypted at rest and in transit, and stays within a closed-loop environment. That is a fundamentally different factual posture from the consumer Claude platform at issue in Heppner.

The tool is used by the attorney, in the course of representation, as an aid to legal work. That tracks the Kovel framework far more closely than a client's solo experimentation does.

None of this means privilege is automatic or guaranteed. The case law on enterprise AI is still developing, and no federal court has yet squarely ruled on whether work done inside a tool like Protégé qualifies for privilege. But the legal principles that doomed Heppner's claim point in the opposite direction when applied to enterprise tools.

What Every Attorney and Business Owner Should Do Right Now

Regardless of how the case law develops, Heppner is a wake-up call. Here is what I am doing in my practice, and what I'd suggest to colleagues and clients:

Stop using consumer AI tools for privileged work. The free and individual-paid tiers of Claude, ChatGPT, Gemini, and Copilot are not appropriate containers for client confidences. This is the clearest lesson of Heppner, and it applies regardless of how the appellate courts eventually rule.

Know what you're paying for. Enterprise and business-tier AI products typically come with contractual terms prohibiting training on your data, requiring encryption, and limiting retention. Read those terms before you rely on them. Marketing language is not a contract.

Document counsel direction. If you or your staff are using AI tools on client matters, create a record that the use is directed by counsel within the scope of representation. This strengthens any future Kovel-style argument and reflects the kind of professional oversight privilege doctrine has always required.

Update your engagement letters. Many firms, including mine, are adding language that discloses the use of AI tools in legal work and obtains client consent. This is both good ethics practice and good risk management.

Advise your clients. The most dangerous scenario Heppner highlights is not an attorney carefully using enterprise AI. It is a client pasting privileged strategy discussions into ChatGPT on their own. If you represent businesses, your clients need to hear this warning from you, in writing, before it matters.

The Bigger Picture

Generative AI is not going away, and attorneys who refuse to engage with it are going to fall behind the rest of the profession. But the technology is moving faster than the law that governs it, and every one of us practicing today has a responsibility to think carefully about how privilege, confidentiality, and professional responsibility translate into a world where a machine can draft a motion.

Heppner is not the last word on AI and privilege. It is the first chapter of a conversation that will play out over the next decade. What we do now — the policies we set, the tools we choose, the discipline we bring to client data — will shape how that conversation ends.

The headlines got this one wrong. But the underlying question the case raises is exactly right: if you are going to use AI in legal work, you need to understand exactly what you are using, what it does with your data, and whether your client's most sensitive information is actually as protected as you assume.

Because the privilege belongs to your client. And if you get it wrong, you're the one who has to explain why.

---

Disclaimer - This post is for informational purposes only and does not constitute legal advice. Practitioners should consult the underlying opinion in United States v. Heppner, 25-cr-00503 (S.D.N.Y. Feb. 17, 2026), and their state bar's most recent AI ethics guidance before making decisions about AI use in their own practices.