The AI Use Case for Therapists Nobody Is Writing About Yet

Every AI article aimed at therapists right now has the same premise: save time on notes, reduce admin burnout, free up more hours for clients.

That's not wrong. It's just not the interesting part.

There's a different use case that almost nobody is writing about. Earlier this year, STAT News profiled a clinical psychologist who uses AI as a thinking partner for case consultation and reflection — not for documentation, not for scheduling, not for billing. For the actual clinical work. Working through a stuck case. Reviewing published material. Thinking out loud about a treatment formulation before bringing it to supervision.

She works only with de-identified information or published cases. The AI doesn't know who her clients are. But it can engage with clinical complexity in a way that's genuinely useful.

This isn't fringe behavior. More than half of psychologists reported using AI professionally in 2025. And yet: no state licensing board has issued specific guidance on this clinical consultation use case. Neither have the professional associations. Not APA. Not NASW. Not AAMFT. They've been focused on AI in direct client contact. The consultation use case — using AI the way you'd use a peer — largely hasn't been addressed.

That gap matters. It means therapists are either avoiding the tool entirely out of caution or using it without any framework at all. Both are the wrong answer.

What the clinical consultation use case actually looks like

The distinction that matters here is between documentation and consultation.

Documentation involves identifying information — names, dates, session content, diagnostic impressions tied to a specific individual. That's where the ethics concerns around confidentiality concentrate. It's also why most current AI guidance in behavioral health is focused on note-writing tools.

Consultation is different. Good clinical consultation has always involved presenting cases in de-identified form — removing names, ages, identifying details, and any combination of circumstances that could identify someone — and then discussing the clinical picture. Peer consultation groups work this way. Supervision works this way. The same principle applies when AI is the thinking partner.

If you're presenting a case to an AI tool using de-identified information, you're operating in broadly the same ethical territory as a peer consultation call. The AI has no way of knowing who the client is. You're thinking through the clinical dynamics, not disclosing a person.

That's a use case worth taking seriously.

The guardrails that actually matter

The absence of formal guidance doesn't mean there's no framework. It means you have to build one yourself. Here's what a rigorous approach looks like.

De-identify before you type anything. Age, general presenting concern, and clinical themes are fine. Name, location, employer, relationship structure, or any combination of details that could identify the client — remove them. Apply the same standard you'd use presenting to a peer consultation group.

Use published case material when you can. If you're learning a new clinical modality or working through a conceptual question, there's a library of published cases across every major framework. AI can engage with published material at a high level without touching client-specific information at all.

Don't build a clinical record in the chat. A long conversation thread about a specific client becomes its own documentation problem — something that exists outside your EHR, that you don't control, and that could be discoverable in a licensing complaint or legal matter. Keep consultation interactions contained. Don't create a running record.

Know what the tool is and isn't. AI is useful for reflecting clinical logic back to you, surfacing frameworks you haven't considered, and stress-testing a treatment formulation. It is not a supervisor. It doesn't have clinical judgment. It can't identify when something is outside your scope. Treat it like a very well-read thinking partner who has never sat with a client.

Check your platform's data practices. Not all AI tools handle user input the same way. If you're regularly engaging with any clinical content — even de-identified — use a platform that doesn't train on your inputs. Read the terms. This matters.

Why this matters for your practice

Solo practice is isolating. Many therapists don't have regular access to peer consultation, and formal supervision ends at licensure. If AI can serve as a reflection tool — something to think out loud with when you're stuck on a case — that's genuinely valuable for the quality of care you provide. Not as a replacement for human consultation, but as a supplement when human consultation isn't available.

The therapists building a thoughtful, defensible AI consultation practice now will be ahead of the ethics conversation — not scrambling to catch up when boards eventually issue guidance. And they will issue guidance. The question will be whether your practice was clinically coherent, not whether you used the tool at all.

The note-taking apps are fine. But the therapists thinking carefully about AI as a clinical tool are doing something different. That conversation is worth having.

If you want content like this — specific, clinical, and not recycled from a generic healthcare newsletter — the therapypractice.ai email list is where it lives. We cover the business and clinical systems side of independent practice, written for people who are done with vague advice.

Tags: Practice Management, Documentation
Publish Date: March 5, 2026