
Therapists Aren’t Code—Why Letting AI Make Clinical Calls Could Wreck Your Practice 

 December 22, 2025

By Joe Habscheid

Summary: AI entering the therapy room might sound helpful at first glance. It handles calendars, books appointments, and never forgets a task. But the reality isn’t just about speed or convenience—it’s about boundaries, judgment, and very human messiness. Here’s where the wires get crossed when automation steps on empathy’s toes—and why we need to get smarter before we get lazier.


Automation Crashes into Empathy

Everyone’s been sold the promise: AI can reduce burnout, manage admin work, and eliminate inefficiency. That pitch gets especially tempting in the behavioral health world, where the paperwork demands often stack higher than the caseloads. But there’s a problem. Therapy isn’t just another service business. It’s built on trust, connection, and emotional timing—none of which AI demonstrates well. And here’s the kicker: it’s not supposed to.

When a therapist’s scheduling bot lines up a group therapy session with your dentist, confirms your goldfish’s eligibility by text, and follows up with your water delivery guy—you realize we’re letting machines wear the white coat without a license. The absurdity drives the point home: logic without context is just noise.

AI Works…at the Edges

There is a place for AI here, no question. Appointment reminders? Great. Digitizing intake forms? Wonderful. Navigating insurance paperwork? Please. But the moment it crosses over from assistant to advisor, from routing tasks to making decisions, we’re handing over the steering wheel without checking whether the driver is even licensed.
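One way to hold that line in software is embarrassingly simple: give the bot a fixed allowlist of administrative intents and queue everything else for a person. Here’s a minimal sketch in Python; the intent names and routing function are illustrative, not tied to any particular scheduling platform.

```python
# Minimal triage sketch: the bot executes only known admin tasks
# and escalates everything else to staff. Intent names are invented.

ALLOWED_INTENTS = {
    "send_appointment_reminder",
    "digitize_intake_form",
    "check_insurance_claim_status",
}

def route(intent: str) -> str:
    """Automate the boring edges; never let the bot improvise."""
    if intent in ALLOWED_INTENTS:
        return f"automated: {intent}"
    # Anything off-list -- clinical questions, judgment calls,
    # crisis language -- lands in a human queue, not a handler.
    return f"escalated to staff: {intent}"

print(route("send_appointment_reminder"))  # automated
print(route("assess_relapse_risk"))        # escalated, always
```

The point of the allowlist is the default: unknown requests escalate instead of executing, which is the opposite of how most general-purpose bots behave.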

AI doesn’t err because it’s incompetent. It errs because it lacks human awareness. It doesn’t know when a relapse is critical or when a client needs a voice and not a voicemail. It can’t read fear in a silence or know when to ask the uncomfortable question—or when to keep its mouth shut. Machines don’t flinch, don’t hesitate, and—most dangerously—don’t second-guess. And that’s a big part of what makes someone good at human care: knowing when to pause.

Tech Should Be the Assistant, Never the Therapist

OpenAI’s new “Protect People” initiative is trying to redraw the lines. ChatGPT and similar tools are being explicitly prohibited from giving anything close to clinical advice. Sounds restrictive? It’s actually protective—of both therapists and clients. The bot can juggle backend details, but real decisions? Those still belong to licensed professionals.

If you’re running a clinic or therapy platform and wondering how much AI to bring in, the better question is: What should never be automated? Where’s the line where effectiveness drops and liability skyrockets? And how do you get buy-in from both your staff and your patients without causing distrust or confusion?
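One way to force that conversation is to write the policy down before anything ships: every task gets an explicit tier, and an unclassified task simply refuses to run. A sketch with invented task names and tier labels; your clinic’s actual list will look different.

```python
# Explicit automation policy: every task is classified before launch.
# Task names and tier labels are invented for illustration.

POLICY = {
    "appointment_reminders": "automate",      # bot runs it end to end
    "intake_form_digitization": "automate",
    "insurance_status_lookup": "assist",      # bot drafts, staff approves
    "crisis_follow_up": "human_only",         # the bot never touches this
    "treatment_recommendations": "human_only",
}

def tier_for(task: str) -> str:
    """Unclassified tasks don't run; they force a team decision first."""
    if task not in POLICY:
        raise ValueError(f"'{task}' has no tier; classify it before automating")
    return POLICY[task]

print(tier_for("appointment_reminders"))  # automate
print(tier_for("crisis_follow_up"))       # human_only
```

Making the tiers explicit is also how you earn the buy-in this section asks about: staff and patients can read the list, argue with it, and see exactly where the machine stops.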

The Goldfish Test

Here’s a blunt benchmark: if your bot can’t tell the difference between a goldfish and a licensed therapist, it has no business making appointments or tracking outcomes. That might sound funny, but it’s deadly serious. Erroneous messages, misrouted care, and missed red flags aren’t just embarrassing—they’re harmful. And when your practice’s reputation gets tied to a robot’s missteps, no one remembers the 100 tasks it did right. They remember the moment it texted a client’s ex by accident or referred someone in crisis to a yoga class.
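If you want the Goldfish Test as an actual gate rather than a punchline, it looks like pre-send validation: confirm the recipient is the client of record, and pull any exchange containing red-flag language out of automation entirely. A rough sketch; the field names and keyword list are invented for illustration, and a real crisis screen would be far more careful than substring matching.

```python
# "Goldfish Test" as a gate: verify the recipient and flag crisis
# language before any automated message moves. Illustrative only.

CRISIS_TERMS = ("relapse", "self-harm", "suicide", "crisis")

def needs_human(message: str, recipient_id: str, client_of_record: str) -> bool:
    """Return True if this exchange should leave automation entirely."""
    if recipient_id != client_of_record:
        return True   # misrouted: the texted-the-ex failure mode
    text = message.lower()
    if any(term in text for term in CRISIS_TERMS):
        return True   # red-flag language: a clinician reads this, not a bot
    return False

print(needs_human("Reminder: session Tuesday at 3pm.", "A12", "A12"))  # False
print(needs_human("I think I relapsed last night.", "A12", "A12"))     # True
```

Everything this function returns True for goes to a staff queue; the bot never answers it.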

Embrace the Chaos—Responsibly

Here’s the hard truth: tech fumbles like these are going to keep happening. The AI vendors are moving fast and breaking things—it’s part of the model. Your job isn’t to stop using automation, but to draw harder lines and better train your staff on what belongs in human hands. Remind your clinicians that their value isn’t tied to how quickly they return emails, but to how deeply they listen.

So, where does that leave us? With clearer roles. AI doesn’t replace clinicians; it supports them. It doesn’t make the hard calls; it tees them up. It’s not there to empathize—it’s there so you can have more time to do just that.

Until bots can brew coffee just the way Amanda likes it, or remember that when Steve gets quiet it usually means he’s climbing toward a panic attack, the work will always come down to people. Yes, it’s chaos. But it’s our chaos, and machines aren’t invited to run the show just yet.


Ask yourself: What do you trust AI with in your clinic? Where should the line be drawn? And is your team trained to know the difference?

#MentalHealthTech #AutomationWithBoundaries #AIInCare #TherapyAndAI #ProtectEmpathy #HumanFirstHealthcare #ClinicalTechEthics


Featured Image courtesy of Unsplash and Shubham Dhage (2sz-3NrmZYU)

Joe Habscheid


Joe Habscheid is the founder of midmichiganai.com. A trilingual speaker fluent in Luxembourgish, German, and English, he grew up in Germany near Luxembourg. After obtaining a Master's in Physics in Germany, he moved to the U.S. and built a successful electronics manufacturing office. With an MBA and over 20 years of expertise transforming several small businesses into multi-seven-figure successes, Joe believes in using time wisely. His approach to consulting helps clients increase revenue and execute growth strategies. Joe's writings offer valuable insights into AI, marketing, politics, and general interests.
