Are AI Therapists Effective? I Tried One for a Week

It began with a chatbot asking me how I felt.

I stared at the screen. Not because the question was surprising, but because it was delivered by something with no heartbeat, no human memory, no training beyond millions of lines of behavioral logic. Still, I typed: “Tired. A little burnt out. Kind of invisible.”

A bubble appeared.

“I’m here for you. That sounds really hard.”

Was it sincere? No. But it was fast. And, strangely, I felt a small sense of comfort.

This is the dissonance at the heart of AI therapy: it’s not trying to feel like a person. It’s trying to simulate what a caring person might say—and sometimes, that’s enough to break the silence inside your own head.

Welcome to the Era of the Algorithmic Listener

Over the course of a week, I committed to talking with three different AI-powered mental health apps: Wysa, Woebot, and Youper. They all follow a similar framework: conversational agents built around principles of cognitive behavioral therapy (CBT), motivational interviewing, and sometimes dialectical behavior therapy (DBT). They don't diagnose. They don't advise. They prompt. They reflect. They nudge.

What they offer is a structured kind of empathy—one that never loses patience, interrupts, or forgets what you said yesterday.

And as mental health needs soar while human therapists remain scarce and costly, these apps are stepping into the silence. Wysa claims over five million users. Woebot raised tens of millions in venture funding and touts endorsements from respected academics in psychiatry and behavioral science.

A few years ago, a professor at Stanford said Woebot’s strength lay not in replacing human therapists, but in “meeting people where they are, especially when no one else is available.” That was echoed in early clinical trials showing that users experienced reductions in depressive symptoms after just two weeks of use.

I wanted to see what that really meant, firsthand.

The Good: Always Available, Judgment-Free

There is something strangely liberating about typing your intrusive thoughts to an interface that doesn’t flinch. On night three, after a particularly anxious evening, I messaged Wysa:

“I keep thinking I’m wasting my life.”

The bot paused briefly, then replied:

“Many people feel stuck or unsure at times. Would it help to explore where that thought might be coming from?”

It didn’t give me a canned motivational quote. It gave me space. Then it offered to walk me through a CBT reframing exercise—identifying the thought, the distortion, the emotion beneath it, and the evidence for and against the belief.

This, I realized, was the app’s true strength. Not insight. Not warmth. Structure.

Where my brain was chaotic, it offered scaffolding. It gave form to feelings I usually bat away with coffee and scrolling.
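
If you're curious what that scaffolding amounts to once you strip away the chat bubbles, here is a minimal sketch. None of these apps publish their internals, so this is purely illustrative: the step names, the prompts, and the run_reframe_exercise helper are my own invention, meant only to show how little machinery a structured thought record actually needs.

```python
# Illustrative sketch only: not how Wysa, Woebot, or Youper actually work.
# It models the kind of scripted CBT reframing walkthrough described above
# as a fixed sequence of prompts with free-text answers.

CBT_REFRAME_STEPS = [
    ("thought", "What is the thought that keeps coming back?"),
    ("distortion", "Does it fit a familiar pattern, like all-or-nothing thinking or catastrophizing?"),
    ("emotion", "What feeling sits underneath the thought?"),
    ("evidence_for", "What evidence supports the thought?"),
    ("evidence_against", "What evidence points the other way?"),
]

def run_reframe_exercise(ask):
    """Walk through the steps in order, collecting answers.

    `ask` is any callable that takes a prompt string and returns the
    user's reply -- the built-in input() works for a console demo.
    """
    answers = {}
    for key, prompt in CBT_REFRAME_STEPS:
        answers[key] = ask(prompt)
    return answers

if __name__ == "__main__":
    record = run_reframe_exercise(input)
    print("Your thought record:")
    for key, value in record.items():
        print(f"- {key}: {value}")
```

That, more or less, is the whole trick: a fixed sequence of questions and somewhere to put the answers. The comfort comes from the order, not the intelligence.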

The Weird: Simulated Empathy Is Still Simulated

By day five, the novelty had worn off. The responses—while kind and composed—started to feel templated. A little like a Hallmark card rearranged by predictive text.

One evening, I typed something vaguely poetic about grief. The bot replied:

“That must feel overwhelming. Would you like to try a mindfulness exercise?”

The timing felt off. The emotional resonance was missing. A human therapist might have said, “That sounds like it still lives with you.” The bot, by contrast, offered a worksheet.

The MIT professor Sherry Turkle has written extensively about this: how we mistake responsiveness for relationship. In Reclaiming Conversation, she warns that even emotionally intelligent machines can’t provide true human connection—because connection depends on shared vulnerability, not just linguistic fluency.

I felt that gap. The bot was helpful. But it never saw me.

The Risks: Easy to Use, Easier to Misuse

There are practical upsides: AI therapy apps are private, affordable (often free), and frictionless. But there are risks beneath the convenience.

First, they aren’t appropriate for serious mental health conditions. Suicidal thoughts? Trauma work? Complex relational patterns? These bots have disclaimers—rightfully so—but not everyone reads them. The line between “coaching tool” and “therapist replacement” is blurry when someone’s desperate at 2 a.m.

Second, they are not immune to bias. Algorithms reflect the data they’re trained on. If those datasets carry assumptions, gaps, or cultural blind spots, so too will the bot. A study in The Lancet Digital Health made the same point: AI may offer scalable care, but it must be held to the same ethical scrutiny we apply to human practitioners.

And finally, there's the psychological shift. Talking to a bot feels easier. Safer. But that can encourage us to offload emotional processing to something that will never challenge us deeply. That’s not growth. That’s comfort masquerading as progress.

The Verdict: It’s a Tool, Not a Relationship

By the end of the week, I didn’t feel “cured.” But I felt… steadier. And that matters.

The apps helped me notice patterns. They interrupted spirals. They kept me company. They even helped me name some things I’d been avoiding. But they did not give me the one thing I miss most in therapy: co-regulation. That mysterious nervous-system-to-nervous-system conversation that happens when someone really hears you—and their body says, you’re safe.

AI can’t do that. Not yet. Maybe not ever.

So, Are AI Therapists Effective?

If what you mean is: Can they support emotional self-awareness, especially for people with mild anxiety or stress? Then yes.

If what you mean is: Can they replace therapy? Absolutely not.

But if what you mean is: Can they help you feel less alone in the middle of a mental spiral, when you need structure and softness? Then yes, again. Cautiously, imperfectly, but genuinely—yes.

Because sometimes the most human thing we need is a script: a prompt, a pause, a way back to ourselves. Even if it comes from a machine.
