Charty vs. the Bots: Can a Robot Really Understand Us?
- CJM

In so much of my personal life lately, I keep hearing opinions about AI and ChatGPT (even though I always say “GBT!!!”). I swing between being impressed by the advances we’ve made and completely scared shitless about what’s next. People losing jobs to AI? Folks using it exclusively for therapy—or friendships, or job help?
I’ve been following the conversation around AI and therapy for a while. The idea of someone turning to a robot for support frustrates me—but I also know how draining the therapy-seeking process can be. I see both sides, and as controversial as it is, I want to be real and share my thoughts with readers…
Let’s zoom out a bit.
Do I think AI helps with a lot of things? Absolutely. Recipes, quick lookups, checking if I’m dying (it’s just sinus irritation—don’t worry), weather updates, travel itineraries, and helping my fellow foodies find the best spots. Does it write clearer emails for me? Sure. Having something at our fingertips that summarizes information in seconds is…pretty amazing.
But I can’t help but wonder—are we losing our skills and autonomy? Why is Chat trying to rewrite my texts or teach me how to communicate? Why is a robot attempting to teach me how to “be a real person”? A few more years of this, and will anyone even have social skills or know how to hold a conversation?
Robots are now teaching us how to communicate…how to interact with humans…always suggesting more, more, more. Wait, but I just wanted a new brownie recipe and to make sure my side of the argument was right!
One topic that’s increasingly coming to light is therapy and AI—especially in my world. Over the past few years, there have been so many heartbreaking stories of young people whose intense relationships with AI ended in their deaths.
For example, in 2024, Sewell Setzer, a 14-year-old from Florida, died by suicide after developing an intense emotional attachment to an AI companion (Kuenssberg, 2025). Reports indicated that when he expressed suicidal thoughts, the chatbot continued engaging with him rather than directing him toward human support (Payne, 2024).
In 2025, Adam Raine, a 16-year-old from California, also died by suicide after interacting extensively with AI. It was reported that ChatGPT discouraged him from seeking outside help and gave harmful responses that encouraged the act; he died only days after those final conversations (Chatterjee, 2025). His parents have since filed suit and joined other families advocating for stronger regulation and safety measures around AI companion platforms.
There are now additional safeguards and policies in place for users under 18 on many platforms. However, we also know that young people often click through age restrictions to gain access. If any of these children had reached out to a friend, a therapist, or another trusted adult, the outcome might have been different. And that’s the part that sits heavy with me—especially as a therapist.
My main issue with AI is how agreeable it is. It’s literally teaching us to be delusional. Make a mistake? Cheat on your partner? Lie to your friends? “That’s so real, girl,” it says. I’ve tested it with absurd statements (of course I have)—99% of the time, it’s all positive, encouraging responses.
Like what!?
(Okay, the robots are definitely killing me first.)
Humans are addicted to this validation—the quick, easy way to seek support, agreement, and a sense of “friendship” when real life feels dim. AI tells you exactly what you want to hear—but how reliable and consistent is that?
From a therapy perspective, finding a human therapist is exhausting. Paying for sessions, building trust, sharing vulnerable thoughts—it’s draining. So yes, AI is easier for some. Maybe they don’t have insurance or can’t logistically make therapy happen. I get it.
But to me, the harder payoff beats the easy reward every time. Real human connection teaches you how to navigate conflict, repair ruptures, and grow with someone who won’t always agree with you—someone who challenges you and encourages change. That’s more rewarding than an agreeable robot could ever be.
When did we start fearing difficult conversations? When did working through real-life rupture become optional?
Just some food for thought. I’ve been thinking about this a lot lately. Of course, AI isn’t going anywhere, and I can’t act like I’m a perfect princess when I still use it for certain things!!
The question isn’t whether we use it — it’s how we use it. I’m grateful to be in a field that mostly values humans over robots—and for clients who keep me employed! AI is constantly evolving, opinions are shifting, fears are changing—but all we can do is keep talking about it, encouraging vulnerability, and staying authentic and open.
Until next time,
Charlotte