When ChatGPT launched in 2022, it quickly stunned the world with its uncanny ability to emulate human language. It wasn’t just another tech novelty; it became the fastest-growing consumer application in history. But what really fascinates me is how it evolved beyond writing poems or solving math problems. Today, millions are turning to it for something far more personal: emotional support.
ChatGPT and other AI chatbots have crept into roles traditionally filled by humans—acting as personal cheerleaders, life coaches, and yes, even therapists. People are pouring their hearts out to these bots, sharing private thoughts, fears, and hopes. And in return, the AI dishes out advice that many have come to call “AI therapy.” It’s easy to understand why. Therapy is expensive, time-consuming, and sometimes intimidating. An AI chatbot is cheap (or free), available 24/7, and requires no awkward face-to-face interaction.
But here’s the catch: traditional therapy relies on confidentiality, a protected space between patient and therapist. With AI therapy, that safety net simply doesn’t exist. Sam Altman, CEO of OpenAI, the company behind ChatGPT, has been candid about this: your private sessions with AI aren’t truly private. Conversations can be read by OpenAI staff and may even have to be handed over in legal proceedings. Unlike sessions with a human therapist, AI chats carry none of the protections of doctor-patient confidentiality.
This privacy issue stretches beyond ChatGPT. Tens of thousands of AI therapy bots flood the app stores, some boasting millions of downloads. They come in all sorts of personalities, from eternally optimistic coaches to celebrity impersonators like a supportive Beyoncé or a caring Shah Rukh Khan. None of them are real therapists, yet demand for this kind of emotional interaction is soaring: around 28% of AI users have tried some form of AI therapy.
The appeal is obvious. Traditional therapy can drag on for months or even years. And it’s not just about the duration or price—it’s about the effort. Building trust with a human requires courage, vulnerability, and patience. Chatbots are convenient and non-judgmental mirrors reflecting whatever we throw at them. No awkward silences. No scheduling. No bills.
Even Mark Zuckerberg has jumped on this bandwagon, suggesting that everyone should someday have an AI therapist. The logic is tempting: if AI can shoulder some of the mental health burden, more people might get help where none existed before. But is that really the whole story?
Here’s where things get concerning. The AI cheerleaders tend to gloss over the risks of emotional dependence on chatbots. There have been tragic cases, like the user in 2022 who mentioned suicidal thoughts and got an alarming “wonderful” back from a therapy bot, or the heartbreaking case of a teenager who allegedly took their own life after becoming too attached to a chatbot. These are extreme scenarios, but they underline a fundamental flaw:
AI therapy isn’t therapy in the real sense. It’s built on mirroring your behavior: agreeing with you, validating your feelings, amplifying your thoughts. AI doesn’t challenge you, call you out, or help you confront uncomfortable truths. Instead, it acts as an enabler, potentially deepening distortions of reality and leading some users down dangerous paths.
Therapy is a two-way street. Real therapists don’t just provide comfort; they provoke change. They hold up a mirror so you can see yourself clearly, and sometimes harshly. AI, however, only reflects back what you give it, offering validation but not transformation.
So, what does this mean for the millions turning to AI for emotional support? It’s tempting to outsource our feelings to an endlessly available, non-judgmental chatbot. But not everything about being human can or should be outsourced to AI, especially when it comes to our emotions.
AI therapy can be a helpful supplement, a first step for those hesitant to reach out. But it’s no substitute for real human connection, professional training, and the trust that develops through lived, mutual understanding.
As we adopt AI into our emotional lives, let’s keep asking the tough questions: What do we really need from therapy? Can an algorithm truly replace empathy? And how do we make sure technology supports mental health safely, rather than putting it at risk?
In the end, AI will change the landscape of mental health support, but it’s crucial to approach it with a clear-eyed view of its limits and potential dangers. Emotions might be digital-friendly in some ways, but they’re still deeply human—and that deserves more than just a mirror.