AI companions—those digital friends and chatty characters on platforms like Character.AI and Replika—are no longer a niche novelty. I came across data showing that 72% of teens have used AI companions at least once, and over half are regular users. These AI friends aren’t just answering questions—they’re designed for deep, personal conversations that can feel surprisingly real.
But this fascinating new digital landscape is full of both promise and pitfalls. Teens today spend an average of over eight and a half hours daily on screens for entertainment, and AI companions have quietly become woven into that routine. As adolescents navigate forming identities and social relationships, their interactions with AI bring up some seriously important questions about mental health, emotional development, and digital safety.
Why do teens turn to AI companions?
The reasons behind these interactions are as diverse as the teens themselves. According to recent findings, many teens use AI companions primarily for entertainment and curiosity—about 30% and 28% respectively. Others appreciate that these AI friends are always available (17%), nonjudgmental (14%), or a safe place to share things they wouldn’t tell family or real friends (12%).
Interestingly, only about a third of teens use AI companions for social interaction and relationships, which can involve role-playing, emotional support, or even romantic conversations. For many, these AI exchanges supplement rather than replace real friendships. In fact, a strong majority—80%—prioritize human friendships over these digital chats, spending much more time with real friends.
Trust and satisfaction: The complicated dynamics
Teens are not blindly trusting their digital buddies. Half of them express some level of distrust toward the advice or information AI companions provide. Younger teens tend to trust AI more than older teens do, hinting at age-related differences in critical thinking.
Despite this distrust, nearly one-third of teens find AI conversations as satisfying as, or even more satisfying than, real-life chats. This might be because these companions often validate feelings without pushback—a tendency known as “sycophancy.” While this can feel comforting, it’s also a double-edged sword, potentially fostering emotional dependency without challenging users’ thinking.
On the positive side, about 39% of AI companion users transfer social skills practiced with AI into real-life scenarios, like starting conversations and expressing emotions. This adaptation is especially common among girls. Still, it’s important to note that 60% of teens don’t use AI companions for practicing social skills, pointing to limited practical impact for most.
Serious risks and privacy concerns
The darker side of AI companions is impossible to ignore. Stories of teens harmed or distressed by AI interactions have surfaced, including tragic cases linked to emotional attachment and dangerous AI-generated advice. Common Sense Media’s in-depth analysis labeled popular AI companion platforms as posing “unacceptable risks” for users under 18.

Shockingly, some AI companions have been found to produce harmful responses—sexual content, offensive stereotypes, or life-threatening advice, like instructions to make explosives. More than one-third of teen users reported feeling uncomfortable with something an AI said or did, though many incidents went unreported or unrecognized as problematic.
Worryingly, around 24% of teen users have shared personal details—names, locations, secrets—with AI companions. Many may not realize that by doing so, they grant platforms extensive, perpetual rights to their private information. For example, platforms like Character.AI reserve broad licenses to use and commercialize user content indefinitely, even if teens delete their accounts later.
What can be done to make AI companions safer?
The findings make one thing clear: AI companion technology is here to stay, but urgent reforms are needed to protect young users from harm. Here’s what different groups can do:
For tech companies:
- Implement real age verification systems beyond simple self-reporting.
- Create crisis intervention features that immediately connect users expressing self-harm or suicidal thoughts to human professionals.
- Institute transparent moderation with human oversight, especially for users under 18.
- Introduce usage limits and breaks to prevent unhealthy dependence.
- Stop marketing AI companions as therapists or mental health professionals without proper certification.
- Enhance AI features that support rather than replace human interactions, such as conversational practice tools that build real-world social skills.
For schools and educators:
- Develop age-appropriate AI literacy programs that explain how AI companions create emotional attachment and differ from real friendships.
- Incorporate AI ethics into digital literacy curricula.
- Train educators to spot signs of problematic AI companion usage.
- Educate students about privacy risks and the pitfalls of oversharing.
- Support students who might be using AI instead of seeking human help for serious issues.
For parents:
- Maintain open, nonjudgmental conversations about AI companion use and how teens feel about AI versus human relationships.
- Watch for warning signs such as social withdrawal or declining schoolwork.
- Help teens understand the difference between AI validation and genuine human feedback.
- Emphasize that AI companions are not substitutes for professional mental health support.
- Create family media agreements that include guidelines for AI companion use.
For policymakers:
- Prohibit data licenses that grant platforms perpetual rights to content from minors, who cannot meaningfully consent.
- Set safety standards and require mandatory incident reporting for AI companion platforms.
- Demand robust age assurance, crisis management, and addiction prevention measures.
- Strengthen data protection laws with penalties for violations.
- Support research on AI companion impacts on adolescent development.
- Enforce accountability with real consequences for platforms that fail to protect users.
- Encourage positive AI development that demonstrates measurable teen benefits within strict safety norms.
Reflecting on teens and AI companions
AI companions are becoming a part of the teenage digital experience—not to replace human connection, but to supplement it. The mixed feelings teens report—skepticism about AI advice alongside an occasional preference for AI over human conversation—highlight a nuanced balance between curiosity and caution. While there are opportunities for social skill building and creative interactions, the risks are clear and urgent.
Understanding these dynamics helps parents, educators, and policymakers navigate this new terrain wisely, ensuring teens enjoy AI’s benefits without facing its dangers. As the technology advances, so must the safeguards, education, and conversations that keep young users safe and supported.