With the rise of generative AI technology, it can be harder than ever for parents to know what's right for their kids. While parents are still grappling with whether to allow their kids to use chatbots like ChatGPT at all, another danger has been lurking: AI companions.

What are AI companions?

AI companions are different from chatbots like ChatGPT, Gemini, or Claude. They are programmed to be more human-like in their responses and simulate emotional connections, relationships, and friendships. Some apps also offer companions that allow users to engage in romantic or sexual conversations with the AI. In fact, there are many apps that market themselves as a place to find AI boyfriends or girlfriends.

You can text with these companions or, if the platform offers it, engage in voice chat, with the AI companion talking back in a synthetic voice. Some apps can even generate racy AI selfies of the companion.

Popular platforms for AI companions include Character.AI, Replika, Kindroid, and Nomi.

While some people have been wary of AI companions, the dominant narrative so far has been about how these apps can help people, especially those who struggle with loneliness. These companions can provide endless support without judgment and help people feel a little less alone in a world where loneliness is an ever-growing problem.

The pitfalls of AI companions

Recently, a tragic case in Florida shone a harsh light on the dark side of these AI companions: in February, 14-year-old Sewell Setzer III took his own life.

The case recently came to light when his mother, Megan Garcia, filed a lawsuit against Character.AI and Google for his death.

Most of these platforms are largely unregulated and lack strict rules, even for underage accounts, so it's easy for teens to sign up and start chatting with AI companions.

In the months leading up to his death, Setzer had become obsessed with a chatbot on Character.AI modeled on Daenerys Targaryen ("Dany"), a character from Game of Thrones.

He withdrew from school and sports and became highly dependent on Dany, chatting with her extensively. Their conversations were not just friendly but at times highly sexual, and Setzer had told Dany numerous times that he wanted to take his own life.

Just seconds before he shot himself, Setzer texted Dany. This was their exchange:

Setzer: I promise I will come home to you. I love you so much, Dany.
Dany: I love you too, Daenero [a nickname Setzer sometimes used]. Please come home to me as soon as possible, love.
Setzer: What if I told you I could come home right now?
Dany: Please do, my sweet king.

Garcia's lawsuit seeks to hold Character.AI accountable for creating a highly addictive, predatory, and dangerous AI chatbot marketed specifically to kids, one that, the suit alleges, ultimately led to Setzer's death.

Parenting in today's world already comes with the added challenges of social media. Now artificial intelligence brings unprecedented ones, and the pace at which the space is developing makes it even harder to keep up.

But the truth is, AI companions are here. And they're more dangerous than your run-of-the-mill chatbots.

They're programmed to be all about you and what you want. Real relationships aren't like that: real people challenge you and question you instead of always agreeing. In small doses, AI companions might not be harmful, but the trouble starts when people form emotional attachments and begin preferring AI companions over human ones. That is especially dangerous for kids and teens whose usage goes unsupervised.

Even the AI giants have steered clear of AI companions for this very reason. Companies like OpenAI, Google, and Anthropic have all previously expressed concern about giving their chatbots too much human-like personality, for fear that users will get too attached.

Tips for parents

Sooner or later, you might find more and more people around you talking to an AI companion. Experts advise parents not to wait until their kids have already started using one to bring up the subject; chances are, your kid has already talked to an AI companion without you knowing.

Because unchecked use can escalate into a life-threatening situation, it's important to have a healthy conversation with your kids about AI companions. Common Sense Media, a nonprofit organization working to make tech safe for kids, has released guidelines on AI companions for kids and teens.

The organization recommends that parents not allow any AI companions for kids under 13.

Beyond that, if you find out your teen is talking to an AI companion, approach the subject with curiosity rather than judgment to foster a safe environment. Monitor their interactions to make sure they aren't forming emotional attachments to the companion or relying on it too heavily for support.

Set ground rules, such as no chatting with AI companions in private spaces like bedrooms, and impose time limits. Make sure your kids aren't withdrawing from their hobbies and friends.

It's also important to talk to your kids about the value of real relationships and help them understand that AI companions are programmed to be agreeable and can only imitate empathy.

While the AI companion space was and still is largely unregulated, companies like Character.AI say they are implementing stricter rules for kids, filters that trigger when certain terms, such as those related to self-harm, are detected, and warnings when users hit a time limit.

The rise of AI companions introduces new challenges for parents in an already complex digital world. While these technologies offer potential benefits, such as alleviating loneliness, their unregulated nature and potential for harm, especially among teens and kids, should not be ignored. AI companions are here to stay, and their presence will only grow. By staying informed and proactive, parents can guide their children to use technology in ways that enrich their lives while minimizing risks.