Character.AI is taking new steps to enforce child safety on its platform. The chatbot company will no longer allow teenagers to engage in back-and-forth conversations with its AI-generated characters, parent company Character Technologies announced on Wednesday.
“We do not take this step of removing open-ended Character chat lightly – but we do think that it’s the right thing to do given the questions that have been raised about how teens do, and should, interact with this new technology,” the company said in its statement.
The move comes after a string of lawsuits alleged the app played a role in teen suicides and mental health harms. Character.AI is a conversational AI company founded in 2021 by former Google engineers Noam Shazeer and Daniel De Freitas.
The platform enables users to design and interact with virtual characters that have distinct personalities, voices, and backstories, powered by large language models, though the company has not publicly disclosed specific model details. Its mission is to make conversational AI accessible and creative for everyone, a goal reflected in a community of millions who use the service for storytelling, education, and companionship.
The company launched its public beta in 2022 and has since grown rapidly, introducing features such as subscription plans and social tools. However, this growth has also brought challenges, including moderation issues, regulatory scrutiny, and the high costs of operating large-scale AI systems. Despite these complexities, Character.AI remains one of the most influential platforms shaping personality-driven AI in 2025.
Character.AI will make the change by Nov. 25, and teens will have a two-hour chat limit in the meantime. Instead of open-ended conversations, teens under 18 will be able to create videos, stories and streams with characters.
The decision by Character.AI to restrict open-ended chats for teens marks a significant turning point in the regulation and ethics of conversational AI. It reflects a growing awareness within the tech industry that powerful generative systems, while creative and engaging, can also pose emotional and psychological risks, especially to younger users.
This sets a precedent for other platforms to implement clearer age-based restrictions and ethical safeguards, moving beyond reactive moderation toward proactive governance. The decision also underscores the growing societal expectation that AI systems must not only entertain but also uphold human well-being.
For Character.AI, the challenge will be maintaining its appeal and creative freedom while ensuring that interaction with AI remains healthy and developmentally appropriate. This step represents a necessary evolution in the AI landscape, one that prioritizes ethical responsibility over unrestricted growth and begins to define what safe, human-centered AI interaction should look like in the years ahead.


