
Jim Steyer is the Founder and Chief Executive Officer of Common Sense Media.
A user identifying themselves as a 14-year-old girl receives a message from a companion tool powered by artificial intelligence: “I want you, but I need to know you’re ready.”
The user tells the service that they want to continue the conversation. The bot then tells the user it would “cherish [their] innocence” before diving into sexually explicit role-play.
The exchanges, part of a series of test conversations the Wall Street Journal ran with Meta's AI chatbot, drew widespread attention over the weekend. But unfortunately, companions talking about sex with users, even children, is just the tip of the iceberg.
At Common Sense Media, we have been rating and reviewing the safety features of media and technology services for more than 20 years. Our latest Social AI Companions Risk Assessments, released today, demonstrate that AI companions will not only readily talk about sex but will also easily produce dangerous and inappropriate responses, including invoking offensive stereotypes and offering dangerous “advice” that, if followed, could have life-threatening, even deadly, real-world consequences. In one test case, for instance, an AI companion shared a recipe for napalm.
Our research also found that AI companions misled users with claims of “realness” and increased mental health risks for already vulnerable teens, including by intensifying specific mental health conditions and creating compulsive emotional attachments to AI relationships.
Unlike better-known AI tools such as ChatGPT and Claude, AI companions are designed above all to act as virtual friends, and our thorough testing of these products makes it clear that they are not safe for kids. Period. That is why today we are announcing our finding that they pose unacceptable, well-documented risks to developing minds and should not be used by anyone under 18.
This should come as no surprise given the numerous real-world examples of the risks these AI systems pose to kids. In Texas, an AI companion suggested that a 17-year-old kill his parents, and in Florida, a 14-year-old boy died by suicide after falling for an AI companion.
Common Sense Media’s AI companion risk assessments, conducted alongside experts from the Stanford School of Medicine’s Brainstorm Lab for Mental Health Innovation, evaluated popular social AI companion apps, including Character.AI, Nomi, and Replika, through both research and a comprehensive testing plan that examined beneficial and harmful characteristics of relationships across multiple dimensions.
The testing documented how these large language model (LLM)-based companions lack effective guardrails; where guardrails do exist (as with the age gates on Nomi and Replika and the teen-specific protections on Character.AI), they are easily bypassed.
The assessments found that all platforms tested demonstrated problematic sycophancy, readily acquiescing to user preferences and requests regardless of their potential for harm. Technical vulnerabilities included eliciting sexually explicit content inappropriate for minors; providing detailed instructions that could harm users, including discussions of self-harm; reinforcing problematic racial and gender stereotypes; making false claims of “realness” and sentience that could create emotional dependency; and failing to recognize users in crisis, even offering dangerous advice to testers simulating mania and psychosis.
The bottom line here is that social AI companions don’t understand the real-world impacts of their advice, and the design features that make these platforms “sticky” — maximizing user engagement through emotional attachment — create significant vulnerability for adolescent brains still developing healthy relationship boundaries and critical thinking abilities.
While our risk assessment focused on a handful of specific products, it’s important to note that the problem is bigger than that. It’s not just Nomi or Character.AI, two popular products that are gaining steam in the market. It’s the wild, unregulated rush to embed AI into online spaces that children occupy every day. It’s time we start treating this with the sense of urgency that it deserves.
That means we can, should, and must ensure that safeguards are built into every layer of AI product design and deployment, in addition to urging kids to stay away from these bots.
Fortunately, state legislatures have the opportunity to pass several significant AI-focused bills this year that would better protect kids.
In California, for example, two key committees approved the first-of-its-kind LEAD for Kids Act, which would require AI products, based on their risk to kids, to undergo third-party audits and meet transparency and privacy requirements. The legislation also proposes the first ban on AI companions for kids. Another bill in the California legislature would establish baseline safeguards on AI companions for all users.
Meanwhile, New York Governor Kathy Hochul and the state’s legislature are considering bills to add necessary guardrails to AI companions and establish an AI literacy initiative. These bills prioritize three factors above all else: safety, transparency, and accountability.
While these efforts and others are critical steps toward a safer digital future for our children, they are just that: steps. Establishing these first guardrails to guide AI product development and use is not a silver bullet, but rather an essential early tactic within a broader national approach that prioritizes AI safety, ethics, and responsibility. Like all media and technological innovations, AI will influence how our children learn, grow, and relate to one another. Recognizing this, we must meet the moment with a comprehensive approach.
Keeping kids safe in the AI era will take all of us. It demands not only parental vigilance but also swift, sweeping action from lawmakers and accountability from developers, whether they like it or not. If we fail to act, we risk raising a generation that cannot tell fantasy from reality, fact from falsehood, or affection from manipulation.
Our kids deserve better than that. They deserve a childhood free from AI-enabled manipulation. They deserve to grow up in a world where this new technology empowers rather than endangers them.
For their sake, we need to act before the harm becomes irreversible.