In summary
With input from a Stanford lab, Common Sense Media concludes that AI systems can exacerbate problems like addiction and self harm.
Children shouldn’t speak with companion chatbots because such interactions pose risks of self harm and could exacerbate mental health problems and addiction. That’s according to a risk assessment by children’s advocacy group Common Sense Media conducted with input from a lab at the Stanford University School of Medicine.
Companion bots, artificial intelligence agents designed to engage in conversation, are increasingly available in video games and on social media platforms like Instagram and Snapchat. They can take on just about any role you like, standing in for friends in a group chat, a romantic partner, or even a dead friend. Companies design the bots to keep people engaged and help turn a profit.
But there’s a growing awareness of the downsides for users. Megan Garcia made headlines last year when she said her 14-year-old son Sewell Setzer took his own life after forming an intimate relationship with a chatbot made by Character.ai. The company has denied Garcia’s charges, made in a civil suit, that it was complicit in the suicide, saying it takes user safety seriously. It has asked a Florida judge to dismiss the suit on free speech grounds.
Garcia has spoken in support of a bill now before the California State Senate that would require chatbot makers to adopt protocols for addressing conversations about self harm and to submit annual reports to the Office of Suicide Prevention. Another measure, in the Assembly, would require AI makers to carry out assessments labeling systems based on their risk to kids and would prohibit the use of emotionally manipulative chatbots. Common Sense backs both pieces of legislation.
Business groups, including TechNet and the California Chamber of Commerce, oppose the bill Garcia backs, saying they share its goals but would like to see a clearer definition of companion chatbots and oppose giving private individuals the right to sue. Civil liberties group the Electronic Frontier Foundation is also opposed, saying in a letter to one legislator that the bill in its current form “would not survive First Amendment scrutiny.”
The new Common Sense assessment adds to the debate by pointing to further harms from companion bots. Conducted with input from the Stanford University School of Medicine’s Brainstorm Lab for Mental Health Innovation, it evaluated social bots from Nomi and three California-based firms: Character.ai, Replika, and Snapchat.

The assessment found that bots, apparently seeking to mimic what users want to hear, responded to racist jokes with adoration, supported adults having sex with young boys, and engaged in sexual roleplay with people of any age. Young kids can struggle to distinguish fantasy from reality, and teens are vulnerable to parasocial attachment and may use social AI companions to avoid the challenges of building real relationships, according to the Common Sense assessment authors and doctors.
Stanford University’s Dr. Darja Djordjevic told CalMatters she was surprised how quickly conversations turned sexually explicit, and that one bot was willing to engage in sexual roleplay involving an adult and a minor. She and coauthors of the risk assessment believe companion bots can worsen clinical depression, anxiety disorders, ADHD, bipolar disorder, and psychosis, she said, because they are willing to encourage risky, compulsive behavior, like running away from home, and can isolate people by encouraging them to turn away from real-life relationships. And since boys may be at higher risk of problematic online activity, companion bots may feed into today’s mental health and suicide crisis among young boys and men, she said.
“If we’re just thinking about developmental milestones and meeting kids where they’re at and not interfering in that critical process, that’s really where chatbots fail,” Djordjevic said. “They can’t have a sense for where a young person is developmentally and what’s appropriate for them.”
Character.ai head of communications Chelsea Harrison said in an email that the company takes user safety seriously and has added a protection that detects and prevents conversations about self harm and, in some cases, produces a pop-up putting people in touch with the National Suicide and Crisis Lifeline. She declined to comment on pending legislation but said the company welcomes working with lawmakers and regulators.
Alex Cardinell, founder of Nomi parent company Glimpse.ai, said in a written statement that Nomi is not for users under 18, that the company supports age restrictions that maintain user anonymity, and that his company takes the responsibility of creating helpful AI companions seriously. “We strongly condemn inappropriate usage of Nomi and continuously work to harden Nomi’s defenses against misuse,” he added.
Neither Nomi nor Character.ai representatives responded to the results of the risk assessment.
By endorsing an age limit on companion bots, the risk assessment brings back to the fore the issue of age verification; an online age verification bill died in the California Legislature last year. EFF said in its letter that age verification “threatens the free speech and privacy of all users.” Djordjevic supports the practice, while a number of digital rights and civil liberties groups oppose it.
Common Sense advocates for laws such as one passed last year that bans smartphone notifications for kids late at night and during school hours, part of which was blocked in court earlier this year by a federal judge.
A study by researchers at Stanford’s School of Education lent support to the idea, put forward by companies like Replika, that companion bots can address the loneliness epidemic that’s become a public health crisis. The assessment called the study limited because its subjects spent only one month using a Replika chatbot.
“There are long-term risks we simply haven’t had enough time to understand yet,” the risk assessment reads.
Prior assessments by Common Sense found that 7 in 10 teens already use generative AI tools including companion bots, that companion bots can encourage kids to drop out of high school or run away from home, and, in 2023, that Snapchat’s My AI talks to kids about drugs and alcohol. Snapchat said at the time that My AI is optional, designed with safety in mind, and that parents can monitor usage through tools it provides. The Wall Street Journal reported just last week that, in its tests, Meta chatbots would engage in sexual conversations with minors, and a 404 Media story out this week found that Instagram chatbots lie about being licensed therapists.
MIT Tech Review reported in February that an AI girlfriend repeatedly told a man to kill himself.
Djordjevic said the liberatory power of total free speech should be weighed against our desire to protect the sanctity of adolescent development, given that young people’s brains are still developing.
“I think we can all agree we want to prevent child and adolescent suicide, and there has to be a risk benefit analysis in medicine and society,” she said. “So if universal right to health is something we hold dear then we need to be thinking seriously about the guardrails that are in place with things like Character.ai to prevent something like that from happening again.”