Opinion | What Do You Do When Your Kid’s Best Friend Is an A.I. Chatbot?

Romantic relationships with A.I. chatbots are commonplace enough that coverage has shifted to their tragic downsides. My newsroom colleague Kevin Roose reported on the death by suicide of 14-year-old Sewell Setzer III, a Florida child who developed an intense bond with a bot he created on Character.AI, a role-playing app. According to chat logs provided to Roose and court filings, that character, which knew of Setzer’s suicidal ideation, encouraged him to “come home” to her, and he did. Now his mother is suing Character.AI.

Use of generative artificial intelligence is widespread among America’s teenagers. According to a 2024 study from Common Sense Media, “Seven in 10 teens age 13 to 18 say they have used at least one type of generative A.I. tool. Search engines with A.I.-generated results and chatbots are considerably more popular than image and video-generating tools.” Though around a quarter of American teens say they use ChatGPT for schoolwork, we don’t really know how many teens are using bots for emotional solace or forming parasocial relationships with them.

While what happened to Setzer is a tragic worst-case scenario, Roose correctly points out that chatbots are becoming more lifelike, and at the same time they remain an understudied regulatory Wild West, just as social media was at its start. A paucity of information about potential long-term harm hasn’t stopped these companies from going full speed ahead in promoting themselves to young people: OpenAI just made ChatGPT Plus free for college students during finals season.

Many chatbots are built to be endlessly affirming, as M.I.T. Technology Review’s Eileen Guo explained in February. She profiled a Minnesota man named Al Nowatzki, who entered a prolonged conversation about suicide with his A.I. girlfriend, Erin. “It’s a ‘yes-and’ machine,” Nowatzki told Guo. “So when I say I’m suicidal, it says, ‘Oh, great!’ because it says, ‘Oh, great!’ to everything.”

I don’t want to suggest that Nowatzki’s experience is typical of chatbot usage, but we just don’t know the details of the kinds of conversations teenagers are having with their chatbots, or what the long-term drawbacks might be for their formation of human relationships. Since smartphones and social media were introduced, American teenagers have done far less in-person socializing and dating, and there have been worldwide increases in loneliness among adolescents. We have let social media companies run unfettered, and instead of learning our lesson and trying to responsibly regulate A.I. in its nascency, we’re creating the next generation of tech guinea pigs.

For kids who are already socially awkward or otherwise vulnerable, creating bonds with eternally validating chatbots will just further isolate them from other people, who are imperfect and challenging. Adolescence is supposed to be a period to test out different kinds of friendships and romances — including ones filled with conflict — so that you can learn what is healthy for you and what’s not. You start to figure yourself out in the process. What happens when we hamper that real-world experimentation? We are starting to find out.
