
AI systems impact children’s lives even when those children are not directly engaging with the tools.
In theory, AI has the potential to diagnose and treat illness, process vast datasets to advance research, and accelerate vaccine development. Unfortunately, AI also carries a well-documented set of risks. These include digital harms such as abuse, exploitation, discrimination, misinformation, and challenges to mental health and well-being.
These competing realities have recently spilled into the inboxes of parents using Google’s Family Link controls. Many have begun receiving emails informing them that Gemini, Google’s AI chatbot, will soon be available on their child’s device.
Inside the Gemini Launch: AI, Kids, and Parental Supervision
As first reported by The New York Times, Google is allowing children under 13 to access Gemini through supervised accounts managed via Family Link. That’s a notable change, especially considering Bard, Gemini’s precursor, was only opened up to teens in 2023.
This update, rolling out gradually, enables children to explore Gemini’s capabilities across a range of activities. These include support with homework, creative writing, and general inquiries. Parents can choose whether Gemini appears on Android, iOS, or the web, and configure it as their child’s default assistant.
Study Buddy or Cheating Tool? The Potential Benefits of Gemini AI for Young Users
Gemini is being positioned as a tool to support learning, creativity, and exploration. Google’s earlier messaging around Bard leaned into this idea, emphasizing AI as a study companion, not a homework doer.
Bard was offered to teenagers for a wide range of use cases, including finding inspiration, exploring new hobbies, and solving everyday challenges such as researching universities for college applications. It was also pitched as a learning tool, offering help with math problems or brainstorming for science projects.
The original messaging was clear: Bard wouldn’t do all the work, but it would help with generating ideas and locating information. However, recent surveys on ChatGPT use in universities suggest that ideal isn’t always upheld in practice. It turns out that, given the chance, people, and teenagers in particular, often take the shortcut.
And while the educational potential of generative AI is being more widely acknowledged, research indicates that digital tools are most effective when integrated into the school system. As UNICEF notes, for students to thrive, digital tools must support rather than replace teachers. Abandoning mainstream education in favor of AI isn’t a viable path.
Gemini AI and Children’s Rights: What the Warnings Say
UNICEF’s report “How Can Generative AI Better Serve Children’s Rights?” reminds us that real risks run parallel to AI’s potential.
Using the Convention on the Rights of the Child as a lens, the report outlines four principles: non-discrimination, respect for the child’s views, the child’s best interests, and the right to life, survival, and development. These should be the criteria for assessing whether children’s rights are genuinely being protected, respected, and fulfilled in relation to AI.
The first major issue highlighted by the report is unequal access, referred to as “digital poverty.” Not all kids have equal access to high-speed internet, smart devices, or educational AI. So while some children gain a learning edge, others are once again left behind.
Bias in training data is another major challenge. AI systems mirror the biases present in society, which means that children may encounter the same kinds of discrimination online as they do offline.
The issue of data consent is particularly thorny. What does meaningful consent look like for a 9-year-old when it comes to personal data collection and usage? Their evolving capacity makes this a legal and ethical minefield. It’s even more complicated when that data feeds commercial models.
Misinformation is also a growing concern. Kids are less likely to spot a fake, and some studies suggest they’re more prone to trust digital entities. The line between chatbot and human isn’t always clear, especially for children who are imaginative, socially isolated, or simply online too much. Some Character.ai users have already struggled to tell the difference, and at least a few bots have encouraged the illusion.
There is also an environmental dimension. AI’s infrastructure depends on data centers that consume massive amounts of energy and water. If left unchecked, AI’s carbon footprint will disproportionately affect children, particularly in the Global South.
What Parents Can (and Can’t) Control with Gemini AI
So what is Google doing to reassure parents? Those using Family Link have received more detail from Google about the available guardrails and suggested best practices.
The most important one: Google says it won’t use children’s data to train its AI models. There are also content filters in place, though Google admits they’re not foolproof. Parents can also set screen time limits, restrict certain apps, and block questionable material. But here’s the twist: kids can still activate Gemini AI themselves.
What rubbed many parents the wrong way, however, was the fact that Gemini is opt-out, not opt-in. As one parent put it, “I received one of these emails last week. Note that I’m not being asked whether I’d like to opt my child in to using Gemini. I’m being warned that if I don’t want it, I have to opt out. Not cool.”
Google also suggests a few best practices. These include reminding children that Gemini is not a person, teaching them how to verify information, and encouraging them to avoid sharing personal details.
If Gemini follows Bard’s model, we may see further responsible AI efforts soon. These could include tailored onboarding experiences, AI literacy guides, and educational videos that promote safe and thoughtful use.
The AI Burden on Parents: Who’s Really in Charge?
The uncomfortable reality is that much of the responsibility for managing generative AI has shifted to parents.
Even assuming, generously, that AI is a net positive for child development, many unanswered questions remain. A responsible rollout of generative AI should involve shared responsibility across sectors. That is not yet evident in practice.
Tech companies need to do more to make these tools genuinely safe and constructive. Skill-building around safe navigation should be a priority for users of all ages. Governments also have an educational role to play: raising awareness among children and helping them distinguish between AI-generated and human-generated interaction and content.
But for now, most of that support structure is either missing or undercooked. The dilemma, it seems, is unchanged: if AI holds promise for parents, the energy required to navigate its traps might cancel out the benefits entirely.
So, when should kids start using AI tools? How much is too much? And who decides when it’s time to step in? These may well be the new questions keeping modern parents up at night, and they don’t come with chatbot-friendly answers.