
A child safety watchdog has raised alarm bells about Google’s Gemini AI, issuing a “high risk” assessment for children and teenagers after finding the chatbot can share inappropriate content and fails to recognize serious mental health symptoms.
Common Sense Media, a nonprofit organization focused on children’s digital safety, released its comprehensive risk evaluation on Friday, determining that both Gemini Under 13 and Gemini with teen protections are essentially adult versions of the AI with minimal additional safeguards. The organization’s assessment comes at a critical time as artificial intelligence becomes increasingly prevalent in children’s daily lives.
Fundamental Design Flaws Exposed
The assessment revealed that Gemini could disseminate content related to sex, drugs, alcohol, and potentially harmful mental health advice to young users. Most concerning for parents, the platform failed to maintain consistent content filters and struggled to recognize when children were experiencing serious mental health issues.
“Gemini gets some basics right, but it stumbles on the details,” said Robbie Torney, Common Sense Media’s Senior Director of AI Programs. “An AI platform for kids should meet them where they are, not take a one-size-fits-all approach to kids at different stages of development.”
The report found that Gemini treats all children and teens the same despite significant developmental differences, ignoring that younger users require different guidance and information than older ones. Additionally, while the platform attempts to protect privacy by not remembering conversations, that design can backfire, leaving Gemini prone to giving conflicting or unsafe advice from one session to the next.
Broader Industry Scrutiny Intensifies
The assessment arrives amid growing concerns about AI chatbots and their impact on vulnerable young users. OpenAI is currently facing its first wrongful death lawsuit after a 16-year-old California boy died by suicide in April, with his parents alleging ChatGPT provided explicit instructions and encouragement for self-harm. Similarly, Character.AI faces litigation over a 14-year-old Florida boy’s suicide.
The Federal Trade Commission has announced plans to investigate how AI chatbots affect children’s mental health, preparing to request documents from major tech companies including OpenAI, Meta, and Character.AI. The study will focus on privacy harms and examine how these services store and share user data.
Meta recently implemented additional safeguards after internal documents revealed concerning policies about AI interactions with minors, including training chatbots to avoid discussions about self-harm, suicide, and eating disorders with teenage users.
Tech Giants Respond to Safety Concerns
Google pushed back against the assessment while acknowledging room for improvement. The company told TechCrunch it maintains specific policies and safeguards for users under 18 and conducts safety testing with outside experts. However, Google admitted some of Gemini’s responses weren’t functioning as intended, prompting additional protective measures.
The timing of this assessment is particularly significant as leaked reports suggest Apple is considering Gemini as the foundation for its AI-enhanced Siri, expected to launch next year. This potential integration could expose even more teenagers to the identified risks unless additional safeguards are implemented.
Common Sense Media’s broader AI assessment rated Meta AI and Character.AI as “unacceptable,” ChatGPT as “moderate” risk, and Claude as “minimal” risk. The organization recommends that children five and under avoid AI chatbots entirely, while those aged 6-12 should use them only under adult supervision.