Filters alone aren’t enough
Common Sense Media, a nonprofit focused on child safety, has rated Google’s AI assistant Gemini as high risk for children and teens. According to the report, both the "Under 13" experience and the version with teen protections are essentially the adult model with added filters rather than platforms built specifically for younger users.
Exposure to unsafe content
Despite the added safeguards, Gemini can still surface inappropriate or unsafe material, including content related to sex, drugs, alcohol, and mental health advice. And while the AI clearly states that it is a computer and not a friend, it treats all younger users uniformly, ignoring the developmental differences between children and teens.
AI and teen mental health concerns
There is growing concern about the impact of artificial intelligence on adolescent mental health. Recent lawsuits against OpenAI and Character.AI allege that interactions with chatbots contributed to tragic outcomes among young people, underscoring the need for stricter safety measures and child-focused design standards.
Apple integration raises stakes
According to leaked reports, Apple may integrate Gemini into its upcoming AI-powered version of Siri. This would put millions of additional young users at risk if the safety issues are not addressed first. Experts warn that without strict restrictions and oversight, such AI systems could inadvertently deepen anxiety, depression, and dependence on digital assistants among adolescents.
The integration of advanced AI into everyday devices also raises questions about data privacy, potential psychological impact, and the need for educational programs that teach parents and children how to use AI safely.
Google’s response
Google responded by emphasizing its existing safeguards for users under 18 and its regular consultation with external experts. The company acknowledged that some AI responses were not working as intended and said it has added new protections to address these gaps.
Expert recommendations
Common Sense Media recommends that children under 5 should not use AI chatbots, children aged 6–12 should only use them under adult supervision, and teens 13–17 may use AI independently for schoolwork and creative projects. The organization also advises against using AI for companionship or emotional support for anyone under 18.
“Gemini gets some basics right, but it stumbles on the details,” said Robbie Torney, Senior Director of AI Programs at Common Sense Media. “AI for kids must be designed with their developmental needs in mind, not just as a modified version of an adult product.”
Conclusion
Google Gemini is not yet safe for children and teenagers: its filters offer only partial protection, and its content and interactions can harm young users’ mental wellbeing. Experts recommend age restrictions, adult supervision, and AI built around children’s developmental needs, especially as the technology is integrated into popular devices.