Artificial Intelligence

Study recommends that minors under 18 avoid Google Gemini due to “high risk”

  • September 9, 2025
  • 3 min read

A safety report prepared by Common Sense Media has concluded that children under 18 should avoid using Google's Gemini models, since the versions designed for minors treat them in much the same way as adults; the organization has therefore labeled the company's artificial intelligence (AI) assistant a high-risk tool.

Common Sense Media is a non-profit organization dedicated to child safety that analyzes and rates different products to help parents stay informed about the content and technologies their children consume.

After evaluating Google Gemini, the organization issued a report recommending that children under 18 avoid using Gemini's teen experience, the version of Google's assistant for adolescents, since it treats these users almost the same as adults, even though Google says it includes safety protections and content designed for users under 18.

In particular, Common Sense Media argued that this approach ignores the fact that younger adolescents need different guidance when dealing with these technologies, and it therefore rated Gemini as "high risk" for minors, both in its under-13 version and in its teen version.

The organization explained that its analysis showed Gemini could share "inappropriate and potentially dangerous" content with younger users, such as material related to sex, drugs, and alcohol. The report also found that Gemini's teen version offers emotional and mental health support too readily and too poorly, and fails to recognize severe symptoms of deteriorating mental health.

Thus, although the model responds "appropriately" to everyday requests, Common Sense Media indicated that it quickly falters in longer conversations and with more subtle, nuanced requests, which, the organization said, is how teenagers actually interact with chatbots.

Accordingly, Common Sense Media recommends that no child under 5 use AI chatbots, and that children between the ages of 6 and 12 use them only under adult supervision. It also clarified that independent chatbot use is safe for adolescents aged 13 to 17, but only for schoolwork and creative projects, and it stressed that it continues to advise against any minor using chatbots for companionship, including mental health and emotional support.

Google response

In response to the publication of this report, Google told TechCrunch that it has specific policies and safety measures for minors designed to prevent harmful outputs, and that it consults with external experts to improve its protections.

However, the technology giant admitted that some of Gemini's responses were not working as intended, and said it has recently added additional safety measures to address the problem.

Google also claimed that the report appears to reference features not available to users under 18, and noted that it did not have access to the questions Common Sense Media used in its tests.

The case of OpenAI

This report was released just days after OpenAI was forced to change ChatGPT's safety measures following the suicide of a teenager, a case in which the minor's parents sued the company led by Sam Altman over the role the chatbot played.

The company announced that it would make changes to its AI models so they could better identify situations of mental and emotional crisis during conversations with ChatGPT, with new safeguards, more effective content blocking, and faster connection to help services and family members.

About Author

Susan Flanagan