Meta will train its AI to avoid sensitive conversations with minors

Meta announced that it is strengthening how it trains its artificial intelligence systems to prevent minors from engaging with high-risk topics such as self-harm, suicide, and eating disorders. The company also confirmed that it will restrict any romantic conversations between its chatbots and teenagers, and that it will direct these users to specialized resources when signs of vulnerability are detected.
The decision comes after the company acknowledged that Meta AI models had allowed younger users to engage in dialogue on sensitive issues, including self-destructive behaviors and the possibility of forming romantic ties with chatbots. Meta admitted that this permissiveness was a mistake and announced additional safety measures to correct it.
According to Meta spokeswoman Stephanie Otway, the company is strengthening these protections as the technology evolves and the number of teens using the tools grows. As she explained to TechCrunch, the goal is to ensure safer, more age-appropriate interactions for young users.
Among the measures implemented is training the AI models not to engage in sensitive conversations with adolescents and instead direct them to professional support resources. In addition, minors' access to certain sexualized chatbots will be limited on platforms such as Instagram and Facebook.
With these restrictions, users under the age of 18 will only be able to interact with chatbots designed to promote education and creativity, Otway confirmed.

Corrective action on Meta chatbots
Meta also clarified that the changes to its chatbot training are interim safety measures, and that it intends to implement stricter, more durable policies in the future that prioritize the protection of minors.
According to an investigation published by Reuters, the company had produced an internal document of more than 200 pages detailing the rules governing its AI chatbots. These guidelines reportedly allowed "romantic or sensual" conversations between the artificial intelligence systems and underage users. The document, entitled GenAI: Content Risk Standards, had been approved by the company's legal, public policy, and engineering teams, as well as its chief ethicist.
The document stated, for example, that it was acceptable to describe a child in terms that evidence their attractiveness, while specifying that explicit sexual language should not be used to describe children under the age of 13.
A U.S. senator launched an investigation, and 44 state attorneys general urged several artificial intelligence companies, including Meta, to urgently strengthen the protection of minors on their platforms.
Faced with this pressure, Meta pledged to train its generative AI models to avoid inappropriate conversations with teenagers, redirect them to professional support services, and restrict access to sexualized chatbots. The company has insisted that these measures are transitional and will be reinforced by stronger policies in the future.