Meta to Let Parents Control Teens’ AI Chats After Safety Backlash

Meta is introducing new parental controls to help families manage how teens interact with AI chatbots on Instagram. The move follows widespread criticism of reports that the company's chatbots held flirty and inappropriate conversations with minors.

Parents Get Power to Disable Teen AI Chats

Starting early next year, parents will be able to turn off private chats between teens and Meta’s AI characters. The tools will first roll out in the U.S., U.K., Canada, and Australia, according to a company blog post shared by Instagram head Adam Mosseri and Chief AI Officer Alexandr Wang.

Meta said parents will also be able to block specific AI characters and view the broad topics their teens discuss with chatbots, though they won't see full conversations. The company added that its AI assistant will remain active with age-appropriate defaults even if private AI chats are disabled.

Meta Adopts PG-13 Ratings for AI Experiences

Earlier this week, Meta announced that its AI experiences for teens will be guided by the PG-13 movie rating system, a standard meant to keep teens from encountering explicit or harmful AI responses. The decision follows reports of inappropriate conversations between Meta's AI and young users.

The new system is part of a larger effort by Meta to rebuild trust and strengthen safety across its platforms amid mounting public and regulatory pressure.

Regulators and Safety Concerns Rising

U.S. regulators have stepped up scrutiny of AI companies over the risks their products pose to minors. In August, a Reuters report revealed that Meta's AI systems allowed provocative exchanges with minors, raising alarm among parents and lawmakers.

Meta said its chatbots are programmed not to engage in discussions of self-harm, suicide, or disordered eating. The company also said it uses AI-based signals to automatically apply protections to users who may be underage, even if they register as adults.

Industry Trend Toward Parental Supervision

Meta isn’t alone in tightening AI safety. In September, OpenAI introduced parental controls for ChatGPT on web and mobile apps. That move came after a lawsuit involving a teenager’s suicide linked to chatbot interactions.

Experts say these steps reflect a growing acknowledgment that AI must evolve responsibly, especially when used by younger audiences.

A Step Toward Safer Social Media

While questions remain about enforcement, Meta’s new controls mark another step toward making AI safer for teens. The company said it continues to test features that help parents oversee online experiences without fully removing access to AI tools.

As social media platforms expand their AI offerings, the pressure to balance innovation and protection is higher than ever.