Meta Bolsters Parental Controls for AI in Wake of FTC Inquiry
October 21, 2025
Meta Platforms is adding new safety features that give parents more control over how children — particularly teens — interact with AI chatbots and characters. The move follows the FTC's September launch of an investigation into Meta and five other companies over the potentially harmful effects of their AI on children and teens. Meta's new guardrails will let parents turn off one-on-one chats with AI characters entirely or block specific characters. Parents will also be able to see the topics their underage household members are discussing with AI, including Meta's own AI assistant.
“Meta is still building the controls, and the company said they will start to roll out early next year,” writes CNBC, noting that “Meta has long faced criticism over its handling of child safety and mental health on its apps.”

In a news post, Meta says it has already started introducing the changes in English in the U.S., UK, Canada and Australia, and will continue updating them.
The company previously announced “age-appropriate protections” for AI, such as designing chatbots “to give teens responses guided by PG-13 movie ratings.” Meta adds that the new safety features come on top of the automatic safeguards built into Teen Accounts.
The new safeguards include:
- AI characters designed to not engage in age-inappropriate discussions about self-harm, suicide, or disordered eating with teens.
- Teens can only interact with a limited group of AI characters, focused on age-appropriate topics like education, sports, and hobbies.
- Parents can see whether their teens are chatting with AI characters and can set time limits on app use, down to as little as 15 minutes per day overall.
Anticipating teens may try to get around the protections, Meta is “also using AI technology to place those we suspect are teens into these protections, even if they tell us they’re adults,” the company says.

“In the past few weeks, multiple platforms, including OpenAI, Meta, and YouTube, have released tools and controls focused on teen safety,” TechCrunch reports, noting that the changes “come amid growing concerns about the impact of social media on teen mental health and lawsuits against AI companies that allege they played a part in teen suicides.”
They also follow the FTC investigation into Meta and its Instagram platform, as well as Alphabet, xAI, Snap, OpenAI and Character.AI. As reported last month by CNBC, the agency is “seeking information about how these companies monetize user engagement, develop and approve characters [and] use or share personal information” regarding AI characters.
Related:
Meta to Give Teens’ Parents More Control After Criticism Over Flirty AI Chatbots, Reuters, 10/17/25
Meta’s Upcoming AI Parental Controls Are Too Little, Too Late, Lifehacker, 10/17/25