OpenAI Announces Plans for New ChatGPT Parental Controls
September 5, 2025
Following the suicide of a California teen who had discussed it with ChatGPT for months, and a wrongful death lawsuit his parents filed against OpenAI, the company says it will introduce parental controls “within the next month.” New safeguards include letting parents “control how ChatGPT responds to their teen” and “receive notifications when the system detects their teen is in a moment of acute distress.” OpenAI says it recently introduced a real-time router that can redirect “sensitive conversations” to its GPT-5 thinking and o3 reasoning models, which are engineered to respond with greater contextual awareness than its efficiency-focused chat models.
The New York Times says the parental control feature is one “OpenAI’s developer community has been requesting for more than a year,” pointing out that chatbots from Google and Meta Platforms already have parental controls.
“What OpenAI described sounds more granular, similar to the parental controls introduced by Character.AI, a company with role-playing chatbots,” the Times notes. Character.AI added those controls after a Florida mother sued the company following her son’s suicide. “On Character.AI, teenagers must send an invitation to a guardian to monitor their accounts,” NYT explains.
TechCrunch writes that parents will “be able to disable features like memory and chat history, which experts say could lead to delusional thinking and other problematic behavior,” adding that perhaps the most important planned improvement is that parents can elect to “receive notifications when the system detects their teenager is in a moment of ‘acute distress.’”
After nearly three years of public availability, OpenAI is seeing people turn to ChatGPT “in the most difficult of moments,” prompting the company “to improve how our models recognize and respond to signs of mental and emotional distress,” the model-maker said this week in a news post.
Last month OpenAI addressed safety issues, detailing a “stack of layered safeguards” already built into ChatGPT that it is working to improve by:
- Expanding interventions to more people in crisis
- Making it easier to reach emergency services and get expert help
- Escalating risks of physical harm for human review
- Strengthening protections for teens
- Improving safeguards in long conversations
OpenAI describes this new round of improvements as a “120-day initiative” guided by well-being and mental health experts on the company’s Expert Council on Well-Being and AI and its Global Physician Network.