
ChatGPT gets parental controls


By
Adil

Oct 15, 2025

GUARDRAILS

ChatGPT gets parental controls

AI and teenagers have something in common: They can be unpredictable.

Looking to rein in both, OpenAI on Monday launched parental controls for ChatGPT, allowing parents and teens to link their accounts to limit, monitor and manage how the chatbot is used. The AI giant launched these controls in partnership with Common Sense Media and other advocacy groups, as well as the attorneys general of California and Delaware.

Parents can now control a number of settings on their teens’ accounts, including:

  • Setting quiet hours
  • Removing voice mode and image generation capabilities
  • Turning off ChatGPT’s ability to save memories
  • Opting out of model training

OpenAI will also automatically limit “graphic content, viral challenges, sexual, romantic or violent role play, and extreme beauty ideals” for teen accounts.


“Introducing parental controls in ChatGPT. Now parents and teens can link accounts to automatically get stronger safeguards for teens. Parents also gain tools to adjust features & set limits that work for their family. Rolling out to all ChatGPT users today on web, mobile soon.”
— OpenAI (@OpenAI), Sep 29, 2025

If OpenAI’s tech detects something is “seriously wrong,” such as recognizing signs of self-harm or “acute distress,” parents will be notified immediately unless they have opted out. In more serious cases, such as signs of imminent danger, OpenAI is working on a process to contact emergency services.

These guardrails come on the heels of a lawsuit alleging that OpenAI’s ChatGPT is responsible for the death of a 16-year-old boy, whose parents claim he was using the chatbot to explore suicide methods.

These safeguards come as an increasing number of teens turn to AI for companionship. A July Common Sense Media survey of more than 1,000 teens found that 72% reported using AI companions, with 33% relying on these companions for emotional support, friendship or romantic interactions.

Robbie Torney, senior director of AI programs at Common Sense Media, said in a statement that safeguards like these are “just one piece of the puzzle” in safe AI use.

In its announcement, OpenAI said these measures will “iterate and improve over time,” noting that it’s working on an age prediction system that it announced in mid-September. The company also cautioned: “Guardrails help, but they’re not foolproof and can be bypassed if someone is intentionally trying to get around them.”