OpenAI Tightens Teen Safety Rules on ChatGPT
OpenAI announced sweeping new safety measures for teenagers using ChatGPT, just hours before a Senate Judiciary Committee hearing on the risks of AI chatbots.
The company will introduce an age-prediction system designed to separate users into two versions of the app: one for adolescents aged 13 to 17, and another for adults.
CEO Sam Altman acknowledged the tradeoff between privacy and safety, saying OpenAI may ask adults in some countries to verify their age with ID, and that when a user's age is uncertain the system will err on the side of caution and default to the teen experience.
Teens will get a filtered experience that blocks harmful content, such as instructions for suicide or self-harm, even in creative-writing contexts; adult users will retain broader latitude for creative uses.
New parental controls are also coming by the end of the month, allowing parents to manage features such as memory, response behavior, and blackout hours during which teens cannot use ChatGPT.
OpenAI also pledged to intervene if users flagged as underage show signs of suicidal ideation, first attempting to notify parents and, if necessary, contacting authorities.
The move reflects mounting pressure from lawmakers and families, including a recent lawsuit claiming ChatGPT encouraged harmful behavior.