OpenAI said on Monday that it would add new parental controls to ChatGPT, the most widely used artificial intelligence chatbot, in response to growing concerns about the technology’s risks for young users. The move comes weeks after a California family filed a lawsuit alleging the chatbot contributed to their teenage son’s death.
The new settings allow parents to link their accounts with their children’s, restrict late-night use, disable voice and image features, and decide whether ChatGPT retains past conversations.
In rare cases, the company said, its systems will try to detect when a child shows “signs of acute distress” and alert a review team, though OpenAI acknowledged that the system is imperfect and “might sometimes raise an alarm when there isn’t real danger.”
The changes follow a wrongful-death suit brought by Matt and Maria Raine, who say their son Adam died by suicide in April after months of conversations with ChatGPT.
The family alleges the chatbot reinforced his most harmful thoughts rather than discouraging them. Their lawyer, Jay Edelson, accused the company and its chief executive, Sam Altman, of rushing out a new version of the model “despite clear safety issues.” OpenAI has not responded publicly to the litigation.
Safety groups said the measures are an overdue recognition of the risks surrounding generative AI, though they cautioned parents against relying on the controls alone.
Robbie Torney, a senior director at Common Sense Media, called the parental controls “a good starting point,” but said they work best when paired with ongoing conversations about responsible use and “active involvement in understanding what their teen is doing online.”
The rollout is the latest sign of how AI companies are scrambling to establish guardrails even as their products spread rapidly into classrooms, workplaces, and homes. OpenAI has said it is exploring ways to identify underage users more reliably, including asking for ID in some countries, though researchers note that such safeguards are often easy to circumvent.