OpenAI launches open-source teen-safety tools aimed at AI developers.


OpenAI has spent the past year contending with lawsuits from the families of teenagers who died after prolonged interactions with ChatGPT. In response, the company is now offering developers tools intended to prevent similar harm.

On Tuesday, OpenAI announced a collection of open-source, prompt-based safety policies aimed at helping developers build safer AI applications for teenagers. The policies are designed to be used with gpt-oss-safeguard, OpenAI's open-weight safety model, but because they are written as prompts, they can also be applied to other models.

What the policies encompass

The prompts address five categories of potential harm to younger users: graphic violence and sexual content, harmful body ideals and behaviors, risky activities and challenges, romantic or violent role-play, and access to age-restricted goods and services. Developers can incorporate these policies directly into their systems rather than writing teen-safety rules from scratch, a process that, OpenAI noted, even skilled teams often get wrong.
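To make the integration pattern concrete, here is a minimal sketch of how a developer might wrap a prompt-based safety policy around incoming content before sending it to a safety model. The policy text, label set, and payload shape below are illustrative assumptions for this sketch, not OpenAI's actual published policy format or the gpt-oss-safeguard interface.

```python
# Sketch: composing a prompt-based teen-safety policy into a
# chat-completions-style classification request. The policy wording and
# labels here are hypothetical stand-ins for the open-source policies.

TEEN_SAFETY_POLICY = """\
You are a content-safety classifier for applications used by teenagers.
Label the user message with exactly one of: ALLOW, FLAG, BLOCK.
BLOCK: graphic violence or sexual content, promotion of harmful body
ideals or behaviors, dangerous challenges, or access to age-restricted
goods and services.
FLAG: borderline romantic or violent role-play, or ambiguous risk.
ALLOW: everything else.
Respond with the label only."""

def build_classification_request(user_message: str,
                                 model: str = "gpt-oss-safeguard") -> dict:
    """Compose a request payload: the policy rides as the system prompt,
    and the content to be evaluated is the user turn."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": TEEN_SAFETY_POLICY},
            {"role": "user", "content": user_message},
        ],
        # Deterministic output is preferable for classification tasks.
        "temperature": 0,
    }

request = build_classification_request("What's a good workout plan?")
print(request["messages"][0]["role"])  # system
```

Because the policy travels as an ordinary system prompt, the same payload can be pointed at any model that accepts chat-style messages, which is what makes the policies portable beyond gpt-oss-safeguard.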

OpenAI crafted the policies in partnership with Common Sense Media, a leading child safety advocacy group, and everyone.ai, an AI safety consultancy. Robbie Torney, the head of AI and digital assessments at Common Sense Media, said the prompt-based method aims to establish a baseline across the developer community that can be adjusted and refined over time, since the policies are open source.

The context behind the release

This announcement does not come in isolation. OpenAI currently faces at least eight lawsuits alleging that ChatGPT contributed to users' deaths, including that of 16-year-old Adam Raine, who died by suicide in April 2025 after months of extensive interaction with the chatbot. Court documents indicated that ChatGPT mentioned suicide more than 1,200 times in Raine's conversations and flagged hundreds of messages related to self-harm, yet never terminated the sessions or notified anyone. Three additional suicides and four alleged AI-induced psychotic episodes have also led to litigation against the company.

In response to these cases, OpenAI introduced parental controls and age-prediction features in late 2025, and in December updated its Model Spec, the internal guidelines that govern how its large language models behave, to include explicit protections for users under 18. The newly announced open-source safety policies extend this effort beyond OpenAI's own products into the broader development community.

A foundational, not an exhaustive, approach

OpenAI was clear that these policies are not a complete solution to making AI safe for young users. They represent what the company calls a "meaningful safety floor," not the full range of protections applied to its own products. The distinction matters: as the lawsuits have shown, no model's safeguards are foolproof, and users, including teens, have found ways around safety features through persistent, inventive prompting.

The open-source approach is a bet that widely distributing baseline safety policies beats having every developer start from scratch, especially for smaller teams and independent developers without the resources to build comprehensive safety systems. How effective the policies prove will depend on adoption, on how faithfully developers implement them, and on whether they can withstand the kind of sustained, adversarial interaction that has previously exposed weaknesses in ChatGPT's own safety measures.

The more challenging question remains

What OpenAI is providing is a framework: a set of well-designed prompts directing a model on how to behave when interacting with younger users. While this is a practical step, it does not tackle the fundamental issue that regulators, parents, and safety advocates have highlighted for years: that AI systems designed for prolonged, emotionally engaging conversations with minors may require more than just improved prompts. They may need entirely different architectures or external monitoring mechanisms separate from the model itself.

For the moment, however, a downloadable set of teen safety policies is what is available. This is a step forward, but whether it suffices will be determined by the courts, regulators, and future news coverage.


