OpenAI unveils open-source tools focused on teen safety for AI developers.

OpenAI has spent the past year dealing with lawsuits from the families of young people who died after extended interactions with ChatGPT. Now the company is moving to give developers who build applications on its models the tools to prevent similar harms.

On Tuesday, OpenAI announced it is releasing a collection of open-source, prompt-based safety policies to help developers build AI applications that are safer for teenagers. The policies are designed for use with gpt-oss-safeguard, OpenAI's open-weight safety model, but are structured as prompts that can also be used with other models.

What the policies address

The prompts cover five categories of potential harm AI systems can pose to younger users: graphic violence and sexual content, harmful body ideals and behaviors, dangerous activities and challenges, romantic or violent role play, and age-restricted goods and services. Developers can drop these policies into their systems instead of writing teen-safety rules from scratch, a task OpenAI acknowledges even seasoned teams often get wrong.
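The article does not show how such a policy is wired into an application, but the workflow it describes, feeding a policy as a prompt to a safety classifier such as gpt-oss-safeguard, can be sketched roughly as follows. The policy text, function names, and one-word verdict format here are illustrative assumptions, not OpenAI's published policies:

```python
# Sketch: composing a prompt-based teen-safety check for an
# OpenAI-compatible endpoint serving gpt-oss-safeguard.
# The policy text and helper names below are hypothetical
# placeholders, not OpenAI's actual published policies.

TEEN_SAFETY_POLICY = """\
Classify the user message against this policy.
Disallowed for users under 18:
- graphic violence or sexual content
- content promoting harmful body ideals or behaviors
- dangerous activities or challenges
- romantic or violent role play
- age-restricted goods and services
Respond with exactly one word: ALLOW or BLOCK."""

def build_safety_request(user_message: str) -> list[dict]:
    """Pair the policy (as a system prompt) with the message to classify."""
    return [
        {"role": "system", "content": TEEN_SAFETY_POLICY},
        {"role": "user", "content": user_message},
    ]

def is_blocked(model_reply: str) -> bool:
    """Interpret the classifier's one-word verdict."""
    return model_reply.strip().upper().startswith("BLOCK")

# These messages would be sent to a chat-completions endpoint, e.g.:
#   client.chat.completions.create(model="gpt-oss-safeguard", messages=msgs)
msgs = build_safety_request("Where can I buy vape pods without ID?")
```

Because the policy lives in the prompt rather than in model weights, a developer can edit or extend the policy text, which is the adaptability Common Sense Media points to below.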

OpenAI developed the policies with Common Sense Media, a prominent child-safety advocacy organization, and everyone.ai, an AI safety consultancy. Robbie Torney, head of AI and digital assessments at Common Sense Media, said the prompt-based approach is meant to establish a baseline the developer community can adapt and improve over time, since the policies are open source.

Context is key

This initiative does not occur in a vacuum. OpenAI faces at least eight lawsuits claiming ChatGPT played a role in users' deaths, including that of 16-year-old Adam Raine, who died by suicide in April 2025 after extensive engagement with the chatbot. Court filings revealed that ChatGPT mentioned suicide more than 1,200 times in Raine's conversations and flagged many of his messages for self-harm content, yet never ended a session or alerted anyone. Three additional suicides and four episodes described as AI-induced psychosis have also led to legal action against the company.

In response to these cases, OpenAI rolled out parental controls and age-prediction features in late 2025, and in December updated its Model Spec, the internal guidelines that govern the behavior of its large language models, to include specific protections for users under 18. The open-source safety policies released this week extend that effort beyond OpenAI's own products into the wider developer ecosystem.

A baseline, not a solution

OpenAI has been clear that these policies are not a complete answer to keeping young users safe with AI. They provide what the company calls a "meaningful safety floor," not the full range of protections it applies to its own products. The distinction matters: no model's safeguards are airtight, as the lawsuits have shown, and users, including teenagers, have repeatedly found ways around safety features through persistent questioning and inventive prompting.

The open-source approach is an attempt to make baseline safety policies a shared resource rather than something every developer must rebuild from scratch, which matters most for smaller teams and independent developers who lack the resources to build comprehensive safety systems on their own. How much the policies accomplish will depend on adoption, on how diligently developers integrate them, and on whether they hold up against the kinds of prolonged, adversarial interactions that have exposed weaknesses in ChatGPT's own safety layers.

The underlying issue persists

What OpenAI is offering is essentially a set of instructions: well-crafted prompts that steer a model's behavior around younger users. That is a practical contribution, but it does not address the deeper question regulators, parents, and safety advocates have raised for years: AI systems capable of sustained, emotionally engaging conversations with minors may need more than better prompts. They may need fundamentally different architectures, or external monitoring systems that operate outside the model itself.

For now, though, a downloadable set of teen-safety policies is what exists. That is not nothing. Whether it is enough is a question courts, regulators, and future headlines will answer.
