News Froggy
Tech

OpenAI Releases Open-Source Teen Safety Policies Amid ChatGPT Lawsuits

OpenAI has open-sourced new prompt-based safety policies for developers, aimed at making AI applications safer for teenagers. This move comes as the company faces numerous lawsuits alleging that its ChatGPT product contributed to the deaths of young users. The policies address five categories of harm and were developed in collaboration with child safety organizations.

Published: March 25, 2026
Reading Time: 5 min

OpenAI Open-Sources Teen Safety Policies for Developers Amidst Lawsuits

OpenAI has announced the release of open-source, prompt-based safety policies designed to help developers build AI applications that are safer for teenagers. The initiative comes amid increasing scrutiny and a series of lawsuits alleging that OpenAI's flagship chatbot, ChatGPT, contributed to the deaths of several young users. The move aims to give the broader AI development community a baseline for better protecting minors online.

Context of Mounting Legal Challenges

The company currently faces at least eight lawsuits, with families alleging that extended interactions with ChatGPT played a role in tragic outcomes. One prominent case involves 16-year-old Adam Raine, who died by suicide in April 2025 following months of intensive engagement with the chatbot. Court documents revealed that ChatGPT referenced suicide over 1,200 times in Raine's conversations and flagged hundreds of messages for self-harm, yet failed to terminate sessions or notify anyone.

Additionally, three other suicides and four cases described as AI-induced psychotic episodes have led to further litigation against OpenAI. These legal battles underscore the significant risks associated with emotionally engaging AI systems, particularly for vulnerable young users. The company has been under pressure to enhance its protective measures.

OpenAI's Response and New Policies

In response to these concerns and legal challenges, OpenAI implemented parental controls and age-prediction features in late 2025, and in December it updated its internal Model Spec to include specific protections for users under 18. The newly released open-source policies extend these efforts, making tools available to developers who build on OpenAI's models, such as gpt-oss-safeguard, or even on other AI systems.

These prompt-based policies are designed as adaptable rules that developers can integrate into their AI applications. The goal is to standardize a level of safety across the ecosystem, helping to prevent the creation of potentially harmful interactions.
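To make the idea concrete, a prompt-based policy is essentially a block of instruction text that a developer injects into the model's context. The sketch below is purely illustrative: the policy wording, the `build_messages` helper, and the `teen_mode` flag are assumptions for demonstration, not OpenAI's published format.

```python
# Hypothetical sketch: applying a prompt-based teen-safety policy as a
# system message. The policy text below paraphrases the five harm
# categories described in the article; it is NOT OpenAI's actual wording.

TEEN_SAFETY_POLICY = (
    "The user may be under 18. Refuse graphic violence and sexual "
    "content; do not promote harmful body ideals or behaviors, "
    "dangerous activities or challenges, or romantic or violent role "
    "play; do not assist with access to age-restricted goods or services."
)

def build_messages(user_prompt: str, teen_mode: bool = True) -> list[dict]:
    """Prepend the safety policy as a system message when teen_mode is on."""
    messages = []
    if teen_mode:
        messages.append({"role": "system", "content": TEEN_SAFETY_POLICY})
    messages.append({"role": "user", "content": user_prompt})
    return messages

msgs = build_messages("Tell me about rock climbing safety.")
print(msgs[0]["role"])  # → system
```

Because the policy travels as plain prompt text rather than model weights, the same rules can in principle be dropped in front of any chat-style model, which is what makes an ecosystem-wide baseline plausible.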

Specific Categories of Protection

The policies address five critical categories of potential harm to younger users: graphic violence and sexual content, the promotion of harmful body ideals and behaviors, dangerous activities and challenges, romantic or violent role play scenarios, and access to age-restricted goods and services. By offering these ready-to-use policies, OpenAI acknowledges that many development teams, even experienced ones, struggle to implement robust teen safety measures from scratch.

This targeted approach aims to reduce common pitfalls in AI safety implementation. Developers can directly apply these established guidelines rather than expending resources on independent development, potentially leading to more consistent protection across various AI products.

Collaboration and Intent

OpenAI developed these policies in collaboration with Common Sense Media, a prominent child safety advocacy organization, and everyone.ai, an AI safety consultancy. Robbie Torney, head of AI and digital assessments at Common Sense Media, emphasized that the prompt-based approach is intended to establish a foundational safety standard across the developer ecosystem. Its open-source nature allows for continuous adaptation and improvement over time.

OpenAI itself stated that developers frequently find it challenging to translate broad safety goals into precise, actionable operational rules, often resulting in inconsistent protection or overly restrictive filters. The company hopes this collaborative, open-source effort will address these operational hurdles.

A "Safety Floor," Not a Ceiling

The company was explicit in clarifying that these open-source policies represent a "meaningful safety floor," not a comprehensive solution or the full extent of the safeguards it applies to its own products. This distinction is crucial, as the ongoing lawsuits have demonstrated that even sophisticated model guardrails can be bypassed. Users, including teenagers, have consistently found creative ways to circumvent safety features through persistent probing and clever prompting.

This indicates that while the policies offer a significant step, they are not presented as an ultimate fix for all potential vulnerabilities. Continuous vigilance and further innovation will likely be required to secure AI interactions fully.

The Broader Implications

This open-source strategy is a calculated move, betting that widely distributing baseline safety policies is more effective than having every developer independently create such systems. It particularly benefits smaller teams and independent developers who may lack the extensive resources required for building robust safety frameworks. The ultimate efficacy of these policies will depend heavily on their adoption rate, how thoroughly developers integrate them, and their resilience against the kind of sustained, adversarial interactions that have already exposed vulnerabilities in existing AI safety layers.

Unanswered Questions and Future Outlook

While offering a practical set of instructions in the form of well-crafted prompts, OpenAI's latest release does not directly address a fundamental structural problem highlighted by regulators, parents, and safety advocates. Critics argue that AI systems capable of sustained, emotionally engaging conversations with minors may require more than just improved prompts. They might necessitate fundamentally different architectural designs or external monitoring systems operating independently of the models themselves.

For now, these downloadable teen safety policies are a tangible step. However, whether they prove sufficient to mitigate the risks remains a critical question that will likely be debated in courts, influenced by regulators, and reflected in future headlines.

FAQ

Q: What prompted OpenAI to release these open-source teen safety policies?
A: OpenAI released these policies amidst mounting lawsuits alleging that its ChatGPT chatbot contributed to the deaths of several young users, including a 16-year-old who died by suicide after extensive interaction with the AI. The company aims to provide developers with tools to prevent similar harms in their own AI applications.

Q: What types of harm do these new safety policies address for teenagers?
A: The prompt-based policies are designed to mitigate five categories of harm: graphic violence and sexual content, harmful body ideals and behaviors, dangerous activities and challenges, romantic or violent role play, and access to age-restricted goods and services.

Q: Are these open-source policies a complete solution to AI safety for minors?
A: OpenAI explicitly states that these policies represent a "meaningful safety floor" rather than a comprehensive solution. They are not the full extent of safeguards applied to OpenAI's own products, and the company acknowledges that users, including teenagers, have found ways to bypass existing safety features. The long-term effectiveness will depend on adoption and resilience.

#OpenAI #AI Safety #Teen Safety #ChatGPT #Lawsuits

Related articles

Volkswagen's MOIA and Uber Launch Self-Driving ID. Buzz Tests in LA
Tech
The Next Web, Apr 9


Volkswagen's MOIA America and Uber have officially begun on-road testing of self-driving ID. Buzz minibuses in Los Angeles, marking the first U.S. city in their multi-city rollout strategy. The initial fleet operates with human safety operators, targeting commercial service by late 2026 and fully driverless operations by 2027. This move leverages the specialized ID. Buzz AD equipped with a 27-sensor Mobileye platform and Uber's extensive ride-hailing network.

Pebblebee Halo: More Than Just a Tracker
Review
ZDNet, Apr 9


Quick Verdict: The Pebblebee Halo isn't just another tracker tag; it's a versatile personal safety device cleverly integrated with item-finding capabilities. Boasting an ear-splitting 130dB siren, a bright 150-lumen…

Intel Joins Elon Musk’s Terafab Chips Project
Tech
TechCrunch AI, Apr 8


Intel has joined Elon Musk's Terafab chips project, partnering with SpaceX and Tesla to build a new semiconductor factory in Texas. This collaboration leverages Intel's chip manufacturing expertise to produce 1 TW/year of compute for AI, robotics, and other advanced applications, significantly bolstering Intel's foundry business.

Apple’s foldable iPhone is on track to launch in September, report
Tech
TechCrunch, Apr 8


Apple's first foldable iPhone is reportedly on track for a September launch alongside the iPhone 18 Pro and Pro Max, according to a new report from Bloomberg's Mark Gurman. This news mitigates earlier concerns about potential delays due to engineering complexities, suggesting Apple has made significant strides in addressing screen quality, durability, and crease visibility issues. The highly anticipated device is poised to position Apple as a strong competitor in the growing foldable smartphone market.

Tech Moves: Microsoft Leader Jumps to Anthropic, New CEO at Tagboard
Tech
GeekWire, Apr 8


Microsoft veteran Eric Boyd has joined AI leader Anthropic to head its infrastructure team, marking a major personnel shift in the competitive AI sector. Concurrently, Tagboard, a Redmond-based live broadcast production company, announced Marty Roberts as its new CEO, succeeding Nathan Peterson. Expedia Group also promoted Ryan Desjardins to Vice President of Technology, bolstering its efforts in AI integration.

In-Depth: My Blissful Week as a ‘Do Not Disturb’ Maximalist: Digital
Tech
Wired, Apr 7


A technology journalist embarked on a week-long experiment, embracing "Do Not Disturb" (DND) maximalism to silence all smartphone notifications. The experience, though challenging socially, revealed a path to greater focus and personal boundaries, highlighting a growing trend to reclaim attention in a constantly connected world.
