Today, we are introducing our European Youth Safety Blueprint and the first recipients of our EMEA Youth & Wellbeing Grant. Both are part of our ongoing effort to help ensure young people can benefit from AI in ways that are age-appropriate and support their development and wellbeing.
## Our European Blueprint for Youth Safety
To ensure young people can fully benefit from AI, Europe needs an approach that is practical, evidence-led, and focused on how young people actually use AI. We are publishing our European Youth Safety Blueprint, which sets out five pillars for policymakers who want to strengthen protections for young people in the age of AI while preserving access to tools that support learning, creativity, and opportunity.
The Blueprint focuses on practical measures including responsible AI adoption in education, age-appropriate experiences supported by safeguards and privacy-preserving age assurance, under-18 safety policies to identify and mitigate risks, protections against manipulative or deceptive AI outputs, and common standards for accessible parental controls.
These are not the only answers, but we hope they are a useful contribution to ongoing efforts to get this right.
> “Today’s young people will be the first generation to grow up with AI as part of everyday life, shaping how they learn, create, and prepare for the future. Getting this right matters and we look forward to working with European policymakers and civil society towards that goal.”
Ann O’Leary, VP Global Policy, OpenAI
## Introducing our EMEA Youth & Wellbeing Grant recipients
Advancing youth safety will take more than policy recommendations. We are also supporting organizations that work directly with young people, families, educators, and communities to better understand what safe and beneficial AI use looks like in practice.
We are excited to announce 12 recipients of our EMEA Youth & Wellbeing Grant. Launched in January, the €500,000 program supports NGOs and research organizations across Europe, the Middle East, and Africa working on youth safety, wellbeing, and AI.
The chosen recipients are conducting practical work and independent research that will help define what safe, responsible AI looks like in the real world. This grant funding will support a range of projects, including critical youth wellbeing services, mental health support, improved AI literacy, age assurance research, and frontline resources for parents, educators, youth workers, and young people, including those in vulnerable communities.
* CIPL
* East Europe Foundation
* FSM
* Luma
* Mental Health Innovations
* OPEN
* Parent Zone
* Teen Turn
* Telefono Azzurro
* UNICRI
> “Getting age assurance right is key to balancing children’s privacy and safety with fair access to the digital world. Through our multistakeholder dialogue, CIPL has helped move this challenge from debate to practical, accountable policy proposals. We are excited for the opportunity to build on that momentum and deepen our research into effective, trusted AI-supported age assurance.”
Natascha Gerlach, Director Privacy and Data Policy, Centre for Information Policy Leadership (CIPL)
## Supporting our broader youth safety work
These latest contributions build on OpenAI’s broader global approach to youth safety, including our under-18 principles for model behavior, age prediction model, parental controls, and resources for families. Our approach is informed by experts, including the Expert Council on Well-Being and AI and the Global Physician Network.
In Europe, we are working with governments and institutions through Education for Countries to deploy AI responsibly in education, and with Estonia’s University of Tartu to support research measuring AI learning outcomes. We were a founding member of the Beneficial AI for Children coalition, and joined leaders at the Vatican in backing a declaration on children’s rights and dignity in AI.
Youth safety is ongoing work. We’re committed to maintaining strong teen protections and improving them over time to better support teens and families.