Announcing the OpenAI Safety Fellowship

Today we are announcing a call for applications to the OpenAI Safety Fellowship, a new program for external researchers, engineers, and practitioners to pursue rigorous, high-impact research on the safety and alignment of advanced AI systems. The program will run from September 14, 2026 through February 5, 2027.

We are looking for applicants interested in safety questions that matter for existing and future systems. Priority areas include safety evaluation, ethics, robustness, scalable mitigations, privacy-preserving safety methods, agentic oversight, and high-severity misuse domains, among others. We are especially interested in work that is empirically grounded, technically strong, and relevant to the broader research community.

Fellows will work closely with OpenAI mentors and engage with a cohort of peers. Workspace will be available in Berkeley alongside other fellows at Constellation, though fellows may also work remotely. Fellows are expected to produce a substantial research output by the end of the program, such as a paper, benchmark, or dataset. The fellowship includes a monthly stipend, compute support, and ongoing mentorship.

We welcome applicants from a range of backgrounds, including computer science, social science, cybersecurity, privacy, HCI, and related fields. We prioritize research ability, technical judgment, and execution over specific credentials. Letters of reference will be required.

For additional information regarding eligibility, compensation, and benefits, see the application form. Fellows will receive API credits and other resources as appropriate, but will not have internal system access.

Applications are now open here and will close May 3. We will review all submissions and notify successful applicants by July 25. For any questions about the application process, please contact [email protected].
